WO2017167587A1 - Non-invasive retinal imaging - Google Patents

Non-invasive retinal imaging

Info

Publication number
WO2017167587A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
camera
fundus
processors
planar surface
Prior art date
Application number
PCT/EP2017/056331
Other languages
English (en)
Inventor
Frederik Jan De Bruijn
Gerhardus Wilhelmus Lucassen
Igor Wilhelmus Franciscus Paulussen
Original Assignee
Koninklijke Philips N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips N.V.
Publication of WO2017167587A1

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0091 Fixation targets for viewing direction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F13/00 Illuminated signs; Luminous advertising
    • G09F13/04 Signs, boards or panels, illuminated from behind the insignia
    • G09F13/12 Signs, boards or panels, illuminated from behind the insignia using a transparent mirror or other light reflecting surface transparent to transmitted light whereby a sign, symbol, picture or other is visible only when illuminated
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/18 Arrangement of plural eye-testing or -examining apparatus
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0004 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by the type of physiological signal transmitted
    • A61B5/0013 Medical image data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B5/0079 Devices for viewing the surface of the body, e.g. camera, magnifying lens using mirrors, i.e. for self-examination
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B5/02416 Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/1455 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A61B5/14551 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases
    • A61B5/14555 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases specially adapted for the eye fundus

Definitions

  • the present disclosure is directed generally to health care. More particularly, various inventive methods and apparatus disclosed herein relate to unobtrusive and/or noninvasive fundus imaging.
  • the human eye is known to reflect physiological changes to, or conditions of, other parts of human body.
  • the light-sensitive, image-capturing tissue of the eye, referred to as the "retina," is known to exhibit a variety of characteristics that make it a valuable source of diagnostic information for eye-related diseases, such as glaucoma, as well as non-eye-related diseases such as diabetes.
  • a unique feature of the retina is that it offers a relatively obstruction-free view of various arteries and veins, without being inhibited by occluding and/or scattering layers such as the skin. This enables ophthalmologists to check for the presence of vascular anomalies such as aneurysms and even neovascularization.
  • the unobstructed view of the retinal blood vessels also offers an advantage in assessing the levels of various metabolic compounds carried by the vascular system, by exploiting differences in the characteristic absorption of visible light and of invisible ultraviolet or infrared light.
  • Ophthalmoscopes provide a direct view of a fragment of the retina under coaxial illumination from a built-in light source.
  • Ophthalmoscopes come in various forms and can be used for "direct” observation or for "indirect” observation by using a relay lens held separately near the eye.
  • Direct ophthalmoscopy e.g., with a classical ophthalmoscope or the newer "panoptic" ophthalmoscopes, is typically performed relatively close to the eye. In some instances, a portion of the ophthalmoscope may even rest softly on the orbital region of the patient's eye for support.
  • indirect ophthalmoscopy can be done from a distance but requires a handheld relay lens to be held close to the eye.
  • Ophthalmoscopes are typically used by trained professionals in medical settings, which means patients' eyes are seldom examined for retinal symptoms other than during the occasional eye exam for reading glasses or the Amsler test. Consequently, markers of various diseases that may be observable in patients' eyes often remain undetected. Moreover, current methods of retinal imaging (e.g., using an ophthalmoscope) are obtrusive and completely block the visual field of the examined eye. Thus, there is a need in the art for more frequent and less intrusive retinal monitoring.
  • a user may position themselves in front of a so-called "smart mirror," or another generally planar display (e.g., a television, computer monitor, a tablet computer, a smart phone, etc.).
  • the user may be prompted to focus her eyes on the display, e.g., by rendering and/or projecting one or more graphical elements on the display. This causes the user's focal plane to be aligned with a plane defined by the planar surface.
  • an imaging device such as a camera may be focused so that its focal plane also coincides with the plane defined by the planar surface.
  • the camera may be focused (e.g., using an external optical element) so that its focal plane is located behind the tablet or smart phone.
  • the user may be prompted to move her gaze to one or more locations on the planar display, so that various portions of the user's fundus are exposed to the camera's field of view.
  • the camera may capture one or more images of various portions of the user's fundus, and these images may be used collectively or individually for a variety of diagnostic purposes.
  • a system may include: a planar surface on which to render visual content to a user positioned at distance from the planar surface; a camera aimed towards the user; and one or more processors operably coupled with the camera.
  • the one or more processors may be configured to: identify a portion of the user's fundus to be targeted by the camera; calculate a target position on the planar surface that, when focused on by the user, causes the identified portion of the user's fundus to be within a field of view of the camera; render a graphical element at the target position on the planar surface; and while the graphical element is rendered at the target position, cause the camera to capture one or more images of the targeted portion of the user's fundus.
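  • As a rough illustration, the workflow attributed to the one or more processors might be orchestrated as in the sketch below (an assumption, not the patented implementation; `display`, `camera`, and `compute_display_position` are hypothetical stand-ins for real hardware interfaces and for the gaze geometry discussed with reference to Fig. 4):

```python
# Minimal sketch of the identify -> calculate -> render -> capture workflow.
# All names below are hypothetical placeholders, not part of the disclosure.

def capture_fundus_region(display, camera, compute_display_position,
                          target_region, n_frames=3):
    # Calculate where on the planar surface the user must look so that the
    # targeted fundus portion falls within the camera's field of view.
    x, y = compute_display_position(target_region)
    # Render a graphical element (clock, weather icon, game target, ...) there.
    display.render_graphical_element(position=(x, y))
    # While the element is rendered at the target position, capture images.
    return [camera.capture() for _ in range(n_frames)]
```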
  • the planar surface may be an electronic display.
  • the electronic display may be a touchscreen.
  • the electronic display may be a smart mirror display.
  • the system may include a projector operably coupled with the one or more processors, the planar surface may be a projection surface, and the one or more processors may cause the projector to project a rendition of the graphical element onto the projection surface.
  • the memory may store instructions that cause the one or more processors to: detect a lateral shift by the user relative to the planar surface; calculate, based on the detected lateral shift, an updated target position on the planar surface that, when focused on by the user, causes the targeted portion of the user's fundus to be within the field of view of the camera; and render the graphical element at the updated target position.
  • a focal plane of the camera may be coplanar with the planar surface. In various embodiments, a focal plane of the camera may be on an opposite side of the camera from the planar surface.
  • the system may include a smart phone, and the planar surface may be a touchscreen of the smart phone. In various versions, the camera may include a camera of the smart phone.
  • the graphical element may be a clock or weather indicator, or may portray a status of a personal care device or a personal health status of the user.
  • the portion of the user's fundus to be targeted by the camera may be identified based at least in part on a record of one or more previously-targeted portions of the user's fundus.
  • the portion of the user's fundus to be targeted by the camera may be identified based at least in part on a detected location of a pupil of the user within a field of view of the camera.
  • a light source may be operably coupled with the one or more processors, and the one or more processors may operate the light source to provide coaxial illumination towards the targeted portion of the user's fundus.
  • the system may include a semi-transparent mirror angled relative to the light source and camera to guide light emitted by the light source towards the targeted portion of the user's fundus.
  • the memory may store instructions that cause the one or more processors to: operate the light source and camera to capture two or more successive images of the targeted portion of the user's fundus that alternate between being coaxially illuminated and non-coaxially illuminated; and generate a composite image of the targeted portion of the user's fundus based on the two or more successive images.
  • the memory may store instructions that cause the one or more processors to subtract one of the two or more successive images that is non-coaxially illuminated from another of the two or more successive images that is coaxially illuminated.
  • the memory may store instructions that cause the one or more processors to: operate the light source to project a calibration light pattern onto the user's eye; detect a sharpness of the projected calibration light pattern from the user's eye; and cause the camera to capture one or more images of the user's fundus while the detected sharpness of the projected calibration light pattern satisfies a sharpness threshold.
  • the target position of the user's fundus may be a first target position, and the memory may store instructions that cause the one or more processors to: identify a second portion of the user's fundus to be targeted by the camera; calculate a second target position on the planar surface that, when focused on by the user, causes the second identified portion of the user's fundus to be within the field of view of the camera; render a graphical element at the second target position on the planar surface; while the graphical element is rendered at the second target position, cause the camera to capture one or more images of the second target position of the user's fundus; and stitch together the one or more images of the first target position of the user's fundus with the one or more images of the second target position of the user's fundus to generate one or more composite images of the user's fundus.
  • the memory may store instructions that cause the one or more processors to: cause the camera to capture one or more images of the user's skin simultaneously with capture of the one or more images of the targeted portion of the user's fundus; determine a momentary phase in a cardiac cycle of the user based on the captured one or more images of the user's skin; and cause the camera to capture one or more additional images of the target position of the user's fundus at a moment selected based at least in part on the determined momentary phase in the user's cardiac cycle.
  • the term "smart mirror” refers to any assembly that includes a mirrored surface on which one or more graphical elements may be rendered.
  • a two-way mirror may be placed in front of a display device so that graphics rendered on the display device are visible through the two-way mirror.
  • a "bathroom” television may be equipped with a reflective touchscreen that can be operated by a user to control content displayed on the screen.
  • Smart mirrors may be used to display a variety of content, such as weather information, emails, texts, movies, and so forth— any content that would typically be displayed on a computer or a smart phone may similarly be displayed on a smart mirror.
  • Fig. 1 schematically illustrates an example fundus imaging system configured with selected aspects of the present disclosure, in accordance with various embodiments.
  • Fig. 2 depicts an example method for unobtrusive fundus imaging in accordance with various aspects of the present disclosure.
  • Fig. 3 depicts an example imaging processing technique that may be employed in accordance with various aspects of the present disclosure.
  • Fig. 4 schematically depicts an example of how a system configured with selected aspects of the present disclosure may handle lateral user movement.
  • Fig. 5 schematically illustrates another example fundus imaging system configured with selected aspects of the present disclosure, in accordance with various embodiments.
  • Fig. 6 schematically illustrates another example fundus imaging system configured with selected aspects of the present disclosure, in accordance with various embodiments.
  • System 100 configured with selected aspects of the present disclosure is depicted schematically in relation to an eye 102 of a user (not depicted).
  • System 100 may include a variety of components coupled together via one or more wired or wireless data/electrical/communication pathways 104, such as one or more buses. These components may include, for instance, logic 106, memory 108, one or more speakers 109, a planar surface 110 (e.g., a display screen, television), one or more audio inputs 111, and one or more imaging devices such as a camera 112.
  • logic 106 may include one or more processors configured to execute instructions stored in memory 108. In other embodiments, logic 106 may come in other forms, such as a field-programmable gate array (“FPGA") or an application-specific integrated circuit ("ASIC"). In various embodiments, logic 106 may be communicatively coupled with one or more remote computing devices (not depicted) via one or more networks 114.
  • One or more networks 114 may include one or more local area networks, wide area networks (e.g., the Internet), one or more so-called “personal area networks” (e.g., Bluetooth), and so forth.
  • System 100 may also include other components which may or may not be operably coupled with logic 106.
  • system 100 includes a lens 115 positioned in front of camera 112, a light source 116, and a semi-transparent mirror 118 positioned adjacent the light source and in front of camera 112.
  • Semi-transparent mirror 118 may be angled to reflect light 120 (which may be visible, infrared, ultraviolet, etc.) emitted by light source 116 as coaxial illumination along a field of view 122 of camera 112.
  • light source 116 may be operably coupled with, and hence controllable by, logic 106.
  • lens 115 may be omitted, and refraction of eye 102 may be directly projected onto an image sensor of camera 112.
  • lens 115 may be a microlens array as described in U.S. Patent No. 8,934,005 to De Bruijn et al.
  • camera 112 may be tilted inward to capture a larger part of the volume in front of the planar surface 110 (e.g., the space in a bathroom in front of the smart mirror).
  • lens 115 may be corrective to ensure that a focal plane of camera 112 coincides with a plane 126 defined by the planar surface 110.
  • Such an optical configuration to tilt the focus plane is sometimes referred to as the "Scheimpflug principle.”
  • Planar surface 110 may take various forms that may be selected in order to cause a user to position themselves at a relatively predictable and/or fixed distance from planar surface 110.
  • planar surface 110 may take the form of a "smart mirror" that hangs, for instance, on a user's bathroom wall over the sink, and that is configured for rendition of graphical elements on a reflective surface facing the user. In that manner, the user may see, in addition to his or her own reflection, one or more graphical elements 124 (e.g., targets) on the mirror.
  • planar surface 110 may take the form of a display device (e.g., a computer monitor, flat-screen television, etc.) that lacks a reflective surface. For example, many office workers spend hours each day in front of a computer screen. These may present prime opportunities to obtain multiple images of users' eyes over any length of time.
  • planar surface 110 may be a touchscreen, e.g., of a tablet computer or smart phone. An example of such an embodiment is depicted in Fig. 5.
  • planar surface 110 may be a passive component such as a projection screen or simply a wall surface upon which one or more graphical elements may be projected. An example of such an embodiment is depicted in Fig. 6.
  • the user is positioned at some distance (e.g., more than several inches away) from planar surface 110.
  • This facilitates unobtrusive examination of the user's fundus, which can be done as a matter of routine, e.g., daily, weekly, etc.
  • traditional examination by a professional requires use of an ophthalmoscope, which either requires obtrusive contact with the user, or at the very least, requires the professional to hold a relay mirror at a particular position to obtain an image of the user's fundus.
  • planar surface 110 may define a plane 126 that may serve as a shared focal plane of eye 102 and camera 112.
  • camera 112 is adjusted so that it has a focal point 128 that lies on the plane 126.
  • eye 102 has been adjusted by the user so that its field of view 130 is focused on the graphical element 124, which also lies on plane 126. Consequently, both eye 102 and camera 112 share a common focal plane at 126.
  • a lens (not specifically indicated in Fig. 1) of eye 102 is properly adjusted so that field of view 122 of camera 112 is properly focused on, and is able to capture clear images of, a targeted posterior portion 132 of eye 102.
  • Point 134 in Fig. 1 may represent the optic disc.
  • system 100 may be configured to obtain, in an unobtrusive manner, one or more images of one or more selected portions of an interior (e.g., posterior) of eye 102. These one or more images may be used to diagnose and/or monitor various diseases, ailments, or other conditions of the user that are detectable based on one or more observable attributes of eye 102.
  • logic 106 may be configured to adjust a focus setting of camera 112 so that a focal plane of camera 112 coincides with planar surface 110 (e.g., with plane 126).
  • Logic 106 may be further configured to identify, at block 204, a portion 132 of the user's fundus to be targeted by camera 112. For example, if it is desired to determine whether the user has diabetes, then portions of the user's eyes likely to exhibit diabetic retinopathy may be targeted. If it is desired to examine aspects of the user's blood circulation, then one or more retinal blood vessels may be targeted.
  • the targeted retinal feature may be selected in various ways. In some embodiments, the user may select the retinal feature, e.g., in response to instructions from a doctor, by operating a computing device (or planar surface 110 itself if a touchscreen).
  • the user's doctor's office may have the ability to remotely instruct logic 106 to target a specific feature of the user's fundus.
  • the targeted retinal feature may be selected based on one or more attributes of that retinal feature detected during routine monitoring, or based on one or more attributes of other retinal features that may justify examination of the selected retinal feature.
  • a location of a retinal feature such as the user's pupil, e.g., within field of view 122 of camera 112 may be determined at block 206.
  • logic 106 may calculate a target position on planar surface 110 that, when focused on by the user, causes the targeted portion of the user's fundus to be within field of view 122 of camera 112. For example, logic 106 may calculate a position on planar surface 110 such that if the user gazes at that position, the targeted portion 132 of the user's fundus will be exposed to field of view 122 of camera 112.
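  • As a rough sketch of such a calculation, the following assumes a simplified one-dimensional geometry (camera at the origin, pupil at lateral offset x_P and distance z_P, display plane at depth z_T from the camera, as in Fig. 4) and takes as input the desired angle between the line of sight and the pupil-to-camera axis; the function and the small geometric model are assumptions, not the patent's own formula:

```python
import math

def display_target_position(gaze_angle_deg, pupil_offset_mm, pupil_distance_mm,
                            camera_to_plane_mm=0.0):
    """Lateral display coordinate at which to render the graphical element so
    that the user's line of sight encloses the requested angle with the
    pupil-to-camera axis (which selects the fundus region exposed to the
    camera). Horizontal axis only; the vertical axis follows the same relation."""
    theta = math.radians(gaze_angle_deg)
    # Angle of the pupil-to-camera line, measured from the display normal.
    angle_to_camera = math.atan2(-pupil_offset_mm, pupil_distance_mm)
    # The line of sight must enclose the angle theta with that line.
    angle_of_sight = theta + angle_to_camera
    # Intersect the line of sight with the display plane.
    return pupil_offset_mm + (pupil_distance_mm - camera_to_plane_mm) * math.tan(angle_of_sight)
```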
  • logic 106 may cause a graphical element (e.g., 124) to be rendered at the target position on planar surface 110. Meanwhile, at block 212, logic 106 may cause camera 112 to capture one or more images of the targeted portion 132 of the user's fundus.
  • a brief scanning procedure may be implemented to find a suitable retinal feature to serve as a starting point (or reference point).
  • a graphical element may be rendered on the planar surface such that the imaged area (i.e. the portion of the user's fundus captured within the camera's field of view) is on (or near) the optic disc.
  • an ophthalmologist using an ophthalmoscope may find the optic disc by detecting a blood vessel in the field of view, and then following the vascular bifurcations in opposite direction, to quickly trace back his/her way towards the optic disc.
  • a similar principle may be applied automatically by some embodiments of the present disclosure. For example, upon detection of a blood vessel by camera 112, logic 106 may change the position on planar surface 110 at which graphical element 124 is rendered, in order to quickly get the optic disc 134 into field of view 122 of camera 112.
  • graphical element 124 may take a variety of forms.
  • graphical element 124 may portray a target such as the "X" depicted in Fig. 1 that the user is overtly instructed to look at in order to obtain a proper reading of the user's eye. The user may be instructed to follow the target with her eyes using audio and/or visual output, such as an instruction rendered on planar surface 110 to "FOLLOW THE TARGET."
  • graphical element 124 may take other forms selected to covertly attract the user's gaze.
  • graphical element 124 may portray a clock (e.g., a drawing of a clock and/or an LCD readout), an animated character (e.g., a bug, a smiley face, etc.), a weather indicator and/or icon (e.g., cloudy, chance of rain, temperature, etc.), a status of a personal care device of the user (e.g., "your electric toothbrush has 10% battery power remaining," or an image of a battery with a corresponding portion filled in), and/or a personal health status of the user (e.g., the user's weight if she is currently or has recently stepped on a scale, the user's temperature, etc.).
  • where planar surface 110 is a touchscreen (which may be the case, for instance, where system 100 includes a tablet computer or smart phone, or where planar surface 110 is a smart mirror whose reflective surface is also a touchscreen), graphical element 124 may be portrayed as a user interface element such as a button or an actuable element of a video game.
  • logic 106 may cause graphical element 124 to be rendered at different locations, e.g., in a predetermined sequence, in order to expose different posterior portions of the user's eye to field of view 122 of camera 112.
  • the predetermined sequence may be selected, for instance, so that the resulting sequence of digital images may be stitched together to generate a composite image and/or otherwise used to make various calculations for various diagnoses.
  • logic 106 may cause graphical element 124 to be rendered to hover around a single position for some predetermined amount of time. This may enable camera 112 to obtain multiple images that slightly overlap, which may facilitate correlation and/or stitching of those images into a larger composite image that is useful for various purposes.
  • logic 106 may calculate a position on planar surface 110 at which graphical element 124 should be rendered based at least in part on a detected location of various retinal features of eye 102, such as the pupil, within field of view 122 (i.e. within a camera frame) of camera 112.
  • based on the detected location of the retinal feature (also referred to herein as a "reference retinal feature"), logic 106 may calculate a position on planar surface 110 that, when focused on by the user, causes a desired portion of the user's fundus to be exposed to field of view 122 of camera 112. In this manner, system 100 may operate as a "closed loop" system.
  • as the user's eye or position shifts, the reference retinal feature will be detected at a new location within the visible fundus area captured by the camera and used to recalculate a new position on planar surface 110 at which to render graphical element 124. This may in turn lead to camera 112 capturing an image stream in which the reference retinal feature appears at a relatively stable position across frames.
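  • A minimal sketch of one iteration of such a closed loop, assuming OpenCV template matching as the (hypothetical) detector of the reference retinal feature; the proportional gain and the choice to hold the feature near the frame centre are assumptions:

```python
import cv2
import numpy as np

def update_render_position(frame, reference_patch, current_xy, gain=0.5):
    """Locate the reference retinal feature in the latest camera frame and
    nudge the on-screen position of the graphical element so the feature
    stays at a stable location across frames."""
    result = cv2.matchTemplate(frame, reference_patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, best_xy = cv2.minMaxLoc(result)            # best match location (x, y)
    desired_xy = (frame.shape[1] // 2, frame.shape[0] // 2)
    error = np.subtract(best_xy, desired_xy)
    # In practice the pixel-to-display mapping would come from the geometry
    # discussed with reference to Fig. 4; here a simple proportional step is used.
    return tuple(np.asarray(current_xy, dtype=float) - gain * error)
```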
  • other retinal features may be used, such as the optic disc 134, vascular bifurcation, a specific artery or vein, and so forth.
  • the reference retinal feature may be selected to be relatively stable across frames, e.g., to facilitate time-resolved measurements such as a sequence of images illuminated at various wavelengths, and/or a sequence of images that depict a time-variant physiological process such as a user's pulse and/or related photoplethysmographic ("PPG") response.
  • logic 106 may render graphical element 124 at a sequence of locations that are each selected based at least in part on previous locations on planar surface 110 at which graphical element 124 was rendered, and/or based on images previously captured by camera 112. For example, suppose system 100 is configured to monitor for a particular condition by obtaining images of a particular portion of the user's fundus over time. Logic 106 may keep track, e.g., in memory 108, of the positions at which graphical element 124 has been rendered, and may select new positions for rendition of graphical element 124 to target different posterior portions of eye 102. Additionally or alternatively, logic 106 may examine images recently captured by camera 112, e.g., over the span of several days, a week, a month, etc., and may identify posterior portions of eye 102 that need additional imaging.
  • image processing may be applied, e.g., by logic 106 or by another computing component, to cause various features of the posterior portion of eye 102 to become clearer. For example, retinal arteries and veins may become more clearly visible after image processing.
  • Fig. 3 depicts one example method 300 of performing image processing on images captured by camera 112.
  • correction may be made for any static disturbances of camera 112.
  • Static disturbances may cause a spatial pattern of spurious pixel-value offsets that may be the same for every captured image, regardless of which portion of eye 102 is captured by camera 112.
  • Such correction may be based on a calculation of an average noise/glare image using, for example, one hundred consecutive images under active coaxial illumination of a non-reflective black surface.
  • a resulting image may combine the measurement of the following two imaging disturbances: a pixel value offset due to dark fixed-pattern noise of a complementary metal-oxide semiconductor ("CMOS") sensor employed as part of camera 112 (e.g., giving rise to a static pattern of colored vertical stripes); and a pixel value offset due to a glare of the coaxial illumination system due to internal reflections.
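  • A sketch of such a static correction, assuming the offset image is built once from roughly one hundred frames of a non-reflective black surface under active coaxial illumination, and is then subtracted from every subsequent capture:

```python
import numpy as np

def build_static_correction(black_surface_frames):
    # Average many frames of a non-reflective black surface; the result captures
    # both the dark fixed-pattern noise of the sensor and the static glare of
    # the coaxial illumination path.
    return np.mean(np.stack(black_surface_frames).astype(np.float32), axis=0)

def correct_static(frame, correction):
    # Remove the stored offset image from a captured fundus frame.
    return np.clip(frame.astype(np.float32) - correction, 0, 255).astype(np.uint8)
```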
  • a correction may be made for dynamic correlated-noise of the acquisition system.
  • Such dynamic correlated noise may cause spurious correlated-noise signals that differ for every captured image, regardless of what retinal features are captured in the image.
  • the correction may be based at least in part on the correction of so-called clamp noise, a phenomenon common to analogue television giving rise to a similar image-wise disturbance.
  • a correction may be made for dynamic uncorrelated-noise.
  • a correction may be made for glare caused by the image object.
  • features of the user's fundus may appear sharp when the user being tested focuses on the display plane (e.g., 126 in Fig. 1) that coincides with the camera focus plane.
  • other features of the user, such as the user's face, may appear out of focus. Due to strong defocus blur, anything in an image captured by camera 112 that is in the vicinity of the user's pupil may tend to bleed into the sharp image of the retina, potentially reducing image contrast.
  • logic 106 may operate light source 116 to provide (e.g., by way of semi-transparent mirror 118) alternating coaxial illumination towards the targeted portion 132 of the user's fundus.
  • logic 106 may operate light source 116 and camera 112 to capture two or more successive images of targeted portion 132 of the user's fundus that alternate between being coaxially illuminated and non-coaxially illuminated.
  • logic 106 may generate a composite image of targeted portion 132 of the user's fundus based on the two or more successive images. For example, in some embodiments, logic 106 may subtract one of the two or more successive images that is non-coaxially illuminated from another of the two or more successive images that is coaxially illuminated.
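  • A sketch of that subtraction, assuming 8-bit frames: the fundus is bright only in the coaxially illuminated frame, while ambient-lit facial structures and glare appear in both frames and largely cancel:

```python
import numpy as np

def fundus_only_image(coaxial_frame, non_coaxial_frame):
    # Subtract the non-coaxially illuminated frame from the coaxially
    # illuminated one; negative values (features present only without coaxial
    # light) are clipped away.
    diff = coaxial_frame.astype(np.int16) - non_coaxial_frame.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```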
  • the resulting image may clearly depict the desired retinal features without the surrounding features that are not of interest.
  • This clearly-depicted retinal feature may give rise to several benefits.
  • the clearly-depicted retinal feature may be a pupil that can be used for pupil location detection within a frame of camera 112.
  • the resulting image may also be relatively free of glare caused by facial structures in the vicinity of the pupil. This glare may be the cause of defocus blur, and so its removal may improve the contrast of the image.
  • captured images taken over time may allow for comparison with prior image captures. For example, changes of the same feature over time may be followed (e.g., to detect the gradual onset of diabetes).
  • Similar features (e.g., arteries, veins) may be considered members of a "class," and may be collected for a combined analysis.
  • Features belonging to multiple classes may also be collected and used for various calculations. For example, in some embodiments, a determination may be made of the level of blood oxygenation of the user based on the specific absorption of HbO2 and Hb, respectively, using the statistical average of the vessels classified as "arteries" in relation to the statistical average of those classified as "veins."
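  • Purely as an illustration of such a calculation, the sketch below forms a ratio of mean artery and vein pixel values at two (hypothetical) wavelengths with distinct HbO2/Hb absorption; mapping the result to an oxygen-saturation percentage would require a calibration curve that the description does not specify:

```python
import numpy as np

def arteriovenous_ratio(artery_band1, artery_band2, vein_band1, vein_band2):
    """Ratio-of-ratios style index from pixels classified as arteries and veins,
    sampled in two spectral bands where HbO2 and Hb absorb differently."""
    artery_ratio = np.mean(artery_band1) / np.mean(artery_band2)
    vein_ratio = np.mean(vein_band1) / np.mean(vein_band2)
    # A larger artery/vein difference suggests a larger arteriovenous
    # saturation difference; this is an index, not a calibrated SO2 value.
    return artery_ratio / vein_ratio
```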
  • multiple images captured by camera 112 may be stitched together to generate a new composite image covering a wider retinal area.
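  • One way such stitching could be performed (an assumption; the description does not name a particular algorithm) is with OpenCV's stitcher in scan mode, which suits flat, overlapping mosaics:

```python
import cv2

def stitch_fundus_images(images):
    # 'images' is a list of overlapping fundus captures (NumPy arrays).
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, composite = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return composite
```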
  • captured images may be analyzed individually as needs vary.
  • logic 106 may be configured to perform various "calibration" operations to account for one or more observable parameters of the user. For example, in some embodiments, logic 106 may cause camera 112 to capture one or more images of the user's skin simultaneously with capture of one or more images of targeted portion 132 of the user's fundus. Logic 106 may then determine a momentary phase in a cardiac cycle of the user based on the captured one or more images of the user's skin (e.g., similar to a PPG signal). Logic 106 may then cause camera 112 to capture one or more additional images of the targeted portion 132 of the user's fundus at a moment selected based at least in part on the determined momentary phase in the user's cardiac cycle.
  • logic 106 may account for the momentary phase in the user's cardiac cycle when, for instance, logic 106 compares one or more attributes (e.g., vessel diameter) of a retinal feature with one or more thresholds. In this manner, any light absorption detected in the user's retinal arteries or veins may be corrected and/or calibrated to avoid spurious readings.
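  • As a rough sketch of the cardiac-phase estimate, the mean green-channel value of a skin region in each frame can serve as a PPG-like signal, with the momentary phase taken as the fraction of the most recent pulse interval that has elapsed; the simple peak-picking rule below is an assumption:

```python
import numpy as np

def momentary_cardiac_phase(skin_green_trace):
    """skin_green_trace: one mean green value per camera frame, most recent last.
    Returns a phase in [0, 1), where 0.0 corresponds to a pulse peak, or None
    if fewer than two pulse peaks have been observed."""
    signal = np.asarray(skin_green_trace, dtype=np.float32)
    signal = signal - signal.mean()
    threshold = signal.std()
    # Crude peak picking: local maxima more than one standard deviation above the mean.
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]
             and signal[i] > threshold]
    if len(peaks) < 2:
        return None
    period = peaks[-1] - peaks[-2]                 # frames per cardiac cycle
    since_peak = (len(signal) - 1) - peaks[-1]     # frames since the last peak
    return (since_peak % period) / period
```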
  • the aforementioned captured momentary cardiac phase, recorded in association with a captured fundus image, may be used to generate a new composite image sequence.
  • This new composite image sequence may cover a wider retinal area and collectively depict the effect of the blood flow during one single cardiac cycle.
  • there may be multiple captures of each portion of the fundus, each at a different cardiac phase.
  • the generation of each phase-specific image in the sequence may result from interpolation between two or more of the phase-specific component images captured at approximately that target phase.
  • logic 106 may be configured to determine whether the user has properly focused eye 102 on plane 126 before capturing images. This may be accomplished in various ways. In some embodiments, techniques described in U.S. Patent No. 8,934,005 to De Bruijn et al. may be employed. For example, logic 106 may operate light source 116 to project a calibration light pattern (e.g., near infrared, or "NIR") onto eye 102. Logic 106 may then detect a sharpness of the projected calibration light pattern from eye 102. Logic 106 may then cause camera 112 to capture one or more images of the user's fundus while the detected sharpness of the projected calibration light pattern satisfies a sharpness threshold.
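  • A sketch of such a focus check, assuming the variance of the Laplacian of the reflected calibration pattern as the sharpness score; the specific metric and threshold are assumptions, as the description only requires that a sharpness threshold be satisfied:

```python
import cv2

def pattern_is_sharp(eye_image_bgr, threshold=100.0):
    # Score sharpness of the reflected calibration pattern; capture fundus
    # images only while the score meets the threshold.
    gray = cv2.cvtColor(eye_image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= threshold
```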
  • system 100 may be configured to adjust various parameters in response to a determination that a user has shifted position.
  • logic 106 may be configured to detect a lateral shift by the user relative to planar surface 110.
  • logic 106 may calculate, based on the detected lateral shift, an updated target position on planar surface 110 that, when focused on by the user, causes targeted portion 132 of the user's fundus to be within field of view 122 of camera 112. Then, logic 106 may cause graphical element 124 to be rendered at the updated target position.
  • Fig. 4 is an overhead view of a user's eye 402 focusing on a target position T on a target plane 426 defined by a planar surface (not depicted in Fig. 4).
  • the point C represents a position of a camera
  • the point R represents a targeted posterior portion of eye 402 that is targeted by camera C.
  • the user's pupil is represented by the point P.
  • the distance between the camera C and the eye 402 is indicated by z_P.
  • the lateral offset of eye 402 from camera C is indicated by x_P.
  • the distance between camera C and the target plane 426 is indicated by z_T.
  • the lateral offset of the target T from camera C is indicated by x_T.
  • the viewing target T may be shifted to T' in order to cause eye 402 to correspondingly rotate.
  • the triangle spanned in Fig. 4 by R, S, and P should not change shape, which also means that the angle θ enclosed by the lines PT and PC should remain constant. This may be achieved, for instance, by moving the target T to a new position T', so that θ' remains the same as θ.
  • an equation such as the following may be used to calculate θ' (and hence, T').
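  • One plausible form of that relation, reconstructed from the quantities defined above (an assumption rather than the original equation), is

    θ = arctan((x_T - x_P) / (z_P - z_T)) + arctan(x_P / z_P),

    so that, after a lateral shift of the pupil from x_P to x_P', the target may be moved to

    x_T' = x_P' + (z_P - z_T) · tan(θ - arctan(x_P' / z_P)),

    which keeps θ' equal to θ.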
  • logic 106 may employ head tracking techniques to detect when the user has shifted. For example, the blurred appearance of the user's face within the field of view 122 of camera 112 may be analyzed in combination with a model of a human face, e.g., stored in memory 108. Or, in some embodiments, a second camera may be employed to capture additional images that can be used to detect head movement.
  • Fig. 5 depicts an alternative embodiment of a system 500 configured with components that, for the most part, are similar to those depicted in Fig. 1 (except numbered "5XX" instead of "1XX"). Accordingly, most of the components will not be described again, and many are omitted from Fig. 5 altogether for the sake of clarity.
  • system 500 differs from system 100 in at least one key aspect.
  • planar surface 510 takes the form of a tablet computing device or smart phone that includes a touchscreen 560, and the camera takes the form of a front-facing camera 512.
  • touchscreen 560 defines a plane 526 that, like plane 126 in Fig. 1, is meant to be used as a focal plane for an eye 502 of a user.
  • logic (not depicted in Fig. 5, see 106 in Fig. 1) of the tablet computer or smart phone may cause a graphical element 524 to be rendered on touchscreen 560 at a location selected to cause a targeted posterior portion 532 of eye 502 to be exposed to a field of view 522 of camera 512, much in the same way as was described previously.
  • camera 512 of Fig. 5 may be focused so that its focal plane 562 is artificially positioned behind the tablet computer/smart phone, e.g., on the opposite side of plane 526 as the user.
  • This may be a built-in focusing capability of the camera, or it may be achieved by placing a focus-correcting optical element in front of the camera.
  • This may neutralize optics of eye 502 by moving the focal plane backwards, in turn facilitating clear imaging of posterior portions of the user's fundus when the user is relatively close to the camera 512 and focusing at a distance that coincides with focal plane 562, as would be the case with Fig. 5.
  • while front-facing camera 512 is depicted in Fig. 5, this is not meant to be limiting. In some instances, a rear-facing camera may be more powerful than a front-facing camera. In some such instances, various types of optics (e.g., mirrors, casting video streams to other devices, etc.) may be employed to facilitate implementation of disclosed techniques with a rear-facing camera.
  • Fig. 6 depicts an alternative embodiment of a system 600 configured with components that, for the most part, are similar to those depicted in Fig. 1 (except numbered "6XX” instead of "1XX”). Accordingly, most of the components will not be described again.
  • system 600 differs from system 100 in at least one key aspect.
  • planar surface 610 takes the form of a projection surface such as a projection screen or a blank wall.
  • a projector 670 may be operably coupled with logic 606 so that logic 606 can perform operations similar to those described above, such as rendering graphical element 624 at various locations on the projection surface to cause eye 602 to look at those locations, exposing a targeted portion 632 of a posterior of a fundus of eye 602 to a field of view 622 of camera 612.
  • Images captured by the various cameras may be used by various medical personnel in various ways to diagnose and/or monitor various ailments and conditions.
  • in some embodiments, logic (e.g., 106, 506, 606) may transfer captured images and/or derived measurements to one or more remote computing devices, e.g., via one or more networks (e.g., 114). In some embodiments, this transfer of data may take place only when certain criteria are met, such as upon image-wise coverage of sufficient retinal area, upon collection of a sufficient number of measurements (e.g., in order to attain a desired statistical significance), upon reaching a desired signal-to-noise ratio (e.g., in order to sufficiently suppress acquisition noise), and/or when a characteristic retinal feature changes beyond given thresholds.
  • a system may include a second semi-transparent mirror, behind which a second camera or other image sensor may be placed.
  • This second image sensor may operate as a point-wise optical detector to perform a momentary integral measurement of what is in front of the camera.
  • the second image sensor may be configured to capture images at different focus distances, such as capturing a sharp image of the user's face and/or capturing a sharp image of the iris.
  • the appearance of the user's face and/or specific retinal features may be used as a means of personal identification, for instance with the aim to discriminate among multiple subjects using the same system.
  • multiple cameras may be positioned around a periphery of a planar surface (e.g., 110, 510, 610).
  • the multiple cameras may share the same focal plane.
  • One of the cameras, e.g., at the bottom edge of the planar surface, may capture images of the upper half of the user's ocular fundus.
  • Another camera positioned at the top edge of the planar surface may capture images of the bottom half of the user's fundus.
  • cameras flanking the planar surface on either side may capture images of respective sides of the user's ocular fundus.
  • Such an arrangement may facilitate capturing of images of the user's ocular fundus both to the left and right of the user's fovea.
  • inventive embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed.
  • inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein.
  • the phrase "at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
  • This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
  • At least one of A and B can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A system is described that may optionally include: a planar surface (110, 510, 610) on which visual content (124, 524, 624) is rendered to a user positioned at a distance from the planar surface; a camera (112, 512, 612); one or more processors (106, 506, 606); and memory (108, 508, 608) operably coupled with the one or more processors. The memory may store instructions that cause the one or more processors to: identify a portion (132) of the user's fundus to be targeted by the camera; calculate a target position on the planar surface that, when focused on by the user, causes the identified portion of the user's fundus to be within a field of view (122) of the camera; render a graphical element (124) at the target position; and, while the graphical element is rendered at the target position, cause the camera to capture an image of the targeted portion of the user's fundus.
PCT/EP2017/056331 2016-03-29 2017-03-17 Imagerie rétinienne non invasive WO2017167587A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP16162547.0 2016-03-29
EP16162547 2016-03-29

Publications (1)

Publication Number Publication Date
WO2017167587A1 true WO2017167587A1 (fr) 2017-10-05

Family

ID=55642268

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/056331 WO2017167587A1 (fr) 2016-03-29 2017-03-17 Imagerie rétinienne non invasive

Country Status (1)

Country Link
WO (1) WO2017167587A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2394569A1 (fr) * 2010-06-10 2011-12-14 Nidek Co., Ltd Appareil ophtalmique
US8934005B2 (en) 2011-07-14 2015-01-13 Koninklijke Philips N.V. System and method for remote measurement of optical focus
WO2016028877A1 (fr) * 2014-08-22 2016-02-25 Brien Holden Vision Diagnostics Systèmes, procédés et dispositifs de surveillance du mouvement de l'œil pour tester un champ visuel



Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17710754

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17710754

Country of ref document: EP

Kind code of ref document: A1