WO2022123237A1 - Vision aid device - Google Patents

Vision aid device

Info

Publication number
WO2022123237A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
vision aid
visual acuity
aid device
screen
Prior art date
Application number
PCT/GB2021/053205
Other languages
French (fr)
Inventor
Stanislav KARPENKO
Elodie DRAPERI
Piotr IMIELSKI
Original Assignee
Vision Technologies Ltd
Priority date
Filing date
Publication date
Application filed by Vision Technologies Ltd
Publication of WO2022123237A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted

Definitions

  • This invention relates to a vision aid device and, more particularly but not necessarily exclusively, to a vision aid for use by users having sight loss defined as ‘low vision’ by the World Health Organisation.
  • the World Health Organization defines ‘low vision’ as a visual acuity of less than 6/18.
  • Such low vision can be manifested in a number of different conditions.
  • For example, age-related macular degeneration (AMD) and Stargardt disease can cause central vision loss, albinism can affect the whole visual field, and glaucoma and optic neuritis can cause reduced contrast sensitivity. All of these, and other, disorders can result in a degradation of a patient’s vision to the extent that it qualifies as ‘low vision’ as defined by the WHO.
  • Head mounted visual aids for people with low or impaired vision have been used for hundreds of years, most commonly in the form of optics-based solutions such as spectacles.
  • wearable head-mounted devices such as virtual reality (VR) headsets and Augmented Reality (AR) glasses have become increasingly common, and some technical advances in this field of technology have yielded AR headsets specifically designed to enable users with low vision to utilise their residual sight by making image content accessible through (partially) functioning parts of the retina.
  • WO2019/232082 describes a hybrid see-through augmented reality (AR) device that comprises a head-mounted frame configured to be worn by a user, over their eyes, in the manner of a pair of glasses or goggles.
  • the device includes a camera disposed on the frame and configured to capture real-time video image data of the user’s environment, and a processor for processing the image data to produce a video stream which is displayed on a screen within the frame, replacing the central portion of the user’s field of view and blending with the peripheral portion of the user’s field of view, thereby enhancing (and expanding) the low vision user’s view of their environment.
  • the visual acuity of individual ‘low vision’ patients varies greatly, depending on many different factors, including age, eye health, the particular condition causing low vision, the degree to which a patient’s visual acuity is impaired, the part of the retina affected, and the degree to which the particular medical condition causing low vision has progressed, to name just some.
  • the visual acuity of a low vision patient can change and degrade over time. Therefore, in relation to a visual aid, a ‘one size fits all’ approach is not really appropriate or effective.
  • any vision loss can be indicative of progression of wet AMD which should be treated urgently by the administration of the above-mentioned anti-VEGF injection.
  • aspects of the present invention seek to address at least one or more of these issues.
  • a vision aid device for a low vision user comprising:
  • a light blocking head-mountable base unit having a distal end located in a user’s eyeline when mounted for use;
  • an image capture device arranged and configured to capture image data from a user’s environment, in use
  • a processor for receiving said image data and configured to apply image compensation thereto to generate enhanced image data and cause said enhanced image data to be displayed on the or each said screen;
  • a user input means such as a manual controller, joystick, voice command recognition means, etc.
  • the processor being further configured to generate, during use of said device as a vision aid, a visual acuity test flow associated with said user, obtain, from said test flow, a visual acuity measurement for said user, and transmit said visual acuity measurement to a remote server.
  • the processor may be configured to present a visual acuity test on the at least one screen for completion by a user using user input means (which may be the same as that used for image manipulation or additional user input means provided for the purpose) during use of the device as a vision aid.
  • the visual acuity test comprises briefly displaying a series of characters or shapes in turn on the screen, each displayed character being smaller and/or of lower contrast to the background than the previous character, until the user indicates that they can no longer see the character sufficiently clearly on the screen, and the processor is configured to calculate a visual acuity measurement from the last clearly seen displayed character.
  • in a first part of the test, each displayed character is smaller than the previous character and a user indicates the smallest character they can see. Then, in a second part of the test, the character is displayed at an increasingly lower contrast and a user indicates the lowest contrast at which they can see the character; this second part providing increased granularity and accuracy in respect of the visual acuity measurement.
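The two-part staircase described above can be sketched as follows. This is purely illustrative, not the patent's implementation: `show_character` and `user_saw_it` are hypothetical stand-ins for the device's screen output and user input means, and the size/contrast steps are example values.

```python
# Illustrative sketch of the two-part acuity staircase. The callbacks
# `show_character` and `user_saw_it` are hypothetical stand-ins for the
# device's display and user input means; step values are examples only.

def run_acuity_test(show_character, user_saw_it,
                    sizes=(100, 80, 64, 51, 41, 33),
                    contrasts=(1.0, 0.8, 0.6, 0.4, 0.2, 0.1)):
    """Return (smallest size seen, lowest contrast seen), or None."""
    last_size = None
    for size in sizes:                        # part 1: shrinking characters
        show_character(size=size, contrast=1.0)
        if not user_saw_it():
            break
        last_size = size
    if last_size is None:                     # nothing was seen at all
        return None
    last_contrast = 1.0
    for contrast in contrasts:                # part 2: fading contrast
        show_character(size=last_size, contrast=contrast)
        if not user_saw_it():
            break
        last_contrast = contrast
    return last_size, last_contrast
```

The returned pair is the raw material from which a visual acuity measurement would be calculated.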
  • Other validated tests, such as contrast sensitivity, the Amsler grid and/or the paracentral acuity test, may also be presented.
  • the processor is configured to display a screen calibration test flow for completion by the user prior to displaying the visual acuity test.
  • the processor is configured to display a test calibration flow for completion by the user prior to displaying the visual acuity test.
  • the controller is beneficially configured to generate image manipulation data in response to user control actions, and the processor is configured to utilise the image manipulation data to manipulate the displayed image data in substantially real time.
  • the processor is configured to monitor image data displayed on the or each said screen to detect changes or anomalies therein, and use a said detected change or anomaly to identify a change in visual acuity of the user.
  • the processor may be configured to transmit an alert to said remote server indicating that the visual acuity of the user may have changed.
  • the processor may be configured to generate a visual acuity measurement using a significant change in image manipulation data.
  • the processor may be configured to monitor image manipulation data over a period of time to determine a standard pattern for said user, and to identify a significant change therein if the image manipulation data deviates by more than a predetermined threshold from said standard pattern.
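One possible way to realise such a "standard pattern" with a deviation threshold is a running baseline over recent image manipulation settings. The class below is a hedged sketch only (the patent does not disclose an algorithm), using zoom level as the monitored parameter and illustrative window/threshold values:

```python
# Hedged sketch (not the patent's implementation) of detecting a
# significant deviation from a user's "standard pattern" of image
# manipulation, using zoom level as the monitored parameter.

from collections import deque

class ManipulationMonitor:
    def __init__(self, window=50, threshold=1.5, min_history=10):
        self.history = deque(maxlen=window)   # recent zoom settings
        self.threshold = threshold            # permitted deviation
        self.min_history = min_history        # samples needed for a baseline

    def record(self, zoom_level):
        """Log a setting; return True if it deviates significantly."""
        significant = False
        if len(self.history) >= self.min_history:
            baseline = sum(self.history) / len(self.history)
            significant = abs(zoom_level - baseline) > self.threshold
        self.history.append(zoom_level)
        return significant
```

A `True` result is the kind of event that could trigger the alert to the remote server described above.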
  • the processor may be configured to use computer vision techniques to identify activities being performed by a user during use of the vision aid, and record activity data representative thereof as a real-world evidence log for said user.
  • the vision aid may comprise a pair of screens located side by side at said distal end of said base unit, wherein enhanced image data is displayed on each of the screens simultaneously during use of the device as a vision aid.
  • the processor is configured to generate and present a visual acuity test flow for each of a user’s eyes separately.
  • the processor is configured to switch one of the screens to black (or switch it off) whilst a visual acuity test flow is being presented on the other screen.
  • a method performed in a communications system comprising at least one screen and a light blocking head-mountable base unit configured, in use, to be supported on a user’s head such that the or each screen is in their eyeline, the method comprising, under control of a processor of the communications system:
  • OCT (Optical Coherence Tomography) is a non-invasive diagnostic technique using an instrument designed to image a patient’s retina.
  • OCT relies on low coherence interferometry that can be used to generate a cross-sectional image of the patient’s macula.
  • This cross-sectional view of the macula shows if its layers are distorted and can be used to monitor whether distortion of the layers of the macula has increased or decreased relative to an earlier cross-sectional image to assess the impact of treatment of the macular degeneration.
  • Retinal imaging systems are typically large and expensive devices that often require a trained technician and/or a motorized control loop to properly align the optical axis of the OCT imaging system with the optical axis of the eye under examination. As a result, many such OCT systems are restricted to specialised eye clinics.
  • WO2019/092697 describes a home OCT imaging system which includes a viewer assembly on which a user can rest their head, with one eye directed into a specified region, the viewer assembly being configured such that when the user’s head is correctly engaged on its interface surface, the eye directed into the imaging system is correctly aligned with the optical axis.
  • Image data of the eye thus captured, is transmitted to a remote server for processing, and the respective images are then transmitted to the clinician under whose care the patient is, who can decide, based on the image data, what, if any, action is required.
  • the equipment is large and bulky, and it requires the user to remember to test their eyes regularly. It can take several minutes for the required image data to be collected and there is, therefore, a high probability that the patient may delay or miss scheduled testing events, thus risking further macular degeneration going undetected for a long period of time. Also, the patient needs to have the mental capacity not only to remember to perform a scheduled test, but also to position their head correctly so as to align their eyes with the optical axis of the imaging system.
  • retinal imaging is performed using a fundus camera that also requires a trained technician and/or a motorized control loop to align the optical axis with the patient’s pupil to be able to capture an image of the retina.
  • the visual acuity testing of the type described above is useful and, by incorporating it into a visual aid, there is a reduced likelihood that testing will be missed.
  • a vision aid device comprising a head-mountable base unit having a distal end located in a user’s eyeline when mounted for use, the base unit housing a retinal imaging system comprising: an imaging system comprising a light source and an image capture device located outside of the user’s eyeline, and an optical element arrangement configured, in use, to direct light from the light source into the user’s eyeline and to direct light from the user’s eyeline to the image capture device, the retinal imaging system further comprising a processor for receiving image data captured by said image capture device and transmitting said image data, or data representative thereof, to a remote server.
  • the retinal imaging system can, for example, be used to recreate key data available from OCT scans and a computer vision image analysis system, either in the processor or at the remote server, can be used to acquire objective information on retinal structure without the physical or financial expense of OCT equipment.
  • the processor may be configured to analyse said image data to identify one or more captured images in which the user’s pupil is aligned with the optical axis of the imaging system as defined by the optical element arrangement and transmit image data representative of the one or more identified images to said remote server.
  • the processor may be configured to identify when the user’s pupil is aligned with the optical axis of the imaging system as defined by the optical element arrangement and cause said image capture device to capture an image when such alignment is identified.
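Such alignment-gated capture might be sketched as follows. This is an illustrative sketch only: `find_pupil` is a hypothetical computer-vision routine returning the pupil centre in pixel coordinates (or None when no pupil is detected), and the tolerance value is an assumption.

```python
# Sketch of alignment-gated capture. `find_pupil` is a hypothetical
# computer-vision routine returning the pupil centre (x, y) in pixels,
# or None when no pupil is detected; the tolerance is illustrative.

def capture_when_aligned(preview_frames, find_pupil, optical_axis_px,
                         tolerance_px=5):
    """Return the first frame whose pupil centre lies within
    tolerance_px of the optical axis, or None if none qualifies."""
    ax, ay = optical_axis_px
    for frame in preview_frames:
        centre = find_pupil(frame)
        if centre is None:
            continue                          # pupil not found in this frame
        dx, dy = centre[0] - ax, centre[1] - ay
        if dx * dx + dy * dy <= tolerance_px ** 2:
            return frame                      # aligned: keep this image
    return None
```

The selected frame is the image that would then be transmitted to the remote server.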
  • the retinal imaging system may include means for generating a fixation target which, when a user fixes their gaze thereon, results in the user’s pupil being aligned with the optical axis of the imaging system as defined by the optical element arrangement.
  • the optical element arrangement may comprise at least two reflective elements for directing light from said light source into the user’s eyeline, in use, and directing light reflected from the user’s retina to the image capture device.
  • the optical element arrangement may comprise a pair of reflective elements or mirrors, a first mirror being configured to reflect light from the light source through substantially 90° to a second mirror, the second mirror being configured to reflect said light back through substantially 90° into the user’s eyeline, wherein the optical axis of the imaging system and the user’s eyeline are substantially parallel to, and laterally spaced apart from, each other.
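The stated geometry can be checked numerically: reflecting a direction vector in two 45° mirror normals turns the ray through 90° twice, leaving it parallel to, and laterally offset from, its original axis. The sketch below is a verification aid only, not part of the disclosed device.

```python
# Numerical check of the two-mirror fold: each 45-degree mirror turns the
# ray through 90 degrees, so after both reflections it travels parallel
# to, but laterally offset from, its original direction.

import math

def reflect(d, n):
    """Reflect 2D direction vector d about unit normal n."""
    dot = d[0] * n[0] + d[1] * n[1]
    return (d[0] - 2 * dot * n[0], d[1] - 2 * dot * n[1])

s = 1 / math.sqrt(2)
first_mirror_normal = (-s, s)     # first 45-degree mirror
second_mirror_normal = (s, -s)    # opposing 45-degree mirror

ray = (1.0, 0.0)                  # light leaving the source along +x
after_first = reflect(ray, first_mirror_normal)            # turned to +y
after_second = reflect(after_first, second_mirror_normal)  # back along +x
```

Because the final direction equals the initial one, the imaging axis and the user's eyeline remain parallel, spaced apart by the mirror separation.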
  • the light source may be configured to illuminate the user’s retina with light of a selected one or more wavelengths.
  • a filter may be provided for illuminating the user’s retina with light of a selected one or more wavelengths.
  • the imaging system may include a focusing lens configured, in use, to focus light received from the user’s retina, via said optical element arrangement, onto the image capture device.
  • the retinal imaging system may beneficially also include a condensing lens located in the user’s eyeline when the vision aid is in use.
  • the image capture device and light source are located outside of the user’s eyeline, to minimise disruption, and the optical axes of both the light source and the camera are accurately directed into the user’s eyeline by the optical element arrangement, which is such that the entire imaging system can be placed within a relatively small housing (perhaps of the order of 1 cm³).
  • the optical element arrangement may beneficially comprise a first input/output axis and a second input/output axis, the first and second input/output axes being substantially parallel to, and spaced apart from, each other.
  • the first input/output axis is beneficially aligned with the optical axis of the image capture device and the second input/output axis is beneficially aligned with a condensing lens within the user’s eyeline, such that light from the light source is directed from the first input/output axis to the second input/output axis and onto the condensing lens, and reflected light from the user’s retina is directed through the condensing lens to the second input/output axis and then to the first input/output axis back to the image capture device.
  • a focusing lens may be located within the first input/output axis, between the image capture device and the optical element arrangement.
  • Whilst the present invention is predominantly described below in relation to a vision aid device for a user having ‘low vision’ of a particular type, it is to be understood that a vision aid for users having low vision more generally, e.g. a pair of spectacles for a user that is myopic, could be provided with a retinal imaging system of the type described above, to provide continuous remote sight monitoring to detect changes in sight and enable personalised eye healthcare.
  • aspects of the invention can thus provide rapid, reliable and standardised objective imaging in a wide variety of patients with minimal interference in their daily lives, thereby ensuring that any changes in their eyesight requiring medical attention can be quickly and efficiently identified, and the invention is not necessarily intended to be limited to vision aids specifically with the type of ‘low vision’ defined above.
  • Figure 1 is a schematic perspective view of a vision aid device according to an exemplary embodiment of the invention
  • Figure 2 is a schematic cross-sectional side view of a smartphone for use in a vision aid device according to an exemplary embodiment of the invention
  • Figure 3 is a schematic block diagram illustrating an exemplary vision aid system including a communications server apparatus for receiving and processing patient data from an exemplary vision aid device;
  • Figure 3a is a schematic diagram illustrating the configuration of an exemplary vision aid system configured within, for example, a patient’s home, and including an access hub for connection to the patient’s home communications network hub;
  • Figure 4 illustrates the concept of focal points to aid understanding of where images are formed in the human eye
  • Figure 5a illustrates a Landolt ring of the type used in the Visual Acuity Measurement Standard, first published in the Italian Journal of Ophthalmology in 1988;
  • Figure 5b illustrates the Landolt ring including key parameters that need to be taken into account in setting testing conditions
  • Figure 6 is a flow chart illustrating schematically a visual acuity test flow triggered by an exemplary vision aid device
  • Figure 7 is a flow chart illustrating schematically a process flow for calibrating the screen of an exemplary vision aid device prior to presenting a visual acuity test
  • Figure 8 is a flow chart illustrating schematically a test calibration process flow for calibrating a visual acuity test of an exemplary vision aid device prior to presenting a visual acuity test;
  • Figure 9 is a flow chart illustrating schematically a visual acuity test process flow for presenting a visual acuity test on a screen of an exemplary vision aid device, receiving patient response data associated with the visual acuity test elements, and transmitting data thereof to a communications server apparatus for processing;
  • Figure 10 is a schematic block diagram illustrating a retinal imaging system for use in a vision aid according to an exemplary embodiment of the present invention
  • Figure 11 is a schematic perspective view of a vision aid device according to an exemplary embodiment of the present invention incorporating a retinal imaging system such as that illustrated schematically and described in relation to Figure 10.
  • a vision aid device comprises a virtual reality (VR) headset 1, an audio-visual device in the form of, for example, smartphone 2, and a Bluetooth® (or other wireless) remote control 9.
  • the headset 1 comprises a base unit 3 comprising an elliptical cylinder having an elasticated strap 4 at one end 3a to enable the headset to be held tightly in place over a user’s eyes, in use, such that the longitudinal axis of the elliptical cylinder extends generally horizontally relative to the user’s eyes and the edges of the cylinder form a seal to prevent light from entering the cylinder between the edges and the user’s face.
  • a rubber (or similarly resiliently flexible) sealing layer 8 is provided substantially all the way around the end 3a of the base unit, which, in use, abuts the user’s skin on their forehead, the outer edges of their eye sockets, the tops of their cheekbones and the bridge of their nose, such that the device can be worn on the face, in front of the eyes, with the weight being supported through the headset by a nose bridge and the strap 4, in a similar manner to a conventional generic virtual reality (VR) headset.
  • the longitudinal walls of the elliptical cylinder are substantially planar and parallel to each other, with the end walls being rounded.
  • the outer surface of the elliptical cylinder may be of any desired colour, but the inner walls thereof are beneficially matt black or another very dark colour.
  • the other end 3b of the base unit 3 is either substantially open, or has a transparent wall and a support tray 5 extends outwardly from a ‘lower’ edge of this end 3b of the base unit 3.
  • the support tray 5 is configured to receive and support the audio-visual device 2 in a landscape (or ‘horizontal’) orientation, such that its display screen abuts the open or transparent end of the base unit 3.
  • a support strap 6 is attached, at one end, to or near the centre of the support tray 5, with the other end having a connecting portion 6a configured to removably secure that end of the support strap 6 to the ‘top’ planar wall (when oriented for use) of the base unit 3, thereby holding the audio-visual device 2 in place.
  • the audio-visual device can thus be easily removed (e.g. for cleaning) by simply releasing the support strap 6 at the connecting portion end.
  • the audio-visual device 2 could be mounted inside the headset 1.
  • the cavity inside the base unit 3 may include an imaging wall (not shown) that spans the cavity and incorporates openings and/or one or more focusing devices, such as lenses or the like, corresponding to the position of the user’s eyes, when in use.
  • the imaging wall may be adjustable to enable the device to be adapted specifically to the user’s needs.
  • Means may also be provided for selectively blocking or closing one or other of the openings, so that the user views the screen with one eye or the other, for calibration and testing purposes, as will be described in more detail below. However, this can alternatively be achieved by the user closing one or other of their eyes when instructed, and the present invention is not necessarily intended to be limited in this regard.
  • the audio-visual device 2, hereinafter referred to as a smartphone, may be of any known type comprising a housing 10 having a display screen 12, within which are housed electronics modules 14 including a processor 16 and a wireless communications module 18 for enabling the device to be connected to a wireless communications network for communication with a communications server apparatus.
  • the smartphone (or tablet) 2 comprises a camera including a camera lens 20 mounted in the rear panel of the housing 10 (i.e. opposite the screen 12).
  • An embodiment of the present invention comprises a communications system comprising a communications server, a user communications device (i.e. the smartphone 2), and a service provider communications device (which may be used by a medical practitioner treating a low vision user).
  • Communications system 100 comprises communications server apparatus 102, smartphone (or tablet) 2 and practitioner communications device 106. These devices are connected in the communications network 108 (for example the Internet) through respective communications links 110, 112, 114 implementing, for example, internet communications protocols. Communications devices 2, 106 may be able to communicate through other communications networks, such as public switched telephone networks (PSTN networks), including mobile cellular communications networks, but these are omitted from Figure 3 for the sake of clarity.
  • Communications server apparatus 102 may be a single server as illustrated schematically in Figure 3, or have the functionality performed by the server apparatus 102 distributed across multiple server components.
  • communications server apparatus 102 may comprise a number of individual components including, but not limited to, one or more microprocessors 116, a memory 118 (e.g. a volatile memory such as a RAM) for the loading of executable instructions 120, the executable instructions defining the functionality the server apparatus 102 carries out under control of the processor 116.
  • Communications server apparatus 102 also comprises an input/output module 122 allowing the server to communicate over the communications network 108.
  • User interface 124 is provided for user control and may comprise, for example, computing peripheral devices such as display monitors, computer keyboards and the like.
  • Communications server apparatus 102 also comprises a database 126, the purpose of which will become readily apparent from the following discussion. In this embodiment, database 126 is part of the communications server apparatus 102, however, it should be appreciated that database 126 can be separated from communications server apparatus 102 and database 126 may be connected to the communications server apparatus 102 via communications network 108 or via another communications link (not shown).
  • the smartphone (or tablet) 2 may comprise a number of individual components including, but not limited to, one or more microprocessors 128, a memory 130 (e.g. a volatile memory such as a RAM) for the loading of executable instructions 132, the executable instructions defining the functionality the smartphone 2 carries out under control of the processor 128.
  • the smartphone or tablet 2 also comprises an input/output module 134 allowing the smartphone 2 to communicate over the communications network 108.
  • User interface 136 is provided for user control.
  • the user interface 136 may have a touch panel display as is prevalent in many smart phone and other handheld devices.
  • the smartphone or tablet 2 further includes a camera (not shown) configured, in use, to capture a real time video stream of the user’s environment (via the camera lens 20) and display the resultant moving images on the screen 12.
  • Practitioner communications device 106 may be, for example, a smartphone or tablet device with the same or a similar hardware architecture to that of the smartphone or tablet 2 described above.
  • Practitioner communications device 106 may comprise a number of individual components including, but not limited to, one or more microprocessors 138, a memory 140 (e.g. a volatile memory such as a RAM) for the loading of executable instructions 142, the executable instructions defining the functionality the practitioner communications device 106 carries out under control of the processor 138.
  • Practitioner communications device 106 also comprises an input/output module (which may be or include a transmitter module/receiver module) 144 allowing the practitioner communications device 106 to communicate over the communications network 108.
  • User interface 146 is provided for user control.
  • the user interface 146 may have a touch panel display as is prevalent in many smartphone and other handheld devices.
  • where the practitioner communications device is, say, a desktop or laptop computer, the user interface may have, for example, computing peripheral devices such as display monitors, computer keyboards and the like.
  • the smartphone or tablet 2 is configured to push data representative of the user (e.g. identity, current visual acuity, current image enhancement settings, activity patterns, etc) regularly to the communications server apparatus 102 over communications network 108.
  • the communications server apparatus 102 polls the smartphone or tablet 2 for information.
  • the data from the smartphone or tablet 2 (also referred to herein as ‘user data’) are communicated to the communications server apparatus 102 and at least some parameters or characteristics thereof are stored in relevant locations in the database 126 as historical data, such that diagnostic data can be generated in real time for use by the medical practitioner in monitoring the user’s visual acuity over time and to enable them to quickly determine if and/or when further treatment may become necessary.
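As an illustrative sketch only (the patent does not specify a data schema), the pushed user data might be serialised as JSON along these lines; every field name here is an assumption, not part of the disclosure:

```python
# Illustrative serialisation of the pushed 'user data'; the patent does
# not specify a schema, so every field name here is an assumption.

import json

def build_user_data_payload(user_id, visual_acuity,
                            enhancement_settings, activity_patterns):
    """Bundle the parameters the server stores as historical data."""
    return json.dumps({
        "user_id": user_id,                       # identity
        "visual_acuity": visual_acuity,           # latest measurement
        "enhancement_settings": enhancement_settings,
        "activity_patterns": activity_patterns,
    })
```

On the server side, each received payload would simply be parsed and its parameters appended to the user's historical record in database 126.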
  • when the headset 1 is mounted over a user’s eyes, and the smartphone or tablet 2 is mounted over the distal end of the base unit, the user can see whatever is displayed on the screen 12 of the smartphone 2, whether that be recorded or live stream video images, or image data of the user’s environment, captured by the camera. Whilst there may be additional screens and/or imaging components within the base unit of the headset, depending on the particular form of sight loss with which the user has been diagnosed, the principal function of the base unit is to block out ambient light from the user’s field of view and to mount the smartphone 2 where the user can see its screen through the cavity defined within the base unit 3.
  • a system including a vision aid device is illustrated schematically as comprising a vision aid device 300 and an access hub 302, to which the vision aid device is configured to be connected wirelessly (e.g. by Bluetooth® connectivity or similar short range wireless communications technology).
  • the access hub 302 is configured to connect (by a hard wire 303 or by short range wireless communications technology) to the user’s WiFi router 304.
  • the vision aid device (and, more particularly, the smartphone or tablet 2) can be programmed and provided with a range of image enhancement options to accommodate users with a range of sight loss conditions. These image enhancements can be applied in real time to the images being presented to the user. Although these images could be any moving or still images obtained from a variety of different sources, for the purposes of the remainder of this description, the images will be referred to as a live feed (of the user’s environment) captured by the camera in the smartphone or tablet 2 via the camera lens 20 located at the ‘rear’ of the smartphone or tablet 2 (when it is oriented and mounted in the headset for use).
  • the image enhancement options include, but are not necessarily limited to:
  • the device may further include a range of image manipulation options, such as (but not necessarily limited to):
  • the device can be customised with respect to overall brightness, intensity of blue light filter, view adjustment (alignment of image location with a user’s eyes to correct for double vision if present), and audio volume.
  • An audio function may be provided to indicate, for example, battery status and may even be configured to provide an auditory tutorial of basic functions of the device and how they can be used.
  • a vision aid device can be configured to improve the visual acuity of patients diagnosed with low vision to a level that enables engagement in activities associated with day-to-day life.
  • the device offers a range of image enhancement options (such as zoom and contrast enhancement) that can enable even registered blind users to utilise their residual sight by making image content accessible through (partially) functioning parts of the retina.
  • image enhancement options such as zoom and contrast enhancement
  • the device can be used to improve both functional near- and far distance acuity and facilitates (primarily) stationary activities such as reading, watching television, recognising people’s faces, watching presentations or whiteboards, engaging in a large variety of hobbies (such as painting, playing music, handicrafts, museum and art gallery visits, attendance at sporting events, theatre or cinema).
  • the image enhancement options can be selected and pre-set to suit the patient’s diagnosis, whereas the image manipulation options can be adjusted at will by the user (using the remote control 9).
  • the user’s engagement with these image manipulation options can be activity dependent, but changes in the degree and extent of their engagement with these options over time can be indicative of changes in the user’s visual acuity and, in many cases, be representative of a need for the medical practitioner to review the patient’s diagnosis and change the image enhancement options and/or offer treatment such as the anti-VEGF injection described above. This is discussed in more detail below.
  • the user can use the vision aid to enhance their vision during normal day to day life, adjusting the images as required using the image manipulation options, via the remote control.
  • the wireless connectivity of the vision aid device described above enables user data to be periodically pushed or uploaded to the communications server apparatus 102 for storage in the database 126 and access by the practitioner communications device 106.
  • the user data may take a number of different forms, depending on the test(s) used for monitoring a user’s visual acuity. Irrespective of the test(s) so used, a significant technical advantage of aspects of the invention is that the test data is obtained whilst the user is wearing the vision aid device and performing day to day tasks. There is no requirement for them to remember to perform any tests or tasks on a regular basis, nor is there any need for regular (routine) visits to the medical practitioner to undergo testing.
  • the vision aid device can be adapted to perform continuous monitoring of the user’s visual acuity (using, for example, changes in their engagement, over time, with the image manipulation options) and/or periodically (using, for example, the regular presentation of digital adaptations of specially designed visual acuity tests which require the user to provide feedback based on their perception of images displayed on the smartphone or tablet 2 during the course of their day to day lives).
  • the user data thus collected can be processed by the communications server apparatus 102 so as to generate visual acuity data, and the communications server apparatus uses identity data received with the user data to map the visual acuity data to a specific patient and store it in an appropriately protected patient record location in the database 126.
  • the protected patient record includes data representative of the medical practitioner treating a particular patient and security data and protocols that enable only that medical practitioner to access or receive the visual acuity data collected or captured for a patient.
  • the visual acuity test may comprise a short, dedicated sight test that is periodically presented to a user on the screen of the smartphone or tablet 2 whilst they are using the vision aid device.
  • the test would effectively “interrupt” their current activity and request completion before the user can continue to use the vision aid.
  • a ‘snooze’ function may be provided to allow the user to temporarily delay performing the test for a short period of time, for example, one hour, after which the test would ‘pop up’ again and request completion.
  • the visual acuity test may be based on a measurement of the patient’s ability to recognise black, high contrast optotypes on a white background.
  • the optotypes may be Landolt rings, the stroke width of which may be one-fifth of the outer diameter of the ring. The borders of the Landolt ring are parallel and there are no serifs.
  • the present invention is not intended to be limited in this regard. The following technical adaptations are advantageous in order to accurately present and perform this test on the screen of the smartphone or tablet 2 of the vision aid device.
  • the screen resolution of the smartphone 2 may be 1440 x 2960 pixels, which provides an aspect ratio of 18.5:9 (~551 ppi density), equating to 15 pixels per mm (x, y).
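Working from the screen specification above, the on-screen pixel size of an optotype subtending a given visual angle can be sketched as follows. The 15 px/mm figure is taken from the specification; the 1 m effective viewing distance is an assumption for illustration only (headset optics typically place the virtual image well beyond the physical screen).

```python
import math

PX_PER_MM = 15.0  # from the screen specification above (assumed uniform in x and y)

def optotype_px(view_dist_mm: float, arcmin: float = 5.0) -> float:
    """Pixels needed for an optotype subtending `arcmin` minutes of arc
    at a given (effective) viewing distance in millimetres."""
    theta = math.radians(arcmin / 60.0)            # visual angle in radians
    size_mm = 2.0 * view_dist_mm * math.tan(theta / 2.0)
    return size_mm * PX_PER_MM

# e.g. a standard 5-arcmin Landolt ring at an assumed 1 m effective distance
px = optotype_px(1000.0)   # roughly 22 pixels
```

The gap of a Landolt ring is one-fifth of its outer diameter, so the 1-arcmin gap tested at the 6/6 threshold would occupy about one-fifth of this figure.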
  • the Gaussian form of the lens equation is graphically illustrated to aid understanding.
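For reference, the Gaussian (thin lens) form referred to here relates the object distance $s_o$, the image distance $s_i$ and the focal length $f$ of the lens, with the transverse magnification $m$ following directly:

```latex
\frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}, \qquad m = -\frac{s_i}{s_o}
```

This is the relationship that governs where the headset optics place the virtual image of the screen relative to the user's eye.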
  • Figure 5a of the drawings illustrates a Landolt ring of the type used in the Visual Acuity Measurement Standard, first published in the Italian Journal of Ophthalmology in 1988, and Figure 5b illustrates the Landolt ring including key parameters that need to be taken into account in setting testing conditions.
  • the processor of the smartphone or tablet 2 is configured, in use to display live images captured by the camera and enhanced by one or more of the image enhancement options described above (which are pre-set according to the specific type and degree of sight loss they suffer), allowing the user to manipulate the images in real time (using the remote control) according to their needs and the specific activity they are partaking in.
  • the view presented to the user is essentially two screens, one for each eye (and each at a predetermined distance from the respective eye), each screen displaying an image, as will be understood by a person skilled in the art. This may be two separate screens, or a single screen split into two.
  • a dedicated visual acuity test event is periodically presented to the user on the screen of the smartphone or tablet 2, i.e.
  • the dedicated visual acuity test is based on a measurement of the patient’s ability to recognise black, high contrast optotypes on a white background, as described above.
  • the test starts by determining which eye to test first.
  • the smartphone or tablet 2 is configured to generate verbal instructions to guide the user through the test, although in alternative embodiments, this may be done by way of instructions displayed on the screen, and the present invention is not necessarily intended to be limited in this regard.
  • the user is prompted (by verbal audio instruction and/or on the screen) to indicate (by voice command or via the remote control) which of their eyes they see best with.
  • the test will start with the other eye, i.e. that in which their sight is worst. Whilst one eye is being tested, the screen for the other eye is switched to black or switched off.
  • Referring to Figure 6, a flow diagram illustrating the testing process is shown generally.
  • a calibration process is necessary. First, all previous calibration data is reset to zero [Reset Results]. The goggles are then calibrated (step 601) to ensure that the test images appear substantially centrally on the screen. Next, the test itself is calibrated according to the patient’s current visual acuity (step 602), and only then is the test performed for each eye separately (step 603).
  • the screen of the smartphone or tablet 2 is initially all black (step 701).
  • two circles, one green and one blue, are displayed on the screen.
  • the patient can use the remote control (or a joystick, for example) to move the circles.
  • moving the joystick left causes the circles to move closer together (steps 703a and 703b)
  • moving the joystick right causes the circles to move further apart (steps 704a and 704b).
  • the aim is to move the circles until they are perceived as aligned by the patient.
  • the patient confirms the relative positions, e.g.
  • this screen/goggle calibration is performed using a grey circle displayed on the screen, first for one eye and then the other, with the patient being asked to manipulate the circle to the centre of the screen, and the settings for each eye are stored as before.
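The two-circle alignment calibration above (steps 701 to 704) can be sketched as a simple event loop; the step size, starting offset and event names below are hypothetical.

```python
def calibrate_alignment(inputs, step_px: int = 4, start_offset_px: int = 40) -> int:
    """Return the horizontal offset (px) at which the patient perceives the
    green and blue circles as aligned. `inputs` is an iterable of joystick
    events: 'left' (move closer), 'right' (move apart) or 'confirm'."""
    offset = start_offset_px
    for event in inputs:
        if event == "left":
            offset = max(0, offset - step_px)   # steps 703a/703b: circles move closer
        elif event == "right":
            offset += step_px                   # steps 704a/704b: circles move apart
        elif event == "confirm":
            break                               # patient confirms the relative positions
    return offset
```

The confirmed offset would then be stored per eye, as described above, and applied to all subsequently displayed test images.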
  • the screen of the smartphone or tablet first starts black as before (step 801 ).
  • the patient is shown sets of Landolt rings with gaps at the top, bottom, left and right, and the intention is to determine the smallest completely visible chart (i.e. the smallest set of Landolt circles on which the patient can see the gaps).
  • a first set of circles is displayed on the screen and the patient is asked (on the screen or by audio prompt) at step 803, whether or not they can see all of the gaps in the displayed circles. If yes, the size is stored (e.g.
  • Following step 804, the process moves to step 805, where the size of the circles is reduced and a smaller set of circles is displayed (step 802).
  • This loop is repeated until the patient enters a negative response, indicating that they cannot see all of the gaps in the circles currently being displayed.
  • the process exits the above-described loop and the last visible circle size is memorised and/or data representative of the circle size is transmitted to the communications server apparatus 102 and stored in the database 126.
  • this process may be performed for each eye separately (see steps 806, 807) to determine and store the smallest respective circle set for each eye.
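The descending-size loop of steps 802 to 805 amounts to a simple staircase; a hedged sketch follows, in which `can_see_all_gaps` stands in for the patient's yes/no response and is purely illustrative.

```python
def smallest_visible_size(can_see_all_gaps, sizes):
    """Walk `sizes` from largest to smallest (steps 802-805) and return the
    smallest size at which the patient reports seeing every gap, or None if
    even the largest set is not fully visible."""
    last_visible = None
    for size in sorted(sizes, reverse=True):
        if can_see_all_gaps(size):
            last_visible = size          # step 804: store this size as visible
        else:
            break                        # negative response: exit the loop
    return last_visible
```

Run once per eye (steps 806, 807), the returned size is what would be memorised locally and/or transmitted to the communications server apparatus 102.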
  • the visual acuity test process can start.
  • the test starts (at step 901) with the patient’s ‘worst’ eye by asking the patient to close their other eye, such that they are viewing the screen on the smartphone or tablet 2 using only the eye under test.
  • the Landolt circles can be used, whereby differently sized sets of such circles (i.e. each set including Landolt circles of the same size but with the gap in a different place) are displayed, one circle at a time for a short period of time, and the patient is asked to indicate where the gap is in each case.
  • In this embodiment, however, a high contrast letter ‘E’ is instead used.
  • a series of high contrast E’s is presented, each one in the series being slightly smaller than the one before, until the patient indicates that they can no longer see the last-presented E. From this, the last size of E the patient could see is used for the second part of the test, wherein the contrast of the E against the background gets progressively less until the patient can no longer see the E.
  • the smallest size of E that can be presented on the screen is limited by pixel size (and, therefore, screen resolution), and the degree to which each E can be reduced in size is similarly limited.
  • the second part of the test is designed to provide further granularity and accuracy in relation to the visual acuity measurement, as follows.
  • the first E (greatest contrast) is displayed on the screen of the smartphone/tablet 2.
  • the E is displayed at one of four positions relative to the centre of the screen, namely ‘top’, ‘bottom’, ‘left’ and ‘right’, and the patient provides an indication that they can see it (in this case, by pushing the joystick in the direction corresponding to the position at which they see the displayed ‘E’). If the patient pushes the joystick ‘up’ (at step 903), the response is detected and the result (correct/incorrect) is recorded (at step 904), and the process is repeated with the next ‘E’.
  • If not, but the patient instead pushes the joystick ‘down’ (at step 905), the response is detected and the result (correct/incorrect) is recorded (at step 906), and the process is repeated with the next ‘E’. If not, but the patient instead pushes the joystick ‘left’ (at step 907), the response is detected and the result (correct/incorrect) is recorded (at step 908), and the process is repeated with the next ‘E’. If not, but the patient instead pushes the joystick ‘right’ (at step 909), the response is detected and the result (correct/incorrect) is recorded (at step 910). Finally, if no response is received after a predetermined period of time, an ‘incorrect’ response is recorded and the process moves on to the next ‘E’. The process monitors (at step 911) whether or not the joystick has been returned to centre and if, after a predetermined period of time, it still has not been returned to centre, the process moves on anyway to the next ‘E’.
  • This process is repeated for each of a number of E’s with the same background contrast but in different respective positions on the screen, and then moves on to sets of E’s with progressively lower contrast, until the test is complete. Then, the process returns to step 901 and is repeated for the other eye. All of the results are collated by the communications server apparatus 102, and a visual acuity measurement is calculated therefrom by the microprocessor 116, and the measurement is stored in the database 126 against patient identity data and optionally with data representative of the date/time at which the visual acuity test was performed.
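The per-trial response handling of steps 902 to 911 reduces to: read a (possibly absent) joystick direction, compare it with the true position of the ‘E’, and record the result. The sketch below is illustrative only; `read_joystick` is a hypothetical stand-in for the remote control interface.

```python
def run_trial(true_direction: str, read_joystick, timeout_s: float = 3.0) -> bool:
    """Record one trial. `read_joystick(timeout_s)` returns 'up', 'down',
    'left', 'right', or None if no input arrives before the timeout."""
    response = read_joystick(timeout_s)
    if response is None:
        return False                      # no response in time: record 'incorrect'
    return response == true_direction     # steps 904/906/908/910: correct/incorrect

def run_contrast_series(trials, read_joystick):
    """Run a list of (direction, contrast) trials, e.g. progressively lower
    contrast, and collate (contrast, correct?) results for the server."""
    return [(contrast, run_trial(direction, read_joystick))
            for direction, contrast in trials]
```

The collated list is the kind of raw result set that would be transmitted to the communications server apparatus 102 for conversion into a visual acuity measurement.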
  • This visual acuity test data is accessible by the patient’s medical practitioner via the practitioner communications device 106, and may also be displayed to the user on the screen of the smartphone 2 after the test has been completed.
  • the above-described low contrast visual acuity test may be preceded by a high contrast test in which a set of E’s with the same (high) contrast are presented, each E so presented becoming progressively smaller, until the user indicates that they can no longer see an ‘E’.
  • the smallest size of ‘E’ that the user could see is then used for the low contrast acuity test described above to provide further granularity in relation to the final visual acuity measurement.
  • the conversion chart below represents the manner in which the user’s input in respect of the visual acuity tests described above can be mapped to determine their visual acuity measurement.
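The chart itself is not reproduced here, but the underlying mapping is standard: visual acuity in logMAR is the base-10 logarithm of the smallest resolvable detail expressed in minutes of arc, and the Snellen fraction follows directly. A sketch (the function names are illustrative only):

```python
import math

def logmar_from_gap_arcmin(gap_arcmin: float) -> float:
    """logMAR is log10 of the smallest resolvable gap in minutes of arc;
    a 1-arcmin gap (6/6 vision) maps to logMAR 0.0."""
    return math.log10(gap_arcmin)

def snellen_6m(gap_arcmin: float) -> str:
    """Equivalent 6 m Snellen fraction (denominator rounded)."""
    return f"6/{round(6 * gap_arcmin)}"
```

Note that a 3-arcmin threshold corresponds to 6/18, the WHO boundary for ‘low vision’ referred to in the Background section.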
  • any one of a number of different types of set visual acuity tests could be periodically presented to a patient whilst using the vision aid of the present invention.
  • the periodicity of testing may be predetermined or may even be triggered by the medical practitioner treating a patient by transmitting a trigger signal to the smartphone/tablet 2 of the vision aid device from the practitioner communications device when a visual acuity test is due to be performed.
  • This trigger signal could be automatically generated from the patient’s electronic medical record, or it may be triggered manually.
  • the visual acuity test could be triggered by significant changes in their interaction with the image manipulation options referenced above. For example, if the user’s use (i.e. real time manipulation) of the zoom or digital magnification function of the image controls exceeds some predetermined level, a visual acuity test could be triggered, and the resultant visual acuity measurement transmitted to the practitioner communications device, to alert the medical practitioner that the patient’s visual acuity might be decreasing and that treatment may be required.
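One way such a usage-based trigger might be realised is to watch a rolling window of zoom interactions and fire when the average exceeds a limit; the threshold, window size and function name below are all hypothetical.

```python
def should_trigger_test(zoom_log, threshold: float = 2.5, window: int = 50) -> bool:
    """Trigger a visual acuity test if the mean zoom level over the most
    recent `window` interactions exceeds `threshold` (both values assumed)."""
    recent = zoom_log[-window:]
    return bool(recent) and sum(recent) / len(recent) > threshold
```

On a positive result, the device would run the test flow described above and transmit the measurement to the practitioner communications device.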
  • a retinal imaging system may be provided in addition to, or instead of, the visual acuity test flow described above, in order to enable remote monitoring of a user’s visual acuity and any changes in their vision.
  • a retinal imaging system 100 may be mounted on a pair of spectacles 101 (or other head-mounted vision aid device).
  • the imaging system is beneficially mounted outside of the user’s ‘normal’ eyeline.
  • the retinal imaging system 100 comprises a light source 102, for example, a white LED or the like, and an image capture device 103, such as a charge coupled device (CCD) array or the like.
  • a processor 104 incorporated with the light source 102 generates control signals for the light source 102
  • a processor 105 incorporated with the image capture device 103 generates control signals for the image capture device 103.
  • the light source may include a filter (not shown) for enabling the light source 102 to emit light at one or more different wavelengths (e.g. infrared, green, etc).
  • the operation of the filter is controlled by the processor 104.
  • the image capture device may receive control signals triggering an image capture event, according to some predefined schedule. It may, in some embodiments, also receive control signals triggering an image capture event when a user’s pupil is determined to be aligned with the optical axis of the imaging system.
  • a focusing lens 106 is provided at the input to the image capture device 103.
  • a pair of mirrors 107, 108 create a ‘folding’ effect for light 109 from the light source and reflected light 110 returned to the image capture device 103.
  • a first elongate mirror 107 is located at an obtuse angle (e.g. ~135°) to the optical axis of the image capture device 103 and arranged and configured to receive light 109 from the light source 102 and reflect it through 90° toward a second elongate mirror 108.
  • the second elongate mirror 108 is located a lateral distance from, but substantially in line with, the first mirror 107 and arranged at an acute angle (e.g.
  • An example image 113 of a user’s retina, as seen by the condensing lens 111 when the user’s pupil is aligned with the optical axis of the condensing lens 111, is provided in Figure 10 for illustrative purposes only. It will be apparent to a person skilled in the art, from the foregoing description, that modifications and variations can be made to the described embodiments without departing from the scope of the invention as defined by the appended claims.

Abstract

A vision aid device, comprising a head-mountable base unit (3, 101) having a distal end located in a user's eyeline when mounted for use, the base unit housing a retinal imaging system (100) comprising: an imaging system comprising a light source (102) and an image capture device (103) located outside of the user's eyeline and an optical element arrangement (107, 108) configured, in use, to direct light from the light source (102) into the user's eyeline and for directing light from the user's eyeline to the image capture device (103), the retinal imaging system further comprising a processor (105) for receiving image data captured by said image capture device (103) and transmitting said image data, or data representative thereof, to a remote server.

Description

VISION AID DEVICE
Field of the Invention
This invention relates to a vision aid device and, more particularly but not necessarily exclusively, to a vision aid for use by users having sight loss defined as ‘low vision’ by the World Health Organisation.
Background of the Invention
The World Health Organisation (WHO) defines ‘low vision’ as a visual acuity of less than 6/18. Such low vision can be manifested in a number of different conditions. For example, age-related macular degeneration (AMD) and Stargardt disease can cause central vision loss, whereas albinism can affect the whole visual field, and glaucoma and optic neuritis can cause reduced contrast sensitivity. All of these, and other, disorders can result in a degradation of a patient’s vision to the extent that it qualifies as ‘low vision’ as defined by the WHO.
Patients diagnosed with such low vision often have difficulty in engaging with many different activities that would be considered to be a normal part of daily life, such as reading and watching the television, and they can even have difficulty in recognising people’s faces. Clearly, this can have a severe adverse effect on the quality of their lives.
Head mounted visual aids for people with low or impaired vision have been used for hundreds of years, most commonly in the form of optics-based solutions such as spectacles. In recent years, wearable head-mounted devices, such as virtual reality (VR) headsets and Augmented Reality (AR) glasses have become increasingly common, and some technical advances in this field of technology have yielded AR headsets specifically designed to enable users with low vision to utilise their residual sight by making image content accessible through (partially) functioning parts of the retina.
WO2019/232082 describes a hybrid see-through augmented reality (AR) device that comprises a head-mounted frame configured to be worn by a user, over their eyes, in the manner of a pair of glasses or goggles. The device includes a camera disposed on the frame and configured to capture real-time video image data of the user’s environment, and a processor for processing the image data to produce a video stream which is displayed on a screen within the frame, replacing the central portion of the user’s field of view and blending with the peripheral portion of the user’s field of view, thereby enhancing (and expanding) the low vision user’s view of their environment.
However, the visual acuity of individual ‘low vision’ patients varies greatly, depending on many different factors, including age, eye health, the particular condition causing low vision, the degree to which a patient’s visual acuity is impaired, the part of the retina affected, and the degree to which the particular medical condition causing low vision has progressed, to name just some. Furthermore, the visual acuity of a low vision patient can change and degrade over time. Therefore, in relation to a visual aid, a ‘one size fits all’ approach is not really appropriate or effective.
There is, therefore, a desire to provide a visual aid, particularly, but not necessarily exclusively, for low vision users that can be adapted to suit a specific user’s vision loss and the visual acuity provided by their residual sight, and that can also adapt (or be adapted) to account for changes and/or worsening of visual acuity over time.
Once a patient has been diagnosed with ‘low vision’, it is usual for their visual acuity to be monitored by a medical specialist, so as to track the effects of progression of the underlying medical condition causing the vision loss, as well as changes in, and progression of, vision loss over time. Regular checks of this type are essential, to ensure the more general wellbeing of the patient and to optimise their visual potential. This is particularly important, for example, for patients with AMD. There is no cure for the disease itself, but for the wet form (neovascular nAMD), anti-VEGF eye injections are a standard treatment to prevent severe vision loss. These anti-VEGF eye injections require ongoing monitoring of visual acuity, and they are given when a patient’s condition is found to have deteriorated. Clearly, this treatment plan and associated monitoring schedule places a burden on patient time and hospital resources. Dry AMD patients are left to monitor their own vision, and return to the medical specialist only if their vision ‘worsens’ or changes in some way. Wet AMD patients have to return to the clinic at set time intervals, and are left for increasingly lengthy periods of time between treatments, asked to return in the intervening time for a follow-up vision test and eye injection. In those cases where the patient is left to monitor their own vision, this is in itself a very inexact science, because small, progressive changes may go virtually unnoticed and/or a patient may choose to wait until a more distinct change is noticed before seeking medical intervention.
However, any vision loss can be indicative of progression of wet AMD which should be treated urgently by the administration of the above-mentioned anti-VEGF injection.
Aspects of the present invention seek to address at least one or more of these issues.
Summary of the Invention
In accordance with an aspect of the present invention, there is provided a vision aid device for a low vision user, the device comprising:
• a light blocking head-mountable base unit having a distal end located in a user’s eyeline when mounted for use;
• at least one screen mounted or located at said distal end;
• an image capture device arranged and configured to capture image data from a user’s environment, in use;
• a processor for receiving said image data and configured to apply image compensation thereto to generate enhanced image data and cause said enhanced image data to be displayed on the or each said screen; and
• a user input means (such as a manual controller, joystick, voice command recognition means, etc.) configured to allow a user to interact with image data displayed on said screen; the processor being further configured to generate, during use of said device as a vision aid, a visual acuity test flow associated with said user, obtain, from said test flow, a visual acuity measurement for said user, and transmit said visual acuity measurement to a remote server.
In one embodiment, the processor may be configured to present a visual acuity test on the at least one screen for completion by a user using user input means (which may be the same as that used for image manipulation or additional user input means provided for the purpose) during use of the device as a vision aid. In an embodiment, the visual acuity test comprises briefly displaying a series of characters or shapes in turn on the screen, each displayed character being smaller and/or of lower contrast to the background than the previous character, until the user indicates that they can no longer see the character sufficiently clearly on the screen, and the processor is configured to calculate a visual acuity measurement from the last clearly seen displayed character. In a preferred embodiment, in a first part of the test, each displayed character is smaller than the previous character and a user indicates the smallest character they can see. Then, in a second part of the test, the character is displayed at an increasingly lower contrast and a user indicates the lowest contrast at which they can see the character; this second part providing increased granularity and accuracy in respect of the visual acuity measurement. Other validated tests, such as contrast sensitivity, Amsler grid and/or the paracentral acuity test, may also be presented.
Beneficially, the processor is configured to display a screen calibration test flow for completion by the user prior to displaying the visual acuity test. In an embodiment, the processor is configured to display a test calibration flow for completion by the user prior to displaying the visual acuity test.
The controller is beneficially configured to generate image manipulation data in response to user control actions, and the processor is configured to utilise the image manipulation data to manipulate the displayed image data in substantially real time. In an embodiment, the processor is configured to monitor image data displayed on the or each said screen to detect changes or anomalies therein, and use a said detected change or anomaly to identify a change in visual acuity of the user. The processor may be configured to transmit an alert to said remote server indicating that the visual acuity of the user may have changed. Alternatively, or in addition, the processor may be configured to generate a visual acuity measurement using a significant change in image manipulation data. The processor may be configured to monitor image manipulation data over a period of time to determine a standard pattern for said user, and to identify a significant change therein if the image manipulation data deviates by more than a predetermined threshold from said standard pattern. In an exemplary embodiment, the processor may be configured to use computer vision techniques to identify activities being performed by a user during use of the vision aid, and record activity data representative thereof as a real world evidence log for said user.
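One way such a 'standard pattern' and deviation threshold might be realised is sketched below; the choice of mean/standard-deviation baseline and the sigma threshold are assumptions for illustration, not the claimed method.

```python
import statistics

def deviates_from_baseline(history, recent, n_sigma: float = 2.0) -> bool:
    """Flag a significant change if the mean of `recent` image-manipulation
    values (e.g. zoom levels) lies more than `n_sigma` standard deviations
    from the long-term baseline established by `history`."""
    mu = statistics.fmean(history)
    sigma = statistics.pstdev(history)
    if sigma == 0:
        return statistics.fmean(recent) != mu  # degenerate flat baseline
    return abs(statistics.fmean(recent) - mu) > n_sigma * sigma
```

A positive result is the kind of event that could prompt the alert to the remote server described above.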
In a preferred embodiment, the vision aid may comprise a pair of screens located side by side at said distal end of said base unit, wherein enhanced image data is displayed on each of the screens simultaneously during use of the device as a vision aid. In a preferred embodiment, the processor is configured to generate and present a visual acuity test flow for each of a user’s eyes separately. In order to facilitate this, in an embodiment, the processor is configured to switch one of the screens to black (or switch it off) whilst a visual acuity test flow is being presented on the other screen. In other embodiments, there may be a light blocking wall within the base unit, located between the two screens, to prevent optical interference therebetween.
In accordance with another aspect of the invention, there is provided a method performed in a communications system comprising at least one screen and a light blocking head-mountable base unit configured, in use, to be supported on a user’s head such that the or each screen is in their eyeline, the method comprising, under control of a processor of the communications system:
• capturing image data from the user’s environment;
• applying image compensation to said captured image data using image enhancement data stored in a memory of said communications system and associated with the user;
• causing said compensated image data to be displayed on the or each screen in substantially real time;
• receiving image manipulation control data from the user and manipulating the displayed image data in accordance with the control data; generating a visual acuity test flow associated with the user, receiving, from the user, data representative of visual acuity, using said test flow to obtain a visual acuity measurement for said user, and transmitting said visual acuity measurement to a communications server apparatus of the communications system.
In accordance with yet another aspect of the present invention, there is provided a computer program or computer program product comprising instructions for implementing the method substantially as described above.
Furthermore, early detection of macular degeneration and timely treatment decisions can be accomplished using a suitable retinal imaging system.
For example, Optical Coherence Topology (OCT) is a non-invasive diagnostic technique using an instrument designed to image a patient’s retina. OCT relies on low coherence interferometry that can be used to generate a cross-sectional image of the patient’s macula. This cross-sectional view of the macula shows if its layers are distorted and can be used to monitor whether distortion of the layers of the macula has increased or decreased relative to an earlier cross-sectional image to assess the impact of treatment of the macular degeneration.
Retinal imaging systems are typically large and expensive devices that often require a trained technician and/or a motorized control loop to properly align the optical axis of the OCT imaging system with the optical axis of the eye under examination. As a result, many such OCT systems are restricted to specialised eye clinics.
WO201 9/092697 describes a home OCT imaging system which includes a viewer assembly on which a user can rest their head, with one eye directed into a specified region, the viewing assembly being configured such that when the user’s head is correctly engaged on its interface surface, the eye directed into the imaging system is correctly aligned with the optical axis. Image data of the eye, thus captured, is transmitted to a remote server for processing, and the respective images are then transmitted to the clinician under whose care the patient is, who can decide, based on the image data, what, if any, action is required.
However, even with such a home OCT device, the equipment is large and bulky, and it requires the user to remember to test their eyes regularly. It can take several minutes for the required image data to be collected and there is, therefore, a high probability that the patient may delay or miss scheduled testing events, thus risking further macular degeneration going undetected for a long period of time. Also, the patient needs to have the mental capacity to, not only remember to perform a scheduled test, but also position their head correctly so as to align their eyes with the optical axis of the imaging system.
Another example of retinal imaging is performed using a fundus camera, which also requires a trained technician and/or a motorized control loop to align the optical axis with the patient’s pupil to be able to capture an image of the retina.
More generally, physical testing strategies are dependent on laser technologies and, as they are expensive, they are not considered suitable for widespread use.
Furthermore, they do not typically incorporate vision testing as such.
As explained, the visual acuity testing of the type described above is useful and, by incorporating it into a vision aid, there is a reduced likelihood that testing will be missed. However, there is a desire to provide a means for enabling more accurate, less subjective and unobtrusive monitoring of a patient’s eyes without the need for the patient to remember to perform a test, or even interrupt their daily lives, thereby ensuring that any changes will be quickly identified and can be treated without delay. It would be desirable to achieve such monitoring using an imaging system, but known imaging systems for ophthalmic monitoring of the type required are typically bulky, as described above, and eminently unsuitable for use in a head-mounted system.
Thus, in accordance with an aspect of the present invention, there is provided a vision aid device, comprising a head-mountable base unit having a distal end located in a user’s eyeline when mounted for use, the base unit housing a retinal imaging system comprising: an imaging system comprising a light source and an image capture device located outside of the user’s eyeline, and an optical element arrangement configured, in use, to direct light from the light source into the user’s eyeline and to direct light from the user’s eyeline to the image capture device, the retinal imaging system further comprising a processor for receiving image data captured by said image capture device and transmitting said image data, or data representative thereof, to a remote server.
The retinal imaging system can, for example, be used to recreate key data available from OCT scans and a computer vision image analysis system, either in the processor or at the remote server, can be used to acquire objective information on retinal structure without the physical or financial expense of OCT equipment.
In an exemplary embodiment, the processor may be configured to analyse said image data to identify one or more captured images in which the user’s pupil is aligned with the optical axis of the imaging system as defined by the optical element arrangement and transmit image data representative of the one or more identified images to said remote server.
In another embodiment, the processor may be configured to identify when the user’s pupil is aligned with the optical axis of the imaging system as defined by the optical element arrangement and cause said image capture device to capture an image when such alignment is identified. In an exemplary embodiment, the retinal imaging system may include means for generating a fixation target which, when a user fixes their gaze thereon, results in the user’s pupil being aligned with the optical axis of the imaging system as defined by the optical element arrangement.
In an embodiment, the optical element arrangement may comprise at least two reflective elements for directing light from said light source into the user’s eyeline, in use, and directing light reflected from the user’s retina to the image capture device. In an exemplary embodiment, the optical element arrangement may comprise a pair of reflective elements or mirrors, a first mirror being configured to reflect light from the light source through substantially 90° to a second mirror, the second mirror being configured to reflect said light back through substantially 90° into the user’s eyeline, wherein the optical axis of the imaging system and the user’s eyeline are substantially parallel to, and laterally spaced apart from, each other.
In an exemplary embodiment, the light source may be configured to illuminate the user’s retina with light of a selected one or more wavelengths. For example, a filter may be provided for illuminating the user’s retina with light of a selected one or more wavelengths. The imaging system may include a focusing lens configured, in use, to focus light received from the user’s retina, via said optical element arrangement, onto the image capture device. The retinal imaging system may beneficially also include a condensing lens located in the user’s eyeline when the vision aid is in use.
The image capture device and light source are located outside of the user’s eyeline, to minimise disruption, and the optical axes of both the light source and the camera are accurately directed into the user’s eyeline by the optical element arrangement, which is such that the entire imaging system can be placed within a relatively small housing (perhaps ~1 cm³). The optical element arrangement may beneficially comprise a first input/output axis and a second input/output axis, the first and second input/output axes being substantially parallel to, and spaced apart from, each other. The first input/output axis is beneficially aligned with the optical axis of the image capture device and the second input/output axis is beneficially aligned with a condensing lens within the user’s eyeline, such that light from the light source is directed from the first input/output axis to the second input/output axis and onto the condensing lens, and reflected light from the user’s retina is directed through the condensing lens to the second input/output axis and then to the first input/output axis and back to the image capture device. A focusing lens may be located within the first input/output axis, between the image capture device and the optical element arrangement.
Whilst the present invention is predominantly described below in relation to a vision aid device for a user having ‘low vision’ of a particular type, it is to be understood that a vision aid for users having low vision more generally, e.g. a pair of spectacles for a user that is myopic, could be provided with a retinal imaging system of the type described above, to provide continuous remote sight monitoring to detect changes in sight and enable personalised eye healthcare.
Aspects of the invention can thus provide rapid, reliable and standardised objective imaging in a wide variety of patients with minimal interference in their daily lives, thereby ensuring that any changes in their eyesight requiring medical attention can be quickly and efficiently identified, and the invention is not necessarily intended to be limited to vision aids specifically with the type of ‘low vision’ defined above.
Brief Description of the Drawings
These and other aspects of the present invention will be apparent from the following detailed description, in which embodiments of the invention are described, by way of examples only, and with reference to the accompanying drawings, in which:
Figure 1 is a schematic perspective view of a vision aid device according to an exemplary embodiment of the invention; Figure 2 is a schematic cross-sectional side view of a smartphone for use in a vision aid device according to an exemplary embodiment of the invention;
Figure 3 is a schematic block diagram illustrating an exemplary vision aid system including a communications server apparatus for receiving and processing patient data from an exemplary vision aid device;
Figure 3a is a schematic diagram illustrating the configuration of an exemplary vision aid system configured within, for example, a patient’s home, and including an access hub for connection to the patient’s home communications network hub;
Figure 4 illustrates the concept of focal points to aid understanding of where images are formed in the human eye;
Figure 5a illustrates a Landolt ring of the type used in the Visual Acuity Measurement Standard, first published in the Italian Journal of Ophthalmology in 1988;
Figure 5b illustrates the Landolt ring including key parameters that need to be taken into account in setting testing conditions;
Figure 6 is a flow chart illustrating schematically a visual acuity test flow triggered by an exemplary vision aid device;
Figure 7 is a flow chart illustrating schematically a process flow for calibrating the screen of an exemplary vision aid device prior to presenting a visual acuity test;
Figure 8 is a flow chart illustrating schematically a test calibration process flow for calibrating a visual acuity test of an exemplary vision aid device prior to presenting a visual acuity test;
Figure 9 is a flow chart illustrating schematically a visual acuity test process flow for presenting a visual acuity test on a screen of an exemplary vision aid device, receiving patient response data associated with the visual acuity test elements, and transmitting data thereof to a communications server apparatus for processing;
Figure 10 is a schematic block diagram illustrating a retinal imaging system for use in a vision aid according to an exemplary embodiment of the present invention; and Figure 11 is a schematic perspective view of a vision aid device according to an exemplary embodiment of the present invention incorporating a retinal imaging system such as that illustrated schematically and described in relation to Figure 10.
Detailed Description
Referring to Figure 1, a vision aid device according to an embodiment of the invention comprises a virtual reality (VR) headset 1, an audio-visual device in the form of, for example, a smartphone 2, and a Bluetooth® (or other wireless) remote control 9. The headset 1 comprises a base unit 3 comprising an elliptical cylinder having an elasticated strap 4 at one end 3a to enable the headset to be held tightly in place over a user’s eyes, in use, such that the longitudinal axis of the elliptical cylinder extends generally horizontally relative to the user’s eyes and the edges of the cylinder form a seal to prevent light from entering the cylinder between the edges and the user’s face. To this end (and that of the comfort of the user), a rubber (or similarly resiliently flexible) sealing layer 8 is provided substantially all the way around the end 3a of the base unit, which, in use, abuts the user’s skin on their forehead, the outer edges of their eye sockets, the tops of their cheekbones and the bridge of their nose, such that the device can be worn on the face, in front of the eyes, with the weight being supported through the headset by a nose bridge and the strap 4, in a similar manner to a conventional virtual reality (VR) headset. The longitudinal walls of the elliptical cylinder are substantially planar and parallel to each other, with the end walls being rounded. The outer surface of the elliptical cylinder may be of any desired colour, but the inner walls thereof are beneficially matt black or another very dark colour.
The other end 3b of the base unit 3 is either substantially open, or has a transparent wall and a support tray 5 extends outwardly from a ‘lower’ edge of this end 3b of the base unit 3. The support tray 5 is configured to receive and support the audio-visual device 2 in a landscape (or ‘horizontal’) orientation, such that its display screen abuts the open or transparent end of the base unit 3. A support strap 6 is attached, at one end, to or near the centre of the support tray 5, with the other end having a connecting portion 6a configured to removably secure that end of the support strap 6 to the ‘top’ planar wall (when oriented for use) of the base unit 3, thereby holding the audio visual device 2 in place. The audio visual device can thus be easily removed (e.g. for cleaning) by simply releasing the support strap 6 at the connecting portion end. In an alternative embodiment, the audio visual device 2 could be mounted inside the headset 1.
The cavity inside the base unit 3 may include an imaging wall (not shown) that spans the cavity and incorporates openings and/or one or more focusing devices, such as lenses or the like, corresponding to the position of the user’s eyes, when in use. If present, the imaging wall may be adjustable to enable the device to be adapted specifically to the user’s needs. Means may also be provided for selectively blocking or closing one or other of the openings, so that the user views the screen with one eye or the other, for calibration and testing purposes, as will be described in more detail below. However, this can alternatively be achieved by the user closing one or other of their eyes when instructed, and the present invention is not necessarily intended to be limited in this regard. There are numerous virtual reality headsets of this type available, each with different features and imaging options, and any of which could be (more or less) applicable to the present invention, which is not necessarily intended to be limited in this regard.
Referring to Figure 2 of the drawings, the audio visual device 2, hereinafter referred to as a smartphone, may be of any known type comprising a housing 10 having a display screen 12, and within which is housed electronics modules 14 including a processor 16 and a wireless communications module 18 for enabling the device to be connected to a wireless communications network for communication with a communications server apparatus. The smartphone (or tablet) 2 comprises a camera including a camera lens 20 mounted in the rear panel of the housing 10 (i.e. opposite the screen 12). An embodiment of the present invention comprises a communications system comprising a communications server, a user communications device (i.e. the smartphone 2), and a service provider communications device (which may be used by a medical practitioner treating a low vision user).
Referring to Figure 3 of the drawings, such a communications system 100 is illustrated. Communications system 100 comprises communications server apparatus 102, smartphone (or tablet) 2 and practitioner communications device 106. These devices are connected to the communications network 108 (for example, the Internet) through respective communications links 110, 112, 114 implementing, for example, internet communications protocols. Communications devices 2, 106 may be able to communicate through other communications networks, such as public switched telephone networks (PSTN networks), including mobile cellular communications networks, but these are omitted from Figure 3 for the sake of clarity.
Communications server apparatus 102 may be a single server as illustrated schematically in Figure 3, or have the functionality performed by the server apparatus 102 distributed across multiple server components. In the example of Figure 3, communications server apparatus 102 may comprise a number of individual components including, but not limited to, one or more microprocessors 116, a memory 118 (e.g. a volatile memory such as a RAM) for the loading of executable instructions 120, the executable instructions defining the functionality the server apparatus 102 carries out under control of the processor 116.
Communications server apparatus 102 also comprises an input/output module 122 allowing the server to communicate over the communications network 108. User interface 124 is provided for user control and may comprise, for example, computing peripheral devices such as display monitors, computer keyboards and the like. Communications server apparatus 102 also comprises a database 126, the purpose of which will become readily apparent from the following discussion. In this embodiment, database 126 is part of the communications server apparatus 102, however, it should be appreciated that database 126 can be separated from communications server apparatus 102 and database 126 may be connected to the communications server apparatus 102 via communications network 108 or via another communications link (not shown).
The smartphone (or tablet) 2 may comprise a number of individual components including, but not limited to, one or more microprocessors 128, a memory 130 (e.g. a volatile memory such as a RAM) for the loading of executable instructions 132, the executable instructions defining the functionality the smartphone 2 carries out under control of the processor 128. The smartphone or tablet 2 also comprises an input/output module 134 allowing the smartphone 2 to communicate over the communications network 108. User interface 136 is provided for user control. The user interface 136 may have a touch panel display as is prevalent in many smart phone and other handheld devices. The smartphone or tablet 2 further includes a camera (not shown) configured, in use, to capture a real time video stream of the user’s environment (via the camera lens 20) and display the resultant moving images on the screen 12.
Practitioner communications device 106 may be, for example, a smartphone or tablet device with the same or a similar hardware architecture to that of the smartphone or tablet 2 described above. Practitioner communications device 106 may comprise a number of individual components including, but not limited to, one or more microprocessors 138, a memory 140 (e.g. a volatile memory such as a RAM) for the loading of executable instructions 142, the executable instructions defining the functionality the practitioner communications device 106 carries out under control of the processor 138. Practitioner communications device 106 also comprises an input/output module (which may be or include a transmitter module/receiver module) 144 allowing the practitioner communications device 106 to communicate over the communications network 108. User interface 146 is provided for user control. If the practitioner communications device 106 is, say, a smart phone or tablet device, the user interface 146 will have a touch panel display as is prevalent in many smart phone and other handheld devices. Alternatively, if the practitioner communications device is, say, a desktop or laptop computer, the user interface may have, for example, computing peripheral devices such as display monitors, computer keyboards and the like.
In one embodiment, the smartphone or tablet 2 is configured to push data representative of the user (e.g. identity, current visual acuity, current image enhancement settings, activity patterns, etc) regularly to the communications server apparatus 102 over communications network 108. In another, the communications server apparatus 102 polls the smartphone or tablet 2 for information. In either case, the data from the smartphone or tablet 2 (also referred to herein as ‘user data’) are communicated to the communications server apparatus 102 and at least some parameters or characteristics thereof stored in relevant locations in the database 126 as historical data, such that diagnostic data can be generated in real time for use by the medical practitioner in monitoring the user’s visual acuity over time and to enable them to quickly determine if and/or when further treatment may become necessary. In use, when the headset 1 is mounted over a user’s eyes, and the smartphone or tablet 2 is mounted over the distal end of the base unit, the user can see whatever is displayed on the screen 12 of the smartphone 2, whether that be recorded or live stream video images, or image data of the user’s environment, captured by the camera. Whilst there may be additional screens and/or imaging components within the base unit of the headset, depending on the particular form of sight loss with which the user has been diagnosed, the principal function of the base unit is to block out ambient light from the user’s field of view and to mount the smartphone 2 where the user can see its screen through the cavity defined within the base unit 3.
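By way of illustration only, one periodic ‘user data’ record of the kind pushed to (or polled by) the communications server apparatus 102 might be assembled as below. The patent does not specify a wire format, so every field name and value here is an assumption:

```python
import json
import time

def build_user_data_payload(user_id, visual_acuity, enhancement_settings, activity_log):
    """Assemble one periodic 'user data' record. All field names are
    illustrative assumptions; no wire format is specified in the text."""
    return {
        "user_id": user_id,                        # identity data, used by the server to map to a patient record
        "timestamp": int(time.time()),             # when the sample was taken
        "visual_acuity": visual_acuity,            # current visual acuity, if known
        "image_enhancement": enhancement_settings, # current pre-set enhancement options
        "activity_patterns": activity_log,         # e.g. engagement with image manipulation controls
    }

record = build_user_data_payload(
    user_id="patient-0001",
    visual_acuity=0.3,
    enhancement_settings={"zoom": 2.0, "contrast": "edge"},
    activity_log=[{"control": "zoom", "value": 2.5}],
)
encoded = json.dumps(record)  # ready to transmit over communications network 108
```

At the server end, the identity data in such a record would allow the visual acuity data to be mapped to the protected patient record in database 126, as described above.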
Referring to Figure 3a of the drawings, a system including a vision aid device according to an exemplary embodiment of the present invention is illustrated schematically as comprising a vision aid device 300 and an access hub 302 to which the vision aid device is configured to be connected wirelessly (e.g. by Bluetooth® connectivity or similar short-range wireless communications technology). The access hub 302 is configured to connect (by a hard wire 303 or by short-range wireless communications technology) to the user’s WiFi router 304.
The vision aid device (and, more particularly, the smartphone or tablet 2) can be programmed and provided with a range of image enhancement options to accommodate users with a range of sight loss conditions. These image enhancements can be applied in real time to the images being presented to the user. Although these images could be any moving or still images obtained from a variety of different sources, for the purposes of the remainder of this description, the images will be referred to as a live feed (of the user’s environment) captured by the camera in the smartphone or tablet 2 via the camera lens 20 located at the ‘rear’ of the smartphone or tablet 2 (when it is oriented and mounted in the headset for use).
The image enhancement options include, but are not necessarily limited to:
• Digital magnification
• Contrast enhancement (edge enhancement)
• Monochrome luminance inversion
• Binary image conversion for reading, with, for example, a choice of yellow on blue or yellow on black
• Image capture and storage and/or further manipulation for later review and/or posting on social media, for example
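As a minimal sketch of two of the enhancement modes listed above, assuming 8-bit greyscale input frames represented as nested lists (the device’s actual implementation is not specified in the text):

```python
# Minimal sketch of two enhancement modes, assuming 8-bit greyscale frames
# represented as nested lists of pixel values (0-255). The device's actual
# implementation is not specified in the text.

def invert_luminance(frame):
    """Monochrome luminance inversion: dark pixels become light, and vice versa."""
    return [[255 - px for px in row] for row in frame]

def binarise_for_reading(frame, threshold=128, fg=(255, 255, 0), bg=(0, 0, 255)):
    """Binary image conversion for reading: dark (text) pixels are mapped to a
    foreground colour and light pixels to a background colour, e.g. yellow on blue."""
    return [[fg if px < threshold else bg for px in row] for row in frame]

frame = [[0, 200], [255, 30]]
print(invert_luminance(frame))  # [[255, 55], [0, 225]]
```

On the device, transforms of this kind would be applied frame by frame to the live camera feed before it is displayed on the screen 12.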
The device may further include a range of image manipulation options, such as (but not necessarily limited to):
• Exposure control (auto and fixed exposure including manual exposure compensation)
• Freeze frame
• Zoom (or additional digital magnification)
• Brightness and contrast
• Memory function to enable preferred image settings to be stored
The device can be customised with respect to overall brightness, intensity of blue light filter, view adjustment (alignment of image location with a user’s eyes to correct for double vision if present), and audio volume. An audio function may be provided to indicate, for example, battery status and may even be configured to provide an auditory tutorial of basic functions of the device and how they can be used.
Accordingly, a vision aid device according to an aspect of the invention can be configured to improve the visual acuity of patients diagnosed with low vision to a level that enables engagement in activities associated with day-to-day life. The device offers a range of image enhancement options (such as zoom and contrast enhancement) that can enable even registered blind users to utilise their residual sight by making image content accessible through (partially) functioning parts of the retina. The device can be used to improve both functional near- and far-distance acuity and facilitates (primarily) stationary activities such as reading, watching television, recognising people’s faces, watching presentations or whiteboards, and engaging in a large variety of hobbies (such as painting, playing music, handicrafts, museum and art gallery visits, attendance at sporting events, theatre or cinema).
The image enhancement options, including those listed above, can be selected and pre-set to suit the patient’s diagnosis, whereas the image manipulation options can be adjusted at will by the user (using the remote control 9). The user’s engagement with these image manipulation options can be activity dependent, but changes in the degree and extent of their engagement with these options over time can be indicative of changes in the user’s visual acuity and, in many cases, be representative of a need for the medical practitioner to review the patient’s diagnosis and change the image enhancement options and/or offer treatment such as the anti-VEGF injection described above. This is discussed in more detail below.
Thus, once the low vision patient has been diagnosed, and the appropriate image enhancement options have been programmed and set, the user can use the vision aid to enhance their vision during normal day to day life, adjusting the images as required using the image manipulation options, via the remote control. The wireless connectivity of the vision aid device described above, enables user data to be periodically pushed or uploaded to the communications server apparatus 102 for storage in the database 126 and access by the practitioner communications device 106.
The user data may take a number of different forms, depending on the test(s) used for monitoring a user’s visual acuity. Irrespective of the test(s) so used, a significant technical advantage of aspects of the invention is that the test data is obtained whilst the user is wearing the vision aid device and performing day to day tasks. There is no requirement for them to remember to perform any tests or tasks on a regular basis, nor is there any need for regular (routine) visits to the medical practitioner to undergo testing. The vision aid device can be adapted to perform continuous monitoring of the user’s visual acuity (using, for example, changes in their engagement, over time, with the image manipulation options) and/or periodically (using, for example, the regular presentation of digital adaptations of specially designed visual acuity tests which require the user to provide feedback based on their perception of images displayed on the smartphone or tablet 2 during the course of their day to day lives). The user data, thus collected, can be processed by the communications server apparatus 102 so as to generate visual acuity data, and the communications server apparatus uses identity data received with the user data to map the visual acuity data to a specific patient and store it in an appropriately protected patient record location in the database 126. The protected patient record includes data representative of the medical practitioner treating a particular patient and security data and protocols that enable only that medical practitioner to access or receive the visual acuity data collected or captured for a patient. In its simplest form, the visual acuity test may comprise a short, dedicated sight test that is periodically presented to a user on the screen of the smartphone or tablet 2 whilst they are using the vision aid device. 
Thus, the test would effectively “interrupt” their current activity and request completion before the user can continue to use the vision aid. A ‘snooze’ function may be provided to allow the user to temporarily delay performing the test for a short period of time, for example, one hour, after which the test would ‘pop up’ again and request completion.
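The snooze behaviour described above can be sketched as a simple scheduler. The one-hour snooze delay comes from the text; the routine test interval and all names below are illustrative assumptions:

```python
from datetime import datetime, timedelta

class TestScheduler:
    """Sketch of the periodic-test-with-snooze behaviour. The one-hour snooze
    interval is taken from the text; everything else is an illustrative assumption."""

    def __init__(self, interval=timedelta(days=7)):
        self.interval = interval
        self.next_due = datetime.now()  # a test is due immediately on first use

    def test_due(self, now=None):
        now = now or datetime.now()
        return now >= self.next_due

    def snooze(self, delay=timedelta(hours=1)):
        # Temporarily delay the test; it will 'pop up' again after the delay.
        self.next_due = datetime.now() + delay

    def completed(self):
        # Test finished: schedule the next routine test.
        self.next_due = datetime.now() + self.interval
```

While `test_due()` returns true, the device would interrupt the current activity and present the test, resuming normal vision aid use only once the test is completed or snoozed.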
The visual acuity test may be based on a measurement of the patient’s ability to recognise black, high contrast optotypes on a white background. In an embodiment, the optotypes may be Landolt rings, the stroke width of which may be one-fifth of the outer diameter of the ring. The borders of the Landolt ring are parallel and there are no serifs. In the embodiment described herein, there are four separate optotype orientations: gap up, gap down, gap left, and gap right. However, and as will be apparent to a person skilled in the art, the present invention is not intended to be limited in this regard. The following technical adaptations are advantageous in order to accurately present and perform this test on the screen of the smartphone or tablet 2 of the vision aid device.
The screen resolution of the smartphone 2 may be 1440 x 2960 pixels, which provides a ratio of 18.5:9 (~551 ppi density), equating to 15 pixels per mm (x, y). Referring to Figure 4 of the drawings, the Gaussian form of the lens equation is graphically illustrated to aid understanding.
Figure 5a of the drawings illustrates a Landolt ring of the type used in the Visual Acuity Measurement Standard, first published in the Italian Journal of Ophthalmology in 1988, and Figure 5b illustrates the Landolt ring including key parameters that need to be taken into account in setting testing conditions.
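The ring geometry described above (stroke width equal to one-fifth of the outer diameter, gap width equal to the stroke width, four gap orientations) can be rasterised in a few lines; this is purely an illustrative sketch, not the device’s rendering code, and the raster size is an arbitrary choice:

```python
def landolt_ring(size=25, gap="right"):
    """Render a Landolt ring as a 0/1 grid (1 = black stroke).

    Geometry from the text: stroke width = one-fifth of the outer diameter,
    so the inner radius is three-fifths of the outer radius, and the gap
    width equals the stroke width. Four gap orientations are supported.
    """
    c = (size - 1) / 2.0                 # centre of the raster
    r_out = size / 2.0                   # outer radius
    r_in = r_out * 3.0 / 5.0             # outer radius minus one stroke width
    half_gap = size / 10.0               # gap width = D/5, so half-width = D/10
    grid = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            dx, dy = x - c, y - c
            if r_in**2 <= dx * dx + dy * dy <= r_out**2:
                in_gap = (
                    (gap == "right" and dx > 0 and abs(dy) < half_gap) or
                    (gap == "left"  and dx < 0 and abs(dy) < half_gap) or
                    (gap == "up"    and dy < 0 and abs(dx) < half_gap) or
                    (gap == "down"  and dy > 0 and abs(dx) < half_gap)
                )
                grid[y][x] = 0 if in_gap else 1
    return grid
```

In a real test the four orientations would be presented in random order and scaled according to the pixel calculation discussed below.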
The tables below set out the testing standards and values that are used to determine a visual acuity score in relation to a complete visual acuity measurement using this test. However, and as will be appreciated, other visual acuity tests are known and the manner in which such tests can be presented and interpreted will be understood by a person skilled in the art.

Table of Dimensions

[Tables reproduced as images imgf000021_0001 and imgf000022_0001 in the original publication.]
f = 1/P (focal length f from the power P of the lens)

ho = hi × (f − o)/f, with f = 40 mm, o = 35 mm and hi = the outer diameter

coefficient c = (f − o)/f = (40 − 35)/40 = 0.125, so ho = c × hi = 0.125 × hi, i.e. ho = hi × 1/8

pixels = 15 × ho = 15 × 0.125 × hi = 1.875 × hi
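These relationships can be transcribed directly into code, using the values given in the text (f = 40 mm, o = 35 mm, 15 pixels per mm of screen):

```python
# Direct transcription of the screen-size calculation above.
F_MM = 40.0       # focal length of the headset lens
O_MM = 35.0       # distance from the lens to the screen
PX_PER_MM = 15.0  # screen pixel density in x and y

def optotype_screen_pixels(hi_mm):
    """Pixels needed on screen for an optotype of outer diameter hi_mm."""
    c = (F_MM - O_MM) / F_MM  # = 0.125, i.e. 1/8
    ho = c * hi_mm            # physical size on the screen, in mm
    return PX_PER_MM * ho     # = 1.875 * hi_mm
```

For example, an optotype with hi = 8 mm would be drawn 15 pixels across, consistent with the 1.875 × hi relationship above.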
As described above, the processor of the smartphone or tablet 2 is configured, in use, to display live images captured by the camera and enhanced by one or more of the image enhancement options described above (which are pre-set according to the specific type and degree of sight loss the user suffers), allowing the user to manipulate the images in real time (using the remote control) according to their needs and the specific activity they are partaking in. The view presented to the user is essentially two screens, one for each eye (and each at a predetermined distance from the respective eye), each screen displaying an image, as will be understood by a person skilled in the art. This may be two separate screens, or a single screen split into two. In one embodiment, a dedicated visual acuity test event is periodically presented to the user on the screen of the smartphone or tablet 2, i.e. in their field of view, essentially ‘forcing’ them to complete the test in order to resume normal use of the vision aid. A ‘snooze’ function may be provided, thereby allowing them a predetermined period of time before the visual acuity event is once again presented. In one example, the dedicated visual acuity test is based on a measurement of the patient’s ability to recognise black, high contrast optotypes on a white background, as described above.
The test starts by determining which eye to test first. The smartphone or tablet 2 is configured to generate verbal instructions to guide the user through the test, although in alternative embodiments, this may be done by way of instructions displayed on the screen, and the present invention is not necessarily intended to be limited in this regard. The user is prompted (by verbal audio instruction and/or on the screen) to indicate (by voice command or via the remote control) which of their eyes they see best with. Once the user has provided the required indication, the test will start with the other eye, i.e. that in which their sight is worst. Whilst one eye is being tested, the screen for the other eye is switched to black or switched off.
Referring to Figure 6, a flow diagram illustrating the testing process is shown. Before the visual acuity test starts properly, and because the visual acuity test is being performed on a screen of the smartphone or tablet 2 through the headset 1, a calibration process is necessary. First, all previous calibration is reset to zero [Reset Results]. Then, it is necessary to calibrate (step 601) for the goggles, to ensure that the test images appear substantially centrally on the screen. Next, the test itself is calibrated according to the patient’s current visual acuity (step 602), and only then is the test itself performed for each eye separately (step 603).
Referring to Figure 7 of the drawings, in a process for calibrating for the goggles, the screen of the smartphone or tablet 2 is initially all black (step 701). At step 702, two circles, one green and one blue, are displayed on the screen. The patient can use the remote control (or a joystick, for example) to move the circles. In the example illustrated, moving the joystick left causes the circles to move closer together (steps 703a and 703b), whereas moving the joystick right causes the circles to move further apart (steps 704a and 704b). The aim is to move the circles until they are perceived as aligned by the patient. When that has been achieved, the patient confirms the relative positions, e.g. by pressing the trigger of the joystick (step 705), and the position is stored in a local memory of the smartphone/tablet 2 and/or data representative of the circle positions is transmitted to the communications server apparatus 102 and stored in the database 126. In an alternative embodiment, this screen/goggle calibration is performed using a grey circle displayed on the screen, first for one eye and then the other, with the patient being asked to manipulate the circle to the centre of the screen, and the settings for each eye are stored as before.
Next, and referring to Figure 8 of the drawings, in a process for calibrating the test itself to the patient's current visual acuity, the screen of the smartphone or tablet first starts black, as before (step 801). In this calibration process, the patient is shown sets of Landolt rings with gaps at the top, bottom, left and right, and the intention is to determine the smallest completely visible chart (i.e. the smallest set of Landolt circles on which the patient can see the gaps). Thus, at step 802, a first set of circles is displayed on the screen and the patient is asked (on the screen or by audio prompt), at step 803, whether or not they can see all of the gaps in the displayed circles. If yes, the size is stored (e.g. in local memory) at step 804 and the process moves to step 805, where the size of the circles is reduced and a smaller set of circles is displayed (step 802). This loop is repeated until the patient enters a negative response, indicating that they cannot see all of the gaps in the circles currently being displayed. At that point, the process exits the above-described loop and the last visible circle size is memorised and/or data representative of the circle size is transmitted to the communications server apparatus 102 and stored in the database 126. In an alternative embodiment, this process may be performed for each eye separately (see steps 806, 807) to determine and store the smallest respective circle set for each eye.
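The calibration loop of steps 802 to 805 can be sketched as below. The starting size, shrink factor and minimum size are assumptions introduced purely for illustration; the specification does not prescribe them.

```python
# Illustrative sketch of the test-calibration loop (steps 802-805):
# progressively smaller sets of Landolt rings are shown until the
# patient reports that not all gaps are visible, and the last fully
# visible size is stored. Defaults are assumptions, not spec values.

def calibrate_acuity(can_see_all_gaps, start_size=64, shrink=0.8, min_size=1):
    """Return the smallest ring size at which every gap was visible.

    `can_see_all_gaps(size)` models the patient's yes/no response at
    step 803. Returns None if even the largest set is not visible.
    """
    last_visible = None
    size = start_size
    while size >= min_size:
        if not can_see_all_gaps(size):   # negative response: exit loop
            break
        last_visible = size              # step 804: store this size
        size = int(size * shrink)        # step 805: show a smaller set
    return last_visible


# A patient who can resolve the gaps down to size 20:
print(calibrate_acuity(lambda size: size >= 20))  # 20
```

The returned size would then be memorised locally and/or transmitted to the communications server apparatus 102 for storage in the database 126, per the description.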
Once the test has been calibrated, the visual acuity test process can start. Referring to Figure 9 of the drawings, and as previously stated, the test starts (at step 901) with the patient's 'worst' eye, by asking the patient to close the other eye such that they are viewing the screen on the smartphone or tablet 2 using only the eye under test. For the visual acuity test, and as previously described, the Landolt circles can be used, whereby differently sized sets of such circles (i.e. each set including Landolt circles of the same size but with the gap in a different place) are displayed, one circle at a time for a short period of time, and the patient is asked to indicate where the gap is in each case. The circles are shown randomly and get progressively smaller until the proportion of the patient's correct responses falls below some predetermined threshold. However, in an alternative embodiment, a letter 'E' is instead used. In a first part of the test, a series of high contrast E's is presented, each one in the series being slightly smaller than the one before, until the patient indicates that they can no longer see the last-presented E. From this, the last size of E the patient could see is used for the second part of the test, wherein the contrast of the E against the background gets progressively lower until the patient can no longer see the E. It will be appreciated that the smallest size of E that can be presented on the screen is limited by pixel size (and, therefore, screen resolution), and the degree to which each E can be reduced in size is similarly limited. Thus, when the patient states that they can no longer see the E, there may be quite a 'jump' in terms of visual acuity between the last E and the immediately preceding one that they could see. The second part of the test is designed to provide further granularity and accuracy in relation to the visual acuity measurement, as follows.
The second part of the test is illustrated schematically in Figure 9 of the drawings. At step 902, the first E (greatest contrast) is displayed on the screen of the smartphone/tablet 2. The E is displayed at one of four positions relative to the centre of the screen, namely 'top', 'bottom', 'left' and 'right', and the patient provides an indication that they can see it (in this case, by pushing the joystick in the direction corresponding to the position at which they see the displayed 'E'). If the patient pushes the joystick 'up' (at step 903), the response is detected and the result (correct/incorrect) is recorded (at step 904), and the process is repeated with the next 'E'. If not, but the patient instead pushes the joystick 'down' (at step 905), the response is detected and the result (correct/incorrect) is recorded (at step 906), and the process is repeated with the next 'E'. If not, but the patient instead pushes the joystick 'left' (at step 907), the response is detected and the result (correct/incorrect) is recorded (at step 908), and the process is repeated with the next 'E'. If not, but the patient instead pushes the joystick 'right' (at step 909), the response is detected and the result (correct/incorrect) is recorded (at step 910). Finally, if no response is received after a predetermined period of time, an 'incorrect' response is recorded and the process moves on to the next 'E'. The process monitors (at step 911) whether or not the joystick has been returned to centre and if, after a predetermined period of time, it still has not been returned to centre, the process moves on anyway to the next 'E'.
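The per-trial response handling of steps 902 to 911 reduces to comparing the joystick direction with the position of the displayed 'E', with a timeout recorded as incorrect. A minimal sketch follows; the function and result names are illustrative, and the timeout mechanism is abstracted as a `None` input.

```python
# Sketch of the per-'E' response handling (steps 902-910, plus the
# timeout branch): the pushed joystick direction is mapped to a
# screen position and compared with where the 'E' was displayed.
# Names are assumptions for illustration only.

def record_response(displayed_position, joystick_direction):
    """Return 'correct', 'incorrect', or 'timeout' for one trial.

    `joystick_direction` is 'up'/'down'/'left'/'right', or None if
    no input arrived within the predetermined period of time.
    """
    direction_to_position = {
        'up': 'top', 'down': 'bottom', 'left': 'left', 'right': 'right',
    }
    if joystick_direction is None:
        return 'timeout'  # recorded as incorrect; move to next 'E'
    pushed = direction_to_position[joystick_direction]
    return 'correct' if pushed == displayed_position else 'incorrect'


print(record_response('top', 'up'))      # correct
print(record_response('left', 'down'))   # incorrect
print(record_response('bottom', None))   # timeout
```

A full implementation would also monitor, as at step 911, whether the joystick has returned to centre before presenting the next 'E'.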
This process is repeated for each of a number of E’s with the same background contrast but in different respective positions on the screen, and then moves on to sets of E’s with progressively lower contrast, until the test is complete. Then, the process returns to step 901 and is repeated for the other eye. All of the results are collated by the communications server apparatus 102, and a visual acuity measurement is calculated therefrom by the microprocessor 116, and the measurement is stored in the database 126 against patient identity data and optionally with data representative of the date/time at which the visual acuity test was performed. This visual acuity test data is accessible by the patient’s medical practitioner via the practitioner communications device 106, and may also be displayed to the user on the screen of the smartphone 2 after the test has been completed. In a preferred embodiment, the above-described low contrast visual acuity test may be preceded by a high contrast test in which a set of E’s with the same (high) contrast are presented, each E so presented becoming progressively smaller, until the user indicates that they can no longer see an ‘E’. The smallest size of ‘E’ that the user could see is then used for the low contrast acuity test described above to provide further granularity in relation to the final visual acuity measurement.
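The overall staircase described above (random presentation positions at each difficulty level, with the test ending once correct responses fall below a predetermined threshold) can be sketched as below. The threshold, trial count and level values are all assumptions for illustration; the specification leaves them unspecified.

```python
# Hedged sketch of the staircase: each level (a character size or a
# contrast value) is presented several times at random positions, and
# the test stops when the proportion correct drops below a threshold.
# Defaults and names are illustrative assumptions only.
import random


def run_staircase(respond, levels, trials_per_level=5, threshold=0.6, rng=None):
    """Return the last level (size or contrast) the patient passed.

    `levels` is ordered easiest to hardest; `respond(level, gap)`
    models the position the patient reports for one presentation.
    """
    rng = rng or random.Random(0)       # seeded for reproducibility
    positions = ['top', 'bottom', 'left', 'right']
    last_passed = None
    for level in levels:                # progressively harder
        correct = 0
        for _ in range(trials_per_level):
            gap = rng.choice(positions)  # shown randomly, as described
            if respond(level, gap) == gap:
                correct += 1
        if correct / trials_per_level < threshold:
            break                       # below threshold: test ends
        last_passed = level
    return last_passed


# A patient who responds correctly only down to level 16:
patient = lambda level, gap: gap if level >= 16 else ('top' if gap != 'top' else 'bottom')
print(run_staircase(patient, [64, 32, 16, 8, 4]))  # 16
```

In the device, the per-trial results would be collated by the communications server apparatus 102 and converted into a visual acuity measurement by the microprocessor 116, as stated above.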
The conversion chart below represents the manner in which the user’s input in respect of the visual acuity tests described above can be mapped to determine their visual acuity measurement.
[Conversion chart reproduced as images imgf000026_0001 and imgf000027_0001 in the published application.]
As discussed above, any one of a number of different types of visual acuity test could be periodically presented to a patient whilst using the vision aid of the present invention. The periodicity of testing may be predetermined, or testing may be triggered by the medical practitioner treating the patient, by transmitting a trigger signal from the practitioner communications device to the smartphone/tablet 2 of the vision aid device when a visual acuity test is due to be performed. This trigger signal could be generated automatically from the patient's electronic medical record, or it may be triggered manually.
In other embodiments, the visual acuity test could be triggered by significant changes in the user's interaction with the image manipulation options referenced above. For example, if the user's use (i.e. real time manipulation) of the zoom or digital magnification function of the image controls exceeds some predetermined level, a visual acuity test could be triggered, and the resultant visual acuity measurement transmitted to the practitioner communications device, to alert the medical practitioner that the patient's visual acuity might be decreasing and that treatment may be required.
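A trigger of this kind could be as simple as counting how often the magnification exceeds a level. The sketch below is one possible reading of this embodiment; the threshold, occurrence count and function name are all assumptions.

```python
# Illustrative sketch of the usage-triggered test: heavy use of the
# zoom/digital magnification function flags that a visual acuity
# test should be run. Threshold values are assumptions only.

def should_trigger_acuity_test(zoom_levels, zoom_threshold=4.0,
                               min_occurrences=3):
    """Return True if zoom usage suggests declining visual acuity.

    `zoom_levels` is a sequence of magnification levels the user
    selected during a session; the test triggers when the threshold
    is exceeded at least `min_occurrences` times.
    """
    heavy_use = sum(1 for level in zoom_levels if level > zoom_threshold)
    return heavy_use >= min_occurrences


print(should_trigger_acuity_test([2.0, 4.5, 5.0, 4.2]))  # True
print(should_trigger_acuity_test([1.5, 2.0, 4.5]))       # False
```

On a `True` result, the device would run the test flow described above and transmit the measurement to the practitioner communications device.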
In yet other embodiments of a vision aid device, a retinal imaging system may be provided in addition to, or instead of, the visual acuity test flow described above, in order to enable remote monitoring of a user’s visual acuity and any changes in their vision.
Referring to Figures 10 and 11 of the drawings, a retinal imaging system 100 may be mounted on a pair of spectacles 101 (or other head-mounted vision aid device). The imaging system is beneficially mounted outside of the user's 'normal' eyeline. Referring specifically to Figure 10 of the drawings, the retinal imaging system 100 comprises a light source 102, for example a white LED or the like, and an image capture device 103, such as a charge coupled device (CCD) array or the like. A processor 104 incorporated with the light source 102 generates control signals for the light source 102, and a processor 105 incorporated with the image capture device 103 generates control signals for the image capture device 103. The light source may include a filter (not shown) for enabling the light source 102 to emit light at one or more different wavelengths (e.g. infrared, green, etc.). The operation of the filter is controlled by the processor 104. The image capture device may receive control signals triggering an image capture event according to some predefined schedule. It may, in some embodiments, also receive control signals triggering an image capture event when a user's pupil is determined to be aligned with the optical axis of the imaging system.
A focusing lens 106 is provided at the input to the image capture device 103. A pair of mirrors 107, 108 create a 'folding' effect for light 109 from the light source and reflected light 110 returned to the image capture device 103. A first elongate mirror 107 is located at an obtuse angle (e.g. ~135°) to the optical axis of the image capture device 103 and is arranged and configured to receive light 109 from the light source 102 and reflect it through 90° toward a second elongate mirror 108. The second elongate mirror 108 is located a lateral distance from, but substantially in line with, the first mirror 107 and is arranged at an acute angle (e.g. ~45°) to the optical axis of a condensing lens 111 facing a user's eye 112 when the vision aid device 101 is in use. Thus, light 109 from the light source 102 is reflected from the first mirror 107 to the second mirror 108 and then onto the condensing lens 111 toward the user's eye 112, and light 110 reflected from the user's retina is directed from the condensing lens 111 onto the second mirror 108, reflected through substantially 90° onto the first mirror 107 and then reflected back through substantially 90° onto the image capture device 103 via the focusing lens 106. Image data thus captured may be pre-processed by the processor 105, and image data thus generated may then be transmitted to a remote server (for further processing and/or display). An example image 113 of a user's retina, as seen by the condensing lens 111 when the user's pupil is aligned with the optical axis of the condensing lens 111, is provided in Figure 10 for illustrative purposes only. It will be apparent to a person skilled in the art, from the foregoing description, that modifications and variations can be made to the described embodiments without departing from the scope of the invention as defined by the appended claims.

Claims

1. A vision aid device, comprising a head-mountable base unit having a distal end located in a user's eyeline when mounted for use, the base unit housing a retinal imaging system comprising: an imaging system comprising a light source and an image capture device located outside of the user's eyeline, and an optical element arrangement configured, in use, to direct light from the light source into the user's eyeline and to direct light from the user's eyeline to the image capture device, the retinal imaging system further comprising a processor for receiving image data captured by said image capture device and transmitting said image data, or data representative thereof, to a remote server.
2. A vision aid device according to claim 1 , wherein the processor is configured to analyse said image data to identify one or more captured images in which the user’s pupil is aligned with the optical axis of the imaging system as defined by the optical element arrangement and transmit image data representative of the one or more identified images to said remote server.
3. A vision aid device according to claim 1 , wherein the processor is configured to identify when the user’s pupil is aligned with the optical axis of the imaging system as defined by the optical element arrangement and cause said image capture device to capture an image when such alignment is identified.
4. A vision aid device according to any of the preceding claims, wherein the retinal imaging system includes means for generating a fixation target which, when a user fixes their gaze thereon in use, results in the user’s pupil being aligned with the optical axis of the imaging system as defined by the optical element arrangement.
5. A vision aid device according to any of the preceding claims, wherein the optical element arrangement comprises at least two reflective elements for directing light from said light source into the user’s eyeline, in use, and directing light reflected from the user’s retina to the image capture device.
6. A vision aid device according to claim 5, wherein the optical element arrangement comprises a pair of reflective elements or mirrors, a first mirror being configured to reflect light from the light source through substantially 90° to a second mirror, the second mirror being configured to reflect said light back through substantially 90° into the user’s eyeline.
7. A vision aid device according to claim 6, wherein the optical axis of the image capture device and the user's eyeline are substantially parallel to, and laterally spaced apart from, each other, when in use.
8. A vision aid device according to any of the preceding claims, wherein the light source is configured to generate light of a selected one or more wavelengths.
9. A vision aid device according to claim 8, wherein said light source includes a filter configured to output light of a selected one or more wavelengths.
10. A vision aid device according to any of the preceding claims, wherein the retinal imaging device includes a focusing lens configured, in use, to focus light received from the user’s retina, via said optical element arrangement, onto the image capture device.
11. A vision aid device according to any of the preceding claims, wherein the retinal imaging system comprises a condensing lens located in the user's eyeline when the vision aid is in use.
12. A vision aid device according to any of the preceding claims further comprising a housing mounted on, or integrated into the base unit and housing the retinal imaging system, the housing including a condensing lens in an outer wall thereof facing a user’s eye when the vision aid device is in use.
13. A vision aid device according to claim 12, wherein the optical element arrangement comprises a first input/output axis and a second input/output axis, the first and second input/output axes being substantially parallel to, and spaced apart from, each other, and wherein the first input/output axis is aligned with an optical axis of the image capture device and the second input/output axis is aligned with the condensing lens, such that, in use, light from the light source is directed from the first input/output axis to the second input/output axis and onto the condensing lens, and reflected light from the user’s retina is directed through the condensing lens to the second input/output axis and then onto the first input/output axis back to the image capture device.
14. A vision aid device for a low vision user, the device comprising:
• a light blocking head-mountable base unit having a distal end located in a user's eyeline when mounted for use;
• at least one screen mounted or located at said distal end;
• an image capture device arranged and configured to capture image data from a user’s environment, in use;
• a processor for receiving said image data and configured to apply image compensation thereto to generate enhanced image data and cause said enhanced image data to be displayed on the or each said screen; and
• a user input means configured to allow a user to interact with image data displayed on the or each said screen; the processor being further configured to generate, during use of said device as a vision aid, a visual acuity test flow associated with said user, obtain, from said test flow, a visual acuity measurement for said user, and transmit said visual acuity measurement to a remote server.

15. A vision aid device according to claim 14, wherein the processor is configured to present a visual acuity test on the at least one screen for completion by a user using said user input means during use of the device as a vision aid.

16. A vision aid device according to claim 15, wherein the visual acuity test comprises briefly displaying a series of characters or shapes in turn on the screen, each displayed character being smaller and/or of lower contrast to the background than the previous character, until the user indicates that they can no longer see the character sufficiently clearly on the screen, and the processor is configured to calculate a visual acuity measurement from a user indication of the smallest character that can be seen and/or the lowest contrast at which a character can be seen.

17. A vision aid device according to any of claims 14 to 16, comprising a pair of screens mounted or located side by side at said distal end of the base unit, wherein the processor is configured to present a visual acuity test flow on each said screen separately, and the vision aid comprises means for, at least when a visual acuity test flow is being presented on one screen, preventing any optical interference from the other screen.
18. A vision aid device according to claim 17, wherein the processor is configured to switch off a screen, or otherwise cause it to go black, for the duration of a visual acuity test flow being presented on the other screen.
19. A vision aid device according to any of claims 14 to 18, wherein the processor is configured to display a screen calibration test flow for completion by the user prior to displaying the visual acuity test.
20. A vision aid device according to any of claims 14 to 19, wherein the processor is configured to display a test calibration flow for completion by the user prior to displaying the visual acuity test.
21. A vision aid device according to any of claims 14 to 20, wherein the user input means is configured to generate image manipulation instructions in response to user control actions, and the processor is configured to utilise the image manipulation instructions to manipulate the displayed image data in substantially real time.
22. A vision aid device according to any of claims 14 to 21 , wherein the processor is configured to monitor image data displayed on the or each said screen to detect changes or anomalies therein, and use a said detected change or anomaly to identify a change in visual acuity of the user.
23. A vision aid device according to claim 22, wherein the processor is configured to transmit an alert to said remote server indicating that the visual acuity of the user may have changed.
24. A vision aid device according to claim 22 or claim 23, wherein the processor is configured to monitor enhanced image data displayed on the or each screen over a period of time to determine a standard pattern for said user, and to identify a significant change therein if the enhanced image data deviates by more than a predetermined threshold from said standard pattern.
25. A retinal imaging system mounted on, or integrated into, a vision aid device according to any of the preceding claims, the retinal imaging system comprising: an imaging system comprising a light source and an image capture device located outside of the user's eyeline and an optical element arrangement configured, in use, to direct light from the light source into the user's eyeline and for directing light from the user's eyeline to the image capture device, the retinal imaging system further comprising a processor for receiving image data captured by said image capture device and transmitting said image data, or data representative thereof, to a remote server.
26. A vision aid device according to any of claims 14 to 24 including a retinal imaging system according to claim 25.
27. A method performed in a communications system comprising at least one screen and a light blocking head-mountable base unit configured, in use, to be supported on a user’s head such that the or each screen is in their eyeline, the method comprising, under control of a processor of the communications system:
• capturing image data from the user’s environment;
• applying image compensation to said captured image data using image enhancement data stored in a memory of said communications system and associated with the user;
• causing said compensated image data to be displayed on the or each screen in substantially real time;
• receiving image manipulation control data from the user and manipulating the displayed image data in accordance with the control data;
• generating a visual acuity test flow associated with the user, receiving, from the user, data representative of visual acuity, using said test flow to obtain a visual acuity measurement for said user, and transmitting said visual acuity measurement to a communications server apparatus of the communications system.
28. A computer program or computer program product comprising instructions for implementing the method of claim 27.
PCT/GB2021/053205 2020-12-08 2021-12-08 Vision aid device WO2022123237A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2019278.7 2020-12-08
GBGB2019278.7A GB202019278D0 (en) 2020-12-08 2020-12-08 Vision aid device

Publications (1)

Publication Number Publication Date
WO2022123237A1

Family

ID=74175152

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2021/053205 WO2022123237A1 (en) 2020-12-08 2021-12-08 Vision aid device

Country Status (2)

Country Link
GB (1) GB202019278D0 (en)
WO (1) WO2022123237A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160135675A1 (en) * 2013-07-31 2016-05-19 Beijing Zhigu Rui Tuo Tech Co., Ltd System for detecting optical parameter of eye, and method for detecting optical parameter of eye
US20190269324A1 (en) * 2014-02-11 2019-09-05 Welch Allyn, Inc. Opthalmoscope Device
US20170000329A1 (en) * 2015-03-16 2017-01-05 Magic Leap, Inc. Augmented and virtual reality display systems and methods for determining optical prescriptions
US20190094552A1 (en) * 2017-09-27 2019-03-28 University Of Miami Digital Therapeutic Corrective Spectacles
WO2019092697A1 (en) 2017-11-07 2019-05-16 Notal Vision Ltd. Retinal imaging device and related methods
US20200069173A1 (en) * 2017-12-11 2020-03-05 1-800 Contacts, Inc. Digital visual acuity eye examination for remote physician assessment
WO2019232082A1 (en) 2018-05-29 2019-12-05 Eyedaptic, Inc. Hybrid see through augmented reality systems and methods for low vision users

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ITALIAN JOURNAL OF OPTHALMOLOGY, 1988

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116636808A (en) * 2023-06-28 2023-08-25 交通运输部公路科学研究所 Intelligent cockpit driver visual health analysis method and device
CN116636808B (en) * 2023-06-28 2023-10-31 交通运输部公路科学研究所 Intelligent cockpit driver visual health analysis method and device

Also Published As

Publication number Publication date
GB202019278D0 (en) 2021-01-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21830321

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21830321

Country of ref document: EP

Kind code of ref document: A1