WO2017186721A1 - Eye movement monitoring system - Google Patents

Eye movement monitoring system

Info

Publication number
WO2017186721A1
WO2017186721A1 (PCT/EP2017/059803)
Authority
WO
WIPO (PCT)
Prior art keywords
eye
user
saccade
controller
image
Prior art date
Application number
PCT/EP2017/059803
Other languages
French (fr)
Inventor
Christian Andreas TIEMANN
Ronaldus Maria Aarts
Murtaza Bulut
Warner Rudolph Theophile Ten Kate
Kars-Michiel Hubert Lenssen
Paul Anthony SHRUBSOLE
Chaitanya DONGRE
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V.
Publication of WO2017186721A1

Classifications

    • A: HUMAN NECESSITIES
        • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 3/00: Apparatus for testing the eyes; instruments for examining the eyes
                    • A61B 3/0091: Fixation targets for viewing direction
                    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
                        • A61B 3/028: for testing visual acuity; for determination of refraction, e.g. phoropters
                            • A61B 3/032: Devices for presenting test symbols or characters, e.g. test chart projectors
                    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
                        • A61B 3/113: for determining or recording eye movement
                • A61B 5/00: Measuring for diagnostic purposes; identification of persons
                    • A61B 5/0002: Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
                    • A61B 5/16: Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
                        • A61B 5/162: Testing reaction times
                        • A61B 5/163: by tracking eye movement, gaze, or pupil change
                        • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
                • A61B 2503/00: Evaluating a particular growth phase or type of persons or animals
                    • A61B 2503/08: Elderly
    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/16: Sound input; sound output

Definitions

  • the invention relates to an eye movement monitoring system, in particular a system for providing saccadic eye movement data.
  • Remotely managed, at-home care services for the elderly are part of a fast-growing market which enables the elderly to live independently in the comfort of their own homes, whilst still having access to medical support when needed, at a lower cost than hands-on care.
  • These services may typically include a video call system to enable remote interaction between the caregiver and elderly patient.
  • the caregiver asks targeted questions with the aim of getting a complete overview of the patient's health status. Topics addressed include, for instance, sleep, physical activity, nutrition intake, toileting behaviour, social interactions, cognition, and medication adherence.
  • Established cognitive screening instruments include the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA).
  • US 2012/0059282 discloses algorithmic methods for diagnosing declarative memory loss using mouse tracking and/or eye tracking to follow the visual gaze of a subject during participation of the subject in a visual paired comparison test. Results of this test can be used in diagnosis of dementia, mild cognitive impairment and Alzheimer's disease.
  • a visual paired comparison test involves displaying two images to a subject for a set period of time, before the images are delayed for a set delay time. Two new images then appear, one which is identical to a previously displayed image, and one which is different. A normal subject will tend to spend more time looking at a new image than an old one. A cognitively impaired patient may tend to spend equal amounts of time looking at both. By monitoring these relative lengths of time, cognitive impairment is estimated. Using eye tracking hardware, this can be done automatically.
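By way of illustration only (this computation is not part of the patent disclosure, and the function name and score interpretation are assumptions), the relative looking-time comparison used in a visual paired comparison test can be reduced to a simple novelty-preference score:

```python
# Hypothetical sketch: score the relative time spent viewing the novel image.
# A healthy subject typically scores well above 0.5; a score near 0.5
# (equal viewing of old and new images) may suggest cognitive impairment.
def novelty_preference(time_on_new_s: float, time_on_old_s: float) -> float:
    """Fraction of total viewing time spent on the novel image."""
    total = time_on_new_s + time_on_old_s
    if total <= 0:
        raise ValueError("no viewing time recorded")
    return time_on_new_s / total
```

Eye tracking hardware would supply the two durations; what score threshold should flag impairment is a clinical question outside the scope of this sketch.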
  • It would be desirable to provide a means of achieving automated or semi-automated monitoring of one or more cognitive performance indicators in a way that is non-intrusive, requires little active participation on the part of the subject or user, and is therefore readily repeatable with relatively great frequency over an extended period of time.
  • an eye movement monitoring system comprising:
  • an input unit adapted to acquire eye tracking data corresponding to a position and/or movement of a user's eye
  • a controller configured to:
  • determine, by means of the acquired input data, a saccade latency period, the saccade latency period corresponding to the time interval between the displaying of the prompt image and the movement of the user's gaze to the displayed prompt image;
  • the invention is based on the concept of measuring saccadic eye movements of users and using these measurements to monitor the progression of saccadic parameters over time. Certain saccadic parameters have been shown to be strongly correlated with cognitive functioning.
  • a saccade is a rapid simultaneous movement of the eyes which serves as a mechanism for fixation, and is controlled cortically by the frontal eye fields, or subcortically by the superior colliculus.
  • Different types of saccades exist such as the visually guided reflexive saccade, which concerns movement of the eyes triggered upon the appearance of a visual stimulus.
  • a saccade may be characterized by four parameters: amplitude, peak velocity, duration, and latency.
  • Figure 1 schematically illustrates these four parameters with reference to an exemplar graph.
  • the graph shows eye (angular) position 22 and eye (angular) velocity 24 for a typical saccade over time.
  • Arrow 30 depicts peak velocity - the maximal (angular) velocity of the eye during a saccade motion.
  • Arrows 32 indicate saccade duration: the total length of time taken to complete the saccade.
  • Arrows 34 show saccade amplitude: the total size or reach of the saccade (the total angular distance travelled by the eye during the saccade).
  • Saccade latency is known to have particular significance for cognitive functioning. For instance, Meyniel et al. observed a longer saccade latency in fronto-temporal dementia patients compared to controls (237 ⁇ 62ms vs. 183 ⁇ 25ms), when they were asked to follow a target that jumped from a central position to an unpredictable 25° right or left location (C. Meyniel, S. Rivaud-Pechoux, P. Damier, and B. Gaymard, "Saccade latency
  • Mosimann et al. performed similar experiments and observed longer latencies in Parkinson's disease dementia (PDD) patients and dementia with Lewy bodies (DLB) patients (U. P. Mosimann, R. M. Müri, D. J. Burn, J. Felblinger, J. T. O'Brien, and I. G. McKeith, "Saccadic eye movement changes in Parkinson's disease dementia and dementia with Lewy bodies", Brain, vol. 128, no. 6, pp. 1267-1276, Jun. 2005).
  • Hershey et al. also measured saccadic latencies in patients with Alzheimer's dementia and other types of dementia. The saccadic latencies for both groups were considerably longer than those for age-matched controls (L. A. Hershey, L. Whicker Jr., L. A. Abel, L. F. Dell'Osso, S. Traccis, and D. Grossniklaus, "Saccadic latency measurements in dementia", Arch. Neurol., vol. 40, no. 9, pp. 592-593, Sep. 1983).
  • the proposed invention concerns a saccade test for subjects such as elderly citizens.
  • the test is based on triggering a saccade of a subject by displaying a prompt image or other graphic feature at a spatial region some distance away from the current target of the subject's focus (termed the user's 'gaze' in this document). This distance should nonetheless be close enough to the initial target of their gaze that the image falls within the subject's peripheral vision and is noticeable.
  • a person's typical instinct is to shift the eyes to inspect the object or image, thereby triggering a saccade.
  • the subject's eye(s) can be tracked by well-known eye-tracking technologies and information relating to the position and/or movement of the eyes fed into the controller of the current system.
  • the system may comprise an integrated eye tracking unit.
  • Received position and movement data can be used to determine at least saccade latency, and may also be used to determine any or all of the above described saccade parameters.
  • the test demands no active participation, and can be integrated easily into many other display-based activities without necessarily interrupting them. These could be any of a range of enjoyable or routine activities which require use of a display device. It can for example be performed during regular video calls with a caregiver or relative. It might even be integrated with film or television viewing for instance.
  • the test enables data to be gathered which can be used to identify potential progressive changes in saccadic parameters, which may indicate a decline in cognitive health.
  • Embodiments of the invention could in example applications form part of a battery of tests performed during video calls to test different cognitive aspects. It could be a small, almost unobtrusive addition in a video call if it is embedded well.
  • Embodiments of the proposed invention have the advantage that they can be performed easily during recurring video calls with a caregiver, for example. Hence, the test can be performed on a regular basis in the comfort of a home environment. The opportunity for frequent testing also provides a means to detect gradual changes in saccadic parameters at an early stage. The test may in one or more examples be initiated by a remote professional caregiver; it does not require active participation of the elder and takes little time.
  • The test is language-independent and cannot be learned by the elder, i.e., the test does not become less effective over time.
  • Integration of the test with video calls represents just one example of a screen-based activity with which the test could be combined.
  • the test might even be combined with for example the watching of television or films. It might be initiated automatically or manually by a caregiver or even the subject/patient themselves.
  • the eye tracking data may relate to the movement and/or position of a user's eye pupil. In this case it may be the angular position or motion of the eye within the eye socket that is of interest. In other examples, translational position and/or movement of the eye may additionally or alternatively be tracked.
  • the absolute displacement of the eye relative to the display unit for instance may be the parameter of interest. This may enable embodiments to acquire data corresponding to movement (such as turning) of the head as well as the eye.
  • Well-known eye-tracking apparatus and methods are capable of capturing data representing both of these.
  • the display output may be for controlling any variety of display device including, by way of non-limiting example, a screen-based display, or a 2D or 3D projector.
  • the spatial location of the prompt image may be a spatial location on a screen for instance, or a spatial location on an incident surface in the case of a projector. It may be a spatial location in 3D space in the case of a 3D display device.
  • the controller may be adapted to generate at said first point in time an output for controlling the display device to display an initial image at an initial image spatial location, the initial image spatial location being different to the prompt image spatial location.
  • the purpose of the initial image is to draw the focus of the subject to a known spatial location, so that a prompt image may then subsequently be displayed at a different location having a precisely known displacement from the first image.
  • the prompt image may simply be overlaid on an arbitrary static or video image.
  • the controller may be adapted to define the prompt image location based on the determined initial gaze position of a user's eye.
  • This approach may be more limited in that the range of displacements at which a prompt image may be displayed is constrained by the display location(s) of the arbitrary static or video image.
  • the approach has the benefit of enabling the test to be combined with any enjoyable or routine video-based activity, wherein prompt images are overlaid atop the images displayed as part of this activity.
  • The system may further comprise:
  • a display unit in operative communication with the controller; and/or an eye tracking unit, in operative communication with the input unit, for tracking the position and/or movement of a user's eye.
  • the input unit may in embodiments be configured to acquire eye tracking data corresponding to position and/or movement of two eyes of a user and/or one or more eyes of a plurality of users. This may improve accuracy.
  • the controller may be further adapted to determine, by means of eye tracking data acquired by the input unit, one or more of the following parameters:
  • the system may further comprise a video telephony unit in operative communication with the controller, and adapted to output video telephony images for display on the display unit, wherein the controller is configured to overlay said prompt image on top of images output by the video telephony unit.
  • the video telephony unit may be configured to telephonically receive control commands for controlling the eye movement monitoring system, wherein the controller is configured to be responsive to said control commands.
  • the control commands may be used to trigger initiation of the saccade test for example. This may enable a carer or relative to choose an appropriate moment to initiate a test during a video telephone call. They might additionally or alternatively be used to control other parameters of the test, such as prompt image displacement from current gaze location (i.e. desired saccade amplitude).
  • the controller may be adapted to initiate control steps at a random or semi-random time.
  • the system may be configured to initiate the test automatically at one or more pre-determined or pre-set times. Test times could be set in advance by a carer or relative, or might be determined based on prior test results for instance.
  • the input unit may be configured to receive one or more control commands, and the controller may be configured to initiate control steps in response to receipt of said one or more control commands.
  • Control commands may be broadcast or otherwise communicated to the input unit via a remote connection. This might make use of an internet or telephone connection for instance, or any other network connection, either wired or wireless.
  • The system may further comprise a user interface unit for generating control commands in response to user input.
  • a user may control the system, including for instance initiating a saccade test and/or controlling or varying one or more other parameters of the test, such as test saccade amplitude (as discussed above).
  • a user may decide to initiate a saccade test for instance in response to noticing their own cognitive performance has changed.
  • the controller may further be adapted to generate outputs for controlling the display device to display images for facilitating visual acuity or visual ability testing. It is known that deterioration in visual ability can also have a significant impact on reaction times (i.e. saccade latency). By incorporating a visual ability test into the saccade test, visual deterioration can be controlled for in any subsequent analysis of the test results. This may prevent mere visual decline being mistaken for cognitive deterioration.
  • the controller may be configured to generate outputs for controlling the display device to display images for facilitating testing or assessment of one or more other properties or parameters of a user's vision.
  • a peripheral vision or tunnel vision test could be incorporated into or combined with the saccade test.
  • Parameters such as brightness and/or colour of a trigger image could be varied or optimised.
  • tests to assess a user's brightness or colour sensitivity could be incorporated. These tests may be used, as with the visual acuity testing, to generate data which can be combined with the saccade test data to provide a more accurate basis for making assessments of a user's changing cognitive abilities.
  • colour and/or brightness tests for instance could be used to adjust the colour and/or brightness of objects presented on the display, so as to optimise any further tests which are conducted.
  • an eye movement monitoring method which acquires eye tracking data corresponding to a position and/or movement of a user's eye, the method comprising:
  • determining an initial gaze position of a user's eye at a first point in time; controlling a display device to display a prompt image at a prompt image spatial location;
  • the method may further comprise facilitating a video telephony process, said video telephony process including controlling the display unit to display video telephony images.
  • the method may in accordance with these examples comprise controlling the display device to display the prompt image overlaid on top of the video telephony images.
  • the method may be implemented at least in part with software.
  • a computer program comprising computer program code means adapted to perform all the steps of the method when said computer program is run on a computer.
  • Figure 1 shows an exemplar graph illustrating four saccadic parameters.
  • Figure 2 shows a first example eye movement monitoring system in accordance with embodiments of the invention.
  • Figure 3 schematically illustrates a first example display output at two points in time in accordance with embodiments of the invention.
  • Figure 4 schematically illustrates a second example display output at two points in time in accordance with embodiments of the invention.
  • Figure 5 shows a second example eye movement monitoring system in accordance with embodiments of the invention.
  • Figure 6 schematically illustrates a third example display output at two points in time in accordance with embodiments of the invention.
  • Figure 7 schematically outlines a first example eye movement monitoring method in accordance with embodiments of the invention.
  • Figure 8 schematically outlines a second example eye movement monitoring method in accordance with embodiments of the invention.
  • Figure 9 shows a general computer architecture for implementing the method.
  • the invention provides eye movement monitoring systems and methods which may provide data for use in assessing and monitoring changes in cognitive capability of a user over time.
  • Embodiments are based on conducting a display-based test, designed to trigger a saccade of a user's eye(s). Eye tracking data captured throughout this test is received and used to determine one or more saccadic parameters, including saccade latency: the time delay between presentation of a prompt image and the turning of a user's eye to examine or look at the image. An output is generated based on the calculated saccadic parameters. This may be used to perform subsequent analysis of the results, and to infer cognitive progression.
  • Saccadic parameters are known to be associated with cognitive ability, and with certain age- related cognitive disorders such as Alzheimer's or dementia.
  • the test can be easily integrated or combined with any other ordinary or routine screen-based activity. In examples, a video telephony unit is further provided such that the test may be integrated into a video call with a caregiver or relative.
  • Figure 2 schematically illustrates a first example eye movement monitoring system in accordance with one or more embodiments of the invention.
  • the system comprises a controller 46 operatively coupled to an input unit 42.
  • the input unit is configured to receive eye tracking data from an eye tracking unit 44.
  • the controller is further operatively coupled to a display unit 48, and configured to output one or more control signals for controlling the display unit to display images.
  • the displayed images are designed to trigger at least one saccade of the user's eye(s) 52 from a first (initial) position, to a second position.
  • the eye tracking unit 44 is configured to track or monitor the position, orientation and/or motion of a user's eyes 52 at least throughout this at least one saccade, so that the gaze of the user can be tracked, and to relay data reflecting this information to the input unit 42.
  • the controller is configured to receive the eye tracking data from the input unit and to determine one or more characterising properties or parameters of the saccade based on these data.
  • the controller may be configured to determine at least the saccade latency based on the eye tracking data received from the eye tracking unit.
  • the saccade latency represents the delay between a prompt image being displayed, and the initiation of the saccade.
  • An output may then be generated by the controller based on these determined parameters or properties.
  • The system may be configured to be operatively coupled with an external memory, or with an external computer or other processor, for storage or analysis of data.
  • a dedicated storage medium may in examples be provided as part of the system.
  • the system may be operatively coupled to an external server or computer via a wired or wireless network link for instance, and generated parameter values output via this network link.
  • the eye tracking unit is adapted to track or monitor the position or orientation of a user's eye or eyes 52. It may for example track the position or movement of the pupil of one or more eyes. In all cases, the tracking unit is configured such that it is able to detect, monitor and/or measure the turning of a user's eye. This may include the capacity to identify the angle through which an eye turns during a saccade, relative to the user's eye socket for instance.
  • the unit may be adapted to identify an absolute rotational position of the eye, or may be adapted to identify relative changes in rotational position. In examples, the eye tracking unit may be adapted to identify both angular position and angular motion (or velocity) of one or more eyes.
  • the eye tracking unit 44 may be configured to monitor or determine translational position or motion (velocity) of a user's eye, relative to the eye tracking device (as opposed to relative to the user's eye socket for instance).
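As a rough illustration of how translational pupil displacement relates to angular eye position (the eyeball radius and the arc-length approximation below are assumptions made for this sketch, not values from the patent; calibrated eye trackers estimate this relation per user):

```python
import math

# Assumed average human eyeball radius in millimetres (illustrative only).
EYE_RADIUS_MM = 12.0

def rotation_angle_deg(pupil_displacement_mm: float) -> float:
    """Approximate eye rotation from translational pupil displacement.

    Uses the small-angle arc-length relation: displacement ≈ radius × angle,
    with the angle in radians, then converts to degrees.
    """
    return math.degrees(pupil_displacement_mm / EYE_RADIUS_MM)
```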
  • the input unit may be connected to a dedicated eye tracking unit for providing dedicated eye tracking functionality.
  • the eye tracking data may be generated and provided to the input unit by a more generic sensing element such as a camera.
  • the eye monitoring system includes an integrated eye tracking unit for providing eye tracking data to the input unit.
  • the system may be configured to be operatively coupled with auxiliary or external eye tracking hardware. This may enable the system to be 'retro-fitted' into or onto pre-existing hardware already owned by a user.
  • the system might in examples for instance be configured to be operatively coupled with an existing videophone device.
  • the pre-existing sensing and display equipment can be utilised by the system to perform the saccade test and collect the necessary data.
  • an already provided camera may be utilised to perform eye tracking.
  • the eye tracking functionality might in further examples be facilitated by for example an integrated camera of a mobile device such as a mobile phone or tablet, or a camera attachment of a domestic computational device, such as a laptop or PC.
  • the controller is adapted to control the display device to generate images intended to trigger a saccade.
  • Figure 3 schematically depicts a first set of example outputs which might be presented on a viewer-facing portion of a display device 48 at an initial (A) and later (B) time, in accordance with one or more embodiments.
  • a first image 55 is displayed near the centre of a screen of the display device. This initial image is intended to draw the gaze of the participant of the test toward the centre of the screen.
  • the first output (A) is replaced by second output (B), so that the initial image 55 disappears and a new (prompt) image 56 appears at one corner of the screen.
  • This shift is intended to trigger the participant to shift (saccade) their eye(s) from the centre of the screen to the corner, so as to focus their gaze on the new prompt image.
  • the eye tracking unit 44 is configured to track the user's eye movement throughout the saccade, such that dynamic parameters of the saccade may then be calculated by the controller 46, based on eye-tracking data output by the tracking unit 44.
  • the initial and later images are represented schematically in Figure 3 as circles. However, it will be understood that these may in general comprise any arbitrary image or visual presentation.
  • the initial 55 and/or prompt 56 images may in examples be moving images or static images.
  • the image may for example be a simple dot, for instance the size of a coin, presented against a background of contrasting colour.
  • either the initial or prompt image may flash or blink so as to attract attention. This ensures that a prompt image otherwise too far from a user's centre of gaze to be noticed, nonetheless triggers a saccade.
  • the prompt image 56 is displayed at a location of the screen diagonally displaced from the initial image 55.
  • the prompt image 56 and initial image 55 may be either vertically or horizontally displaced from one another.
  • it may be beneficial that the prompt image and initial image are horizontally displaced, so that induced eye movements are essentially horizontal. This may be desirable for example where it is preferred that a user not engage in certain head movements (e.g. moving their head upwards) when changing the direction of their gaze toward the prompt image. Such movements might, for certain examples of the executed saccade test, interfere with the generated results, since the amplitude, velocity and/or latency of the saccade may be altered by the additional head movement.
  • Figure 4 schematically depicts a second set of example outputs for a display device for triggering a user saccade.
  • Figure 4(A) shows the display at an initial time, at which an initial image 55 is displayed near the centre of the screen (as in the previous example of Figure 3).
  • the prompt image 56 is again displayed in a corner of the screen.
  • the initial image does not disappear.
  • the user may, despite the continued presence of initial image 55, be triggered to turn their eyes to inspect the novel element on the screen.
  • initial image 55 may comprise any image, either static or moving. This embodiment may then be used where it is desired that the test be integrated with some ordinary routine screen-based activity, such as a video call or watching film or television.
  • Initial image 55 may comprise images associated with these activities, such as moving images of a call recipient. In examples, these images may typically cover a larger portion of the screen than is depicted in Figure 4, including for example filling the screen completely.
  • Prompt image 56 may then simply be overlaid atop these background images. Due to the positioning of the prompt image at an extremity of the screen, it is considered unlikely that a typical user focussing on elements of the initial or background image(s) 55 will be looking in the direction of said extremity at the time of the prompt image's appearance. Therefore, a saccade will be triggered and the saccadic parameters accordingly may be measured or calculated.
  • the controller may control the screen to display only the prompt image, without displaying any initial image. This has the advantage that the participant or user is not primed to expect the appearance of the prompt image, and so its appearance is more surprising or unanticipated. Consequently, the exhibited saccade latency may more accurately reflect the capabilities of the participant.
  • the input unit may be adapted to acquire data from the eye-tracking unit corresponding to an initial gaze of the user in advance of displaying the prompt image.
  • An ideal location for displaying the prompt image on the display device may then be determined based on the known location of the user's gaze, so that the prompt image appears at a set or determined distance away from the present target of the user's gaze.
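A minimal sketch of this placement logic follows (the function names, retry strategy and pixel units are all assumptions for illustration, not details taken from the patent):

```python
import math
import random

def prompt_location(gaze_xy, screen_wh, displacement_px):
    """Pick a prompt-image position a fixed distance from the current gaze.

    Tries random directions until the candidate point lies on-screen,
    so the induced saccade has a known, controlled amplitude.
    """
    gx, gy = gaze_xy
    w, h = screen_wh
    for _ in range(100):
        angle = random.uniform(0.0, 2.0 * math.pi)
        x = gx + displacement_px * math.cos(angle)
        y = gy + displacement_px * math.sin(angle)
        if 0.0 <= x <= w and 0.0 <= y <= h:
            return (x, y)
    raise ValueError("requested displacement does not fit on the screen")
```

For a purely horizontal test saccade (as suggested elsewhere in the description for avoiding head movement), the angle could simply be fixed to 0 or π instead of drawn at random.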
  • the triggered saccade motion is monitored by an eye tracking element 44 and data representing eye position or motion is communicated to the controller 46, via input unit 42.
  • One or more characterising properties or parameters of the saccade may then be determined.
  • a saccade may be characterised by four main properties or parameters: latency 28, duration 32, amplitude 34 and peak velocity 30.
  • the controller may be configured to determine at least the saccade latency based on the eye tracking data received from the eye tracking unit.
  • Saccade latency may be determined by simply subtracting the time value at which the prompt image is displayed from the time value (as indicated by the eye-tracking data) at which the participant's eye first begins to move.
  • Duration of the saccade may be determined by subtracting the time value (as indicated by the eye-tracking data) at which the participant's eye begins to move following displaying of the prompt image, from the time value at which the participant's eye ceases to move.
  • Amplitude of the saccade may be calculated by 'subtracting' an initial position or displacement of the user's eye (before a prompt image is displayed) from a final position or displacement of the eye (after completion of the saccade).
  • the exact process by which such 'subtraction' is performed may vary. For example, where collected data comprises elements representing position vectors of a user's eye, the process may comprise first subtracting the initial position vector from the later vector, and then calculating the magnitude of the resulting vector. Alternatively, an 'angular' amplitude may instead be desired.
  • peak velocity may be determined from either position or velocity data by well-known differentiation techniques for finding maximal values.
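The determinations described above — latency, duration, amplitude and peak velocity — can be sketched as follows, assuming eye-tracking samples arrive as (time in ms, angular position in degrees) pairs and using a simple velocity threshold to detect movement onset and offset. The threshold value and data format are illustrative assumptions, not taken from the patent:

```python
def saccade_parameters(samples, t_prompt, vel_threshold=30.0):
    """Estimate the four saccade parameters from timestamped gaze samples.

    samples: list of (t_ms, angle_deg) pairs (1-D gaze angle for simplicity);
    t_prompt: time (ms) at which the prompt image was displayed;
    vel_threshold: speed (deg/s) above which the eye counts as moving.
    Returns a dict; entries are None where no saccade was detected.
    """
    onset = offset = None
    peak = 0.0
    for (t0, x0), (t1, x1) in zip(samples, samples[1:]):
        if t1 <= t_prompt:
            continue
        v = abs(x1 - x0) / (t1 - t0) * 1000.0        # finite-difference speed, deg/s
        peak = max(peak, v)
        if onset is None and v >= vel_threshold:
            onset = t0                                # eye first begins to move
        elif onset is not None and offset is None and v < vel_threshold:
            offset = t0                               # eye ceases to move
    if onset is None:
        return {"latency": None, "duration": None,
                "amplitude": None, "peak_velocity": None}
    x_start = next(x for t, x in samples if t == onset)
    x_end = next(x for t, x in samples if t == offset) if offset else samples[-1][1]
    return {
        "latency": onset - t_prompt,                  # prompt -> movement onset
        "duration": (offset - onset) if offset else None,
        "amplitude": abs(x_end - x_start),            # 'subtraction' of positions
        "peak_velocity": peak,
    }
```

For 2-D position vectors, the amplitude line would instead subtract the vectors component-wise and take the magnitude of the result, as described above.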
  • the system may comprise an integrated display unit for displaying images in accordance with the outputs generated by the controller.
  • the controller may be configured such that it may be operatively coupled to an external display device, provided separately from the system. As in the case of the eye-tracking unit, this may be preferable in cases where it is desirable that the system be retrofitted to an existing piece or set of display hardware.
  • the display device may in examples consist of any suitable display device for displaying images or other visual stimuli. It may include for instance a monitor or panel-based screen, such as a CRT or LCD display, or another kind of panel or screen-based display.
  • the display device may comprise a projector unit for projecting 2D images onto an incident surface. It may comprise a 3D display device, either a 3D panel-based display device or a 3D projection or hologram based device.
  • Other suitable visual output devices may include for example wearable output devices, such as electronic or multimedia smart glasses or smart watches.
  • an input unit is defined as a distinct component, separate from the controller.
  • this distinction may be purely functional or notional, and the input unit and the controller may in practice be provided as a single physical component.
  • the system may be configured to monitor a single eye of a single user or to monitor both eyes of a single user. In further examples, the system may be configured to monitor one or both eyes of each of a plurality of users.
  • the eye movement monitoring system may comprise a video telephony unit.
  • An example of such a system is schematically illustrated in Figure 5.
  • the system comprises a controller 46, operatively coupled to input unit 42 and display device 48.
  • An eye tracking unit 44 is connected to the input unit and configured to provide eye tracking data relating to the motion or position of a user's eye 52.
  • the system further comprises a video telephony unit 62 operatively coupled to the controller, and configured to be in connection with a telephonic communication network.
  • the video telephony unit is configured to connect with a remote host or client telephony unit via the communication network, and to simultaneously transmit and receive audio-visual signals. Received video signals are communicated to the controller which is configured to control the display unit 48 to display video images based on these signals.
  • the eye-tracking unit may be a camera unit and function simultaneously to capture video for the video-telephone call, and to track motion and position of a user's eye(s).
  • the controller is configured to execute at a determined time a saccade test in accordance for instance with one or more of the examples described above.
  • the controller may be configured to display at a given time a prompt image 56 at a distal or extremal region of the display unit 48.
  • image (A) depicts a display output at a first (initial) time, during which video telephone images 57 alone are displayed on the screen, enabling a video telephone call to be conducted.
  • Image (B) shows the screen output, upon display by the controller of a prompt image 56.
  • Video telephony images continue to be displayed as before, and the prompt image is simply overlaid on top, or combined with the telephony images.
  • the appearance of the prompt image draws the attention of the user at least momentarily away from their focus on the video telephone images, which, as illustrated in Figure 6, may be displayed and composed in such a way as to draw the focus of an average user's eyes toward the centre of the display.
  • the telephony images may in examples be expansive, so as to fill the entirety of the screen, but may be framed such that the primary object(s) of interest in the images (i.e. the other participant of the phone call) are centred in the middle of the screen.
  • the prompt image, displayed at an extremal region of the screen requires in this case a shifting of the eyes in order to be examined, and hence its appearance triggers a saccadic motion.
  • a time of initiation of the test may in accordance with examples of this embodiment be controlled by the other phone call host (e.g. the carer or relative). This may be done by transmission of one or more control signals along the telecommunication network to the video telephony unit, where it is then communicated to and interpreted by the controller.
  • the video telephony unit may further provide a mechanism or means for establishing a video call connection between a person A (the person that executes the test, e.g., the caregiver) and a person B (the person who will undergo the test, e.g., the elder).
  • the system may include a camera and microphone/speaker combination such that both person A and B can see and hear one another (for instance, similar to internet-based calling services).
  • the video telephony unit includes a mechanism to transmit and receive signals and data between person A and B.
  • the test may be performed locally by the system at the person B site.
  • a remote connection may in examples be used only to send a trigger signal to start the test, and to send back the results after the test. As saccade latencies are of the order of 200 ms, the camera used should be capable of capturing at this time resolution.
  • initiation of the test may be triggered by any of a number of other means.
  • the system may comprise one or more user interface units or elements, enabling user activation of the test, or enabling the user to control one or more parameters of the test.
  • the test may be initiated automatically by the controller, either at random times during a video telephone call or other screen-based activity, or at set, pre-determined times, for example at pre-scheduled times, scheduled to ensure regular repetition of the test in order to enable monitoring of cognitive progression (either decline or improvement) over time.
  • the system further comprises a unit configured to provide visual capability or acuity testing.
  • a subject's (saccade) reaction time may be influenced not only by the cognitive ability of the elder but also simply by their optical or visual ability. This means that in a case of progressively deteriorating vision (e.g. a developing cataract) reaction time would be expected to lengthen over time independently of any cognitive factors. Not accounting for these effects may therefore lead to false positive diagnoses of cognitive pathologies, or to diagnoses of greater severity than is in fact the case.
  • a reliable method to remotely track visual acuity during the video call may therefore be an important optional feature, enabling differentiation of "cognitive" factors from "visual" factors.
  • FIG. 7 schematically outlines a first example method in accordance with embodiments of this aspect of the invention.
  • the method steps mirror the control steps performed by the controller 46 in previously described embodiments.
  • the method comprises the first step 80 of acquiring eye tracking data at a first time t₀, followed by the second step 82 of determining, based on the acquired data, an initial gaze position r₀ of a user's eye at this first time t₀.
  • a display device is controlled 84 to display a prompt image at a prompt image spatial location r₁, and further eye tracking data is then acquired 86.
  • a saccade latency period is then determined 88, the saccade latency period corresponding to a length of time between the displaying of the prompt image and the movement of the user's gaze to the displayed image.
  • the final step 90 of the method then comprises generating an output comprising data based on said saccade latency period.
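The sequence of steps 80-88 might be orchestrated as in the following sketch; `eye_tracker.gaze()` and `display.show_prompt()` are hypothetical interfaces introduced purely for illustration, and movement onset is detected with a simple distance threshold:

```python
import math
import time

def run_saccade_test(eye_tracker, display, prompt_distance, onset_deg=1.0, timeout=2.0):
    """Sketch of method steps 80-88; the tracker and display interfaces
    are assumptions, not part of the patent."""
    r0 = eye_tracker.gaze()                        # steps 80-82: initial gaze r0
    r1 = (r0[0] + prompt_distance, r0[1])          # step 84: prompt location r1
    t_prompt = time.monotonic()
    display.show_prompt(r1)
    while time.monotonic() - t_prompt < timeout:   # step 86: further tracking
        g = eye_tracker.gaze()
        if math.hypot(g[0] - r0[0], g[1] - r0[1]) > onset_deg:
            return time.monotonic() - t_prompt     # step 88: saccade latency
    return None                                    # no saccade within timeout
```

The returned latency would then feed the output-generation step 90.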
  • FIG. 8 schematically outlines a second example method in accordance with one or more embodiments of the invention.
  • the second example method mirrors steps performed by one or more elements of the example system of Figure 5, comprising a video telephony unit.
  • the method is schematically divided into three main stages. The first, pre-test stage (steps 94 and 96) consists of a first step 94 of initiating or beginning a video call with another video-call participant, and a second step 96 of initiating the saccade test.
  • initiation of the test may be performed either locally by the test participant themselves, or remotely by the other participant of the video call (e.g. caregiver or relative). Remote initiation may involve transmission by the initiator of one or more control signals, and receipt and subsequent interpretation by the local eye monitoring system of these commands.
  • Initiation 96 may also be an automated initiation process, wherein a local controller triggers the test at a random moment, or at a moment determined to be suitable by the controller. This may be based, for example, on eye movement information. It may not be desirable, for instance, to initiate the test while the participant is not looking at the screen. It could also be based on audio data, for instance so as to ensure that the test does not interrupt a conversation.
  • Initiation of the test triggers execution of the second (test) stage 92 of the method, which comprises steps 80-88 of the method of Figure 7. These steps constitute the primary steps of performing the saccade test itself, monitoring a user's eyes during eye movement and then determining one or more parameters of the saccadic motion including for instance saccade latency.
  • a final stage of the method begins, which comprises a first step 90 of generating an output based on the determined saccadic parameters, and a final step 98 of terminating the video phone call.
  • the generated output may comprise saccadic parameter data.
  • This step may further comprise transmitting this data to a remote server via a wired or wireless communication network.
  • the data may be transmitted via the telecommunication network facilitating the video phone call.
  • the data may be transmitted to a server, host or receiver located at the site of the other participant of the phone call (e.g. the carer or relative).
  • the other participant may then perform analysis of the data. Alternatively, the data may be transmitted to a server at a different site, for example via an internet connection.
  • the data might be output or downloaded onto a local storage medium or computational or mobile device, this being configured to operatively couple (through wired or wireless connection) to the system facilitating the method.
  • the data collected enables subsequent analysis to be performed which can provide indications of cognitive decline.
  • certain saccadic parameters including saccade latency, are known to be strongly correlated with cognitive capacity. More particularly, changes in these parameters over time may provide a strong indication of changing cognitive ability. Data collected over many repetitions of the test performed over an extended period of time may then be trend analysed to determine any significant trends in the figures. Such trends may provide an indication of changes in cognitive health.
  • saccadic movements may be measured during reading.
  • the absolute reading capability of a user is not necessarily a relevant factor. However, changes in reading capability may be indicative of cognitive decline.
  • An advantage of such an approach is that a series of data is collected in a single session, so that possible system and connection delays are irrelevant. This additional functionality might be used in combination with the saccadic tests described above.
  • the reading test may be embedded unobtrusively in a video call, for example, by showing a question as text on the screen in combination with multiple-choice answers for instance.
  • eye blinks may also be detected and analysed.
  • Research shows that there is a correlation between the number of the eye blinks and the cognitive load. For example, arousal typically leads to an increase in eye blinks, whereas cognitive load typically leads to a decrease in eye blinks.
  • Monitoring such cues could be used to detect if questions presented to a user by for example a caregiver during a video call are too difficult for the subject to understand. Questions can then be adapted accordingly.
  • the type of questions asked could be adapted based on the person's arousal level. Information gathered about the arousal state and the difficulty of the questions could be used to better analyse the eye movement results, as slow saccadic movement may be a result of a low arousal state rather than of cognitive decline.
  • audio may also be used to perform tests, the results of which may enable subsequent evaluation of the cognitive state of the person.
  • audio cues could be generated using audio originating from different directions relative to the subject.
  • the audio signal could be modified so that it appears as if it is delivered from the left or right side of the person. This could be done using two speakers located on either side of the display device, or using a single speaker and applying signal processing techniques to generate directional sound.
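With two speakers, an apparent sound direction can be sketched with constant-power amplitude panning (a single-speaker arrangement would instead need filtering techniques such as HRTF processing). The gain law below is a standard audio-engineering sketch, not taken from the patent:

```python
import math

def pan_stereo(mono, azimuth):
    """Constant-power panning of a mono signal: azimuth in [-1, 1],
    -1 = full left speaker, +1 = full right speaker.
    Returns (left, right) sample lists for speakers flanking the display."""
    theta = (azimuth + 1.0) * math.pi / 4.0      # map azimuth to [0, pi/2]
    gl, gr = math.cos(theta), math.sin(theta)    # gains satisfy gl² + gr² = 1
    return ([s * gl for s in mono], [s * gr for s in mono])
```

The constant-power law keeps perceived loudness roughly uniform as the cue is moved from side to side, so latency differences between left and right cues reflect the listener, not the stimulus.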
  • a camera or eye tracking unit may be used to track the head movements of the test participant. It is known that people are likely to follow (to turn to, and to look in) the direction from which a sound appears to be originating.
  • By monitoring the head movements, the following parameters may for example be measured: (1) the time taken for the participant to notice a change in the direction of the sound, determined for instance by the time taken to change the head or torso position; (2) the head position, to check whether one ear is hearing better than the other.
  • the video call output intensity could be decreased, and the user's ability to continue to respond to questions and engage in the conversation used to assess how well they are hearing the output audio. Sounds of different frequencies can be used to perform more formal hearing tests to determine the range of frequencies that the user is able to hear. Auditory and visual cues could be presented in a congruent or incongruent manner to perform a variety of attention and cognitive tests.
  • the video call's conversation itself may simply be used to monitor the care recipient's cognitive health state. Instead of measuring the response to a distracting stimulus, such as a visual or auditory cue, the reaction time in responding to a question posed by the care giver may be measured.
  • the question is a (natural) part of the call's interview or conversation.
  • the question used to monitor the reaction time is the same in every call.
  • the reaction time may be defined in several ways.
  • One form might be the time interval between the last spoken word of the question and the first word of the response.
  • the time of the first spoken word in the question might also be used as reference time. These times can be determined from raw audio data by identifying peaks in the captured audio signal levels, and matching these up to the respective question and response utterances, for example.
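As a sketch of this measurement, the onset of the response might be located with a simple amplitude threshold on the captured audio after the question ends — a stand-in for proper voice-activity detection; the threshold value and data format are illustrative assumptions:

```python
def reaction_time(audio, sample_rate, t_question_end, threshold=0.1):
    """Estimate the interval (seconds) between the end of the question and
    the onset of the response.

    audio: normalised mono samples; t_question_end: time (s) of the last
    spoken word of the question. Returns None if no response is detected."""
    start = int(t_question_end * sample_rate)
    for i in range(start, len(audio)):
        if abs(audio[i]) >= threshold:           # first peak above threshold
            return i / sample_rate - t_question_end
    return None
```

In practice an energy envelope over short windows would be more robust to clicks and background noise than a per-sample threshold.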
  • the question is posed at a standard, fixed time in the call, for instance as the final question in the call.
  • a plurality of questions e.g. 3, could be posed.
  • An analysis may be performed on each question separately as well as on their combination, to determine a sum or average for instance.
  • the questions themselves may for example concern the health and wellbeing of the care recipient. Therefore, the actual response itself, i.e. its content, can also be used simultaneously in diagnosing the health state.
  • the duration of the response can be monitored.
  • both the speed of speaking and the extent of responding may serve as metrics for the (cognitive) trend analysis.
  • the measured times may be compiled together with those measured at previous calls.
  • the so-obtained time-series may subsequently be analysed for its trends. For a healthy participant, the mean and variance will stay constant. When cognitive decline is present, a drift may be observable. Onset of the decline (drift) can be detected using techniques known as Quickest Detection. Other sequential analysis techniques may also be applied.
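The quickest-detection step might, for example, use a one-sided CUSUM statistic over the reaction-time series, a standard quickest-detection technique; the slack and decision-threshold parameters below are illustrative:

```python
def cusum_drift(series, baseline_mean, baseline_std, k=0.5, h=5.0):
    """One-sided CUSUM for detecting upward drift (e.g. lengthening
    reaction times). Returns the index at which drift is first flagged,
    or None. k (slack) and h (threshold) are in units of baseline_std."""
    s = 0.0
    for i, x in enumerate(series):
        z = (x - baseline_mean) / baseline_std   # standardised deviation
        s = max(0.0, s + z - k)                  # accumulate excess over slack
        if s > h:
            return i                             # drift onset flagged here
    return None
```

While the measurements fluctuate around the healthy baseline, the statistic stays near zero; a sustained drift accumulates until the threshold is crossed.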
  • the trend analysis of the reaction times can be performed on the response times alone, or alternatively combined with the reaction times measured in the saccade or audio tests described above. In another embodiment, independent but identifiable events may be utilised in response time tests, such as a door bell or a ringing phone.
  • the systems or methods of the present invention may be provided or implemented by or as part of a mobile device such as a tablet or smartphone.
  • apps and games may be provided for monitoring eye movement in response to presented visual (and optionally auditory) stimuli.
  • Eye tracking may be facilitated by the front facing camera of the mobile device for instance. This data can then be used by a tracking engine to get more detailed insights into cognitive decline prediction.
  • a camera may also be used to detect the onset of pupil dilation, to determine if a task becomes too difficult.
  • the type, duration and frequency of the stimuli may be adapted automatically in response to the suspected nature of the cognitive decline, and a flag or notification may be issued for a remote video carer indicating this.
  • the system may be adapted to recommend certain apps and games to the user based not only on the user's preference, but also on whether a particular eye response is indicative of decline (e.g. recommending crosswords or Sudoku when excessive pupil dilatations are detected during cognitive tasks, or recommending reaction-based games when the eye response time deviates from an expected value over time).
  • memory related issues may also be detected in examples. It is known that distinct eye movements, pupil and medial temporal lobe responses take place during recollection.
  • Photo collections of friends or family and videos of known people can be used as a basis to detect a marked decline in memory by tagging said content with a familiarity metric (tagging means that the actual content can remain private).
  • the viewer's eye response may be analysed to determine a level of recognition. If a disparity is repeatedly detected, the remote carer system flags it for attention in a subsequent video call.
  • the system described in embodiments above makes use of a controller or processor for processing data.
  • Figure 9 illustrates an example of a computer 102 for implementing the controller or processor described above.
  • the computer 102 includes, but is not limited to, PCs, workstations, laptops, PDAs, palm devices, servers, storage devices, and the like.
  • the computer 102 may include one or more processors 104, memory 106, and one or more I/O devices 108 that are communicatively coupled via a local interface (not shown).
  • the local interface can be, for example but not limited to, one or more buses or other wired or wireless connections, as is known in the art.
  • the local interface may have additional elements, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.
  • the processor 104 is a hardware device for executing software that can be stored in the memory 106.
  • the processor 104 can be virtually any custom made or commercially available processor, a central processing unit (CPU), a digital signal processor (DSP), or an auxiliary processor among several processors associated with the computer 102, and the processor 104 may be a semiconductor based microprocessor (in the form of a microchip) or a macroprocessor.
  • the memory 106 can include any one or combination of volatile memory elements (e.g. random access memory (RAM), such as dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and non-volatile memory elements (e.g., ROM, erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), tape, compact disc read only memory (CD-ROM), disk, diskette, cartridge, cassette or the like, etc.).
  • the memory 106 may incorporate electronic, magnetic, optical, and/or other types of storage media.
  • the software in the memory 106 may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions.
  • the software in the memory 106 includes a suitable operating system (O/S) 110, compiler 112, source code 114, and one or more applications 116 in accordance with exemplary embodiments.
  • the application 116 comprises numerous functional components such as computational units, logic, functional units, processes, operations, virtual entities, and/or modules.
  • the operating system 110 controls the execution of computer programs, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services.
  • Application 116 may be a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed.
  • If the application 116 is a source program, then the program is usually translated via a compiler (such as the compiler 112), assembler, interpreter, or the like, which may or may not be included within the memory 106, so as to operate properly in connection with the operating system 110.
  • the application 116 can be written in an object oriented programming language, which has classes of data and methods, or in a procedural programming language, which has routines, subroutines, and/or functions, for example but not limited to, C, C++, C#, Pascal, BASIC, API calls, HTML, XHTML, XML, ASP scripts, JavaScript, FORTRAN, COBOL, Perl, Java, ADA, .NET, and the like.
  • the I/O devices 108 may include input devices such as, for example but not limited to, a mouse, keyboard, scanner, microphone, camera, etc. Furthermore, the I/O devices 108 may also include output devices, for example but not limited to a printer, display, etc. Finally, the I/O devices 108 may further include devices that communicate both inputs and outputs, for instance but not limited to, a network interface controller (NIC) or modulator/demodulator (for accessing remote devices, other files, devices, systems, or a network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc. The I/O devices 108 also include components for communicating over various networks, such as the Internet or intranet.
  • When the computer 102 is in operation, the processor 104 is configured to execute software stored within the memory 106, to communicate data to and from the memory 106, and to generally control operations of the computer 102 pursuant to the software.
  • the application 116 and the operating system 110 are read, in whole or in part, by the processor 104, perhaps buffered within the processor 104, and then executed.
  • a computer readable medium may be an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program for use by or in connection with a computer related system or method.

Abstract

The invention provides eye movement monitoring systems and methods which provide data for use in assessing and monitoring changes in cognitive capability of a user over time. Embodiments are based on conducting a display-based visual test, designed to trigger a saccade of a user's eye(s). Eye tracking data captured throughout this test is received by embodiments and used to determine one or more saccadic parameters, including saccade latency: the time delay between presentation of a prompt image and the turning of a user's eye to examine or look at the image. An output is generated based on the calculated saccadic parameters. This may be used to perform subsequent analysis of the results, and to infer cognitive progression.

Description

Eye movement monitoring system
FIELD OF THE INVENTION
The invention relates to an eye movement monitoring system, in particular a system for providing saccadic eye movement data.
BACKGROUND OF THE INVENTION
Remotely managed, at-home care services for the elderly are part of a fast growing market which enables the elderly to live independently in the comfort of their home, whilst still having access to medical support when needed, at a lower cost than hands-on care. These services may typically include a video call system to enable remote interaction between the caregiver and elderly patient. During a video call the caregiver asks targeted questions with the aim to get a complete overview of the patient's health status. For instance, topics that are being addressed include sleep, physical activity, nutrition intake, toileting behaviour, social interactions, cognition, and medication adherence.
One clinically interesting but difficult feature would be to detect early signs of cognitive impairments. Currently, cognitive functions are often assessed using questionnaires such as the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). However, these tests are performed maximally a few times a year and can be time consuming as they have to be administered by a professional caregiver. Furthermore, a test like the MMSE exhibits a learning effect and as such can become less effective over time. Also age and education level are known to influence the score.
One highly valuable development would be the use of algorithms and tests to infer some of these health related features automatically during for example a video call, making use of the available hardware (e.g., camera, microphone and speaker). This would be especially useful to detect gradual deteriorations which otherwise might go unnoticed by the caregiver.
US 2012/0059282 discloses algorithmic methods for diagnosing declarative memory loss using mouse tracking and/or eye tracking to follow the visual gaze of a subject during participation of the subject in a visual paired comparison test. Results of this test can be used in diagnosis of dementia, mild cognitive impairment and Alzheimer's disease. A visual paired comparison test involves displaying two images to a subject for a set period of time, before the images are removed for a set delay period. Two new images then appear, one of which is identical to a previously displayed image, and one of which is different. A normal subject will tend to spend more time looking at the new image than the old one. A cognitively impaired patient may tend to spend equal amounts of time looking at both. By monitoring these relative lengths of time, cognitive impairment can be estimated. Using eye tracking hardware, this can be done automatically.
One difficulty with these tests is that they require active participation on the part of the subject, which includes a prolonged period of focus and concentration. As a result they generally require some advance planning and co-ordination and they require setting aside a dedicated time in which to perform the test. Furthermore, they may cause undue stress or anxiety to a person taking them, in particular if the person is suffering from cognitive impairment. Due to these factors, it is also not easy to repeat these tests very regularly, for it may pose too great a burden on the patient to be required to repeat the test with great frequency. Infrequency of testing makes monitoring of gradual changes in cognitive performance difficult, and therefore makes spotting early signs of cognitive decline especially difficult.
Desirable would be a means of achieving automated or semi-automated monitoring of one or more cognitive performance indicators in a way that is non-intrusive, requires little active participation on the part of subject or user, and is therefore readily repeatable with relatively great frequency over an extended period of time.
SUMMARY OF THE INVENTION
The invention is defined by the claims.
According to an aspect of the invention, there is provided an eye movement monitoring system, comprising:
an input unit adapted to acquire eye tracking data corresponding to a position and/or movement of a user's eye; and
a controller, configured to
determine by means of the acquired eye tracking data an initial gaze position of a user's eye at a first point in time,
generate at a second point in time an output for controlling a display device to display a prompt image at a prompt image spatial location, different to the initial gaze position, acquire eye tracking data by means of the input unit corresponding to the position and/or movement of the user's eye following said displaying of the prompt image;
determine by means of the acquired input data a saccade latency period, the saccade latency period corresponding to the time interval between the displaying of the prompt image and the movement of the user's gaze to the displayed prompt image; and
generate an output comprising data based on said saccade latency period.
The invention is based on the concept of measuring saccadic eye movements of users and using these measurements to monitor the progression of saccadic parameters over time. Certain saccadic parameters have been shown to be strongly correlated with cognitive functioning.
A saccade is a rapid simultaneous movement of the eyes which serves as a mechanism for fixation, and is controlled cortically by the frontal eye fields, or subcortically by the superior colliculus. Different types of saccades exist such as the visually guided reflexive saccade, which concerns movement of the eyes triggered upon the appearance of a visual stimulus.
A saccade may be characterized by four parameters: amplitude, peak velocity, duration, and latency. Figure 1 schematically illustrates these four parameters with reference to an exemplar graph. The graph shows eye (angular) position 22 and eye (angular) velocity 24 for a typical saccade over time. Arrows 28 illustrate saccade latency. This is defined as the time between the appearance of a visual stimulus (t=0ms) and the onset of saccadic eye movement. Arrow 30 depicts peak velocity - the maximal (angular) velocity of the eye during a saccade motion. Arrows 32 indicate saccade duration: the total length of time taken to complete the saccade. Arrows 34 show saccade amplitude: the total size or reach of the saccade (the total angular distance travelled by the eye during the saccade).
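The latency and duration definitions above lend themselves to a simple segmentation of a recorded eye-velocity trace. The following sketch marks saccade onset and offset with a fixed velocity threshold; the threshold value, the 2 ms sampling interval, and the synthetic trace are illustrative assumptions, not requirements of the invention.

```python
# Detect saccade onset and offset in a sampled eye-velocity trace using a
# simple velocity threshold (the threshold and sample data are illustrative).

def detect_saccade(velocity, dt_ms, threshold=30.0):
    """Return (onset_ms, offset_ms) of the first interval in which the
    absolute angular velocity (deg/s) exceeds the threshold."""
    onset = None
    for i, v in enumerate(velocity):
        if abs(v) > threshold:
            if onset is None:
                onset = i
        elif onset is not None:
            # First sub-threshold sample after movement marks the offset.
            return onset * dt_ms, i * dt_ms
    if onset is not None:
        return onset * dt_ms, len(velocity) * dt_ms
    return None

# Synthetic trace sampled every 2 ms: stimulus at t=0, movement from 180 ms.
trace = [0.0] * 90 + [120.0, 310.0, 400.0, 310.0, 120.0] + [0.0] * 10
onset_ms, offset_ms = detect_saccade(trace, dt_ms=2)
latency_ms = onset_ms - 0          # stimulus displayed at t = 0
duration_ms = offset_ms - onset_ms
print(latency_ms, duration_ms)     # 180 10
```

With reference to Figure 1, `latency_ms` corresponds to arrows 28 and `duration_ms` to arrows 32.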
Saccade latency is known to have particular significance for cognitive functioning. For instance, Meyniel et al. observed a longer saccade latency in fronto-temporal dementia patients compared to controls (237±62 ms vs. 183±25 ms), when they were asked to follow a target that jumped from a central position to an unpredictable 25° right or left location (C. Meyniel, S. Rivaud-Pechoux, P. Damier, and B. Gaymard, "Saccade impairments in patients with fronto-temporal dementia", J. Neurol. Neurosurg. Psychiatry, vol. 76, no. 11, pp. 1581-1584, Jan. 2005).
Mosimann et al. performed similar experiments and observed longer latencies in Parkinson's disease dementia (PDD) patients and dementia with Lewy bodies (DLB) patients (U. P. Mosimann, R. M. Müri, D. J. Burn, J. Felblinger, J. T. O'Brien, and I. G. McKeith, "Saccadic eye movement changes in Parkinson's disease dementia and dementia with Lewy bodies", Brain, vol. 128, no. 6, pp. 1267-1276, Jun. 2005).
Hershey et al. also measured saccadic latencies in patients with Alzheimer's dementia and other types of dementia. The saccadic latencies for both groups were considerably longer than those for age-matched controls (L. A. Hershey, L. Whicker Jr., L. A. Abel, L. F. Dell'Osso, S. Traccis, and D. Grossniklaus, "Saccadic latency measurements in dementia", Arch. Neurol., vol. 40, no. 9, pp. 592-593, Sep. 1983).
The proposed invention concerns a saccade test for subjects such as elderly citizens. The test is based on triggering a saccade of a subject by displaying a prompt image or other graphic feature at a spatial region some distance away from the current target of the subject's focus (termed the user's 'gaze' in this document). This distance should nonetheless be close enough to the initial target of their gaze that the image falls within the subject's peripheral vision and is noticeable. On noticing the peripheral appearance of an object or image, a person's typical instinct is to shift the eyes to inspect the object or image - i.e. triggering a saccade.
The subject's eye(s) can be tracked by well-known eye-tracking technologies and information relating to the position and/or movement of the eyes fed into the controller of the current system. In particular examples, the system may comprise an integrated eye tracking unit.
Received position and movement data can be used to determine at least saccade latency, and may also be used to determine any or all of the above described saccade parameters.
The test demands no active participation, and can be integrated easily into many other display-based activities without necessarily interrupting them. These could be any of a range of enjoyable or routine activities which require use of a display device. It can for example be performed during regular video calls with a caregiver or relative. It might even be integrated with film or television viewing for instance. The test enables data to be gathered which can be used to identify potential progressive changes in saccadic parameters, which may indicate a decline in cognitive health.
Embodiments of the invention could in example applications form part of a battery of tests performed during video calls to test different cognitive aspects. It could be a small, almost unobtrusive addition to a video call if it is embedded well. Embodiments of the proposed invention have the advantage that they can be performed easily during recurring video calls with a caregiver, for example. Hence, the test can be performed on a regular basis in the comfort of a home environment. The opportunity for frequent testing also provides a means to detect gradual changes in saccadic parameters at an early stage. The test may in one or more examples be initiated by a remote professional caregiver; it does not require active participation of the elder and takes little time.
Furthermore, the test is language independent and cannot be learned by the elder, i.e., the test does not become less effective over time.
Integration of the test with video calls represents just one example of a screen- based activity which the test could be combined with. As noted, in other examples, the test might even be combined with for example the watching of television or films. It might be initiated automatically or manually by a caregiver or even the subject/patient themselves.
In examples, the eye tracking data may relate to the movement and/or position of a user's eye pupil. In this case it may be the angular position or motion of the eye within the eye socket that is of interest. In other examples, translational position and/or movement of the eye may additionally or alternatively be tracked. Here, the absolute displacement of the eye relative to the display unit for instance may be the parameter of interest. This may enable embodiments to acquire data corresponding to movement (such as turning) of the head as well as the eye. Well-known eye-tracking apparatus and methods are capable of capturing data representing both of these.
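Angular and translational measures can be related to one another when the viewing distance is known. The sketch below converts an on-screen displacement into the visual angle it subtends at the eye, under the simplifying assumption that the displacement is roughly centred on the line of sight; the function name and the example figures are illustrative only.

```python
import math

def visual_angle_deg(displacement_cm, viewing_distance_cm):
    """Angle subtended at the eye by an on-screen displacement, assuming
    the displacement is approximately centred on the line of sight."""
    return math.degrees(2 * math.atan(displacement_cm / (2 * viewing_distance_cm)))

# E.g. a prompt image displayed 26 cm from the initial gaze target, viewed
# at 60 cm, asks for a saccade of roughly 24-25 degrees.
angle = visual_angle_deg(26, 60)
print(round(angle, 1))
```

A conversion of this kind allows amplitudes measured in screen coordinates to be expressed in the angular terms used for the saccadic parameters of Figure 1.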
The display output may be for controlling any variety of display device including, by way of non-limiting example, a screen-based display, or a 2D or 3D projector. The spatial location of the prompt image may be a spatial location on a screen for instance, or a spatial location on an incident surface in the case of a projector. It may be a spatial location in 3D space in the case of a 3D display device.
According to one or more examples, the controller may be adapted to generate at said first point in time an output for controlling the display device to display an initial image at an initial image spatial location, the initial image spatial location being different to the prompt image spatial location.
The purpose of the initial image is to draw the focus of the subject to a known spatial location, so that a prompt image may then subsequently be displayed at a different location having a precisely known displacement from the first image.
In alternative examples however, the prompt image may simply be overlaid on an arbitrary static or video image. Here it may be desirable to use the acquired information concerning the initial location of the user's eye to determine a suitable location for displaying the prompt image. In particular, the controller may be adapted to define the prompt image location based on the determined initial gaze position of a user's eye.
This approach may be more limited in that the range of displacements at which a prompt image may be displayed is constrained by the display location(s) of the arbitrary static or video image. However, the approach has the benefit of enabling the test to be combined with any enjoyable or routine video-based activity, wherein prompt images are overlaid atop the images displayed as part of this activity.
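Defining the prompt image location from the determined initial gaze position could be sketched as follows. The candidate geometry, pixel units, and function names are illustrative assumptions; the invention does not prescribe any particular selection rule.

```python
# Choose a prompt-image location at a desired displacement from the user's
# current gaze point, keeping the result on-screen. The fixed candidate set
# and the preference order are illustrative assumptions.

def choose_prompt_location(gaze_xy, desired_offset_px, screen_wh):
    gx, gy = gaze_xy
    w, h = screen_wh
    # Horizontal candidates first (see the discussion of head movement),
    # then diagonal fallbacks.
    candidates = [
        (gx + desired_offset_px, gy),
        (gx - desired_offset_px, gy),
        (gx + desired_offset_px, gy + desired_offset_px),
        (gx - desired_offset_px, gy - desired_offset_px),
    ]
    on_screen = [(x, y) for x, y in candidates if 0 <= x < w and 0 <= y < h]
    if not on_screen:
        raise ValueError("desired offset does not fit on screen")
    return on_screen[0]

loc = choose_prompt_location(gaze_xy=(960, 540), desired_offset_px=600,
                             screen_wh=(1920, 1080))
print(loc)  # (1560, 540): 600 px to the right of the gaze point
```

Because the displacement is computed from the measured gaze position rather than from a fixed initial image, this rule suits the overlay approach described above.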
In accordance with one or more embodiments, the system may further comprise:
a display unit, in operative communication with the controller; and/or
an eye tracking unit, in operative communication with the input unit, for tracking the position and/or movement of a user's eye.
The input unit may in embodiments be configured to acquire eye tracking data corresponding to position and/or movement of two eyes of a user and/or one or more eyes of a plurality of users. This may improve accuracy.
In examples of the invention, the controller may be further adapted to determine, by means of eye tracking data acquired by the input unit, one or more of the following parameters:
a saccade peak velocity; and/or
a saccade amplitude; and/or
a total duration of a saccade;
and
generate an output comprising data representing said one or more determined parameters.
These terms are to be understood and interpreted in terms of the definitions outlined above with reference to Figure 1.
According to one or more embodiments, the system may further comprise a video telephony unit in operative communication with the controller, and adapted to output video telephony images for display on the display unit, wherein the controller is configured to overlay said prompt image on top of images output by the video telephony unit.
In examples of these embodiments, the video telephony unit may be configured to telephonically receive control commands for controlling the eye movement monitoring system, wherein the controller is configured to be responsive to said control commands.
The control commands may be used to trigger initiation of the saccade test for example. This may enable a carer or relative to choose an appropriate moment to initiate a test during a video telephone call. They might additionally or alternatively be used to control other parameters of the test, such as prompt image displacement from current gaze location (i.e. desired saccade amplitude).
In other examples, the controller may be adapted to initiate control steps at a random or semi-random time. The system may be configured to initiate the test automatically at one or more pre-determined or pre-set times. Test times could be set in advance by a carer or relative, or might be determined based on prior test results for instance.
In some examples, the input unit may be configured to receive one or more control commands, and the controller may be configured to initiate control steps in response to receipt of said one or more control commands. Control commands may be broadcast or otherwise communicated to the input unit via a remote connection. This might make use of an internet or telephone connection for instance, or any other network connection, either wired or wireless.
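The handling of received control commands (for example triggering a test or setting the desired saccade amplitude) might be organised as a small dispatcher. The command names, JSON encoding, and class structure below are illustrative assumptions, not part of any standardised protocol.

```python
# Minimal dispatcher for remotely received control commands. Command names
# and fields are illustrative assumptions.

import json

class SaccadeTestController:
    def __init__(self):
        self.desired_amplitude_deg = 20.0
        self.test_requested = False

    def handle_command(self, message: str):
        cmd = json.loads(message)
        if cmd["type"] == "start_test":
            self.test_requested = True
        elif cmd["type"] == "set_amplitude":
            self.desired_amplitude_deg = float(cmd["value"])
        else:
            raise ValueError(f"unknown command: {cmd['type']}")

controller = SaccadeTestController()
controller.handle_command('{"type": "set_amplitude", "value": 25}')
controller.handle_command('{"type": "start_test"}')
print(controller.desired_amplitude_deg, controller.test_requested)  # 25.0 True
```

The same dispatcher could serve commands arriving from a remote caregiver over a network connection or from a local user interface unit.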
Additionally, or alternatively, the system may further comprise a user interface unit for generating control commands in response to user input. In this way a user themself may control the system, including for instance initiating a saccade test and/or controlling or varying one or more other parameters of the test, such as test saccade amplitude (as discussed above). A user may decide to initiate a saccade test for instance in response to noticing their own cognitive performance has changed.
In accordance with one or more embodiments, the controller may further be adapted to generate outputs for controlling the display device to display images for facilitating visual acuity or visual ability testing. It is known that deterioration in visual ability can also have a significant impact on reaction times (i.e. saccade latency). By incorporating a visual ability test into the saccade test, visual deterioration can be controlled for in any subsequent analysis of the test results. This may prevent mere visual decline being mistaken for cognitive deterioration.
In one or more embodiments, the controller may be configured to generate outputs for controlling the display device to display images for facilitating testing or assessment of one or more other properties or parameters of a user's vision. In particular, a peripheral vision or tunnel vision test could be incorporated into or combined with the saccade test. Parameters such as brightness and/or colour of a trigger image could be varied or optimised. Additionally or alternatively, tests to assess a user's brightness or colour sensitivity could be incorporated. These tests may be used, as with the visual acuity testing, to generate data which can be combined with the saccade test data to provide a more accurate basis for making assessments of a user's changing cognitive abilities. Alternatively colour and/or brightness tests for instance could be used to adjust the colour and/or brightness of objects presented on the display, so as to optimise any further tests which are conducted.
In accordance with a further aspect of the invention, there is provided an eye movement monitoring method which acquires eye tracking data corresponding to a position and/or movement of a user's eye, the method comprising:
determining an initial gaze position of a user's eye at a first point in time;
controlling a display device to display a prompt image at a prompt image spatial location;
acquiring eye tracking data corresponding to the position and/or movement of the user's eye following said displaying of a prompt image;
determining by means of the acquired eye tracking data a saccade latency period, the saccade latency period corresponding to a length of time between the displaying of the prompt image and the movement of the user's gaze to the displayed image; and
generating an output comprising data based on said saccade latency period.
In examples, the method may further comprise facilitating a video telephony process, said video telephony process including controlling the display unit to display video telephony images.
The method may in accordance with these examples comprise controlling the display device to display the prompt image overlaid on top of the video telephony images.
The method may be implemented at least in part with software. In particular, according to an aspect of the invention, there is provided a computer program comprising computer program code means adapted to perform all the steps of the method when said computer program is run on a computer.
BRIEF DESCRIPTION OF THE DRAWINGS
Examples of the invention will now be described in detail with reference to the accompanying drawings, in which:
Figure 1 shows an exemplar graph illustrating four saccadic parameters;
Figure 2 shows a first example eye movement monitoring system in accordance with embodiments of the invention;
Figure 3 schematically illustrates a first example display output at two points in time in accordance with embodiments of the invention;
Figure 4 schematically illustrates a second example display output at two points in time in accordance with embodiments of the invention;
Figure 5 shows a second example eye movement monitoring system in accordance with embodiments of the invention;
Figure 6 schematically illustrates a third example display output at two points in time in accordance with embodiments of the invention;
Figure 7 schematically outlines a first example eye movement monitoring method in accordance with embodiments of the invention;
Figure 8 schematically outlines a second example eye movement monitoring method in accordance with embodiments of the invention; and
Figure 9 shows a general computer architecture for implementing the method.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The invention provides eye movement monitoring systems and methods which may provide data for use in assessing and monitoring changes in cognitive capability of a user over time. Embodiments are based on conducting a display-based test, designed to trigger a saccade of a user's eye(s). Eye tracking data captured throughout this test is received and used to determine one or more saccadic parameters, including saccade latency: the time delay between presentation of a prompt image and the turning of a user's eye to examine or look at the image. An output is generated based on the calculated saccadic parameters. This may be used to perform subsequent analysis of the results, and to infer cognitive progression. Saccadic parameters are known to be associated with cognitive ability, and with certain age- related cognitive disorders such as Alzheimer's or dementia. The test can be easily integrated or combined with any other ordinary or routine screen-based activity. In example
embodiments, a video telephony unit is further provided such that the test may be integrated into a video call with a caregiver or relative.
Figure 2 schematically illustrates a first example eye movement monitoring system in accordance with one or more embodiments of the invention. The system comprises a controller 46 operatively coupled to an input unit 42. The input unit is configured to receive eye tracking data from an eye tracking unit 44. The controller is further operatively coupled to a display unit 48, and configured to output one or more control signals for controlling the display unit to display images.
The displayed images are designed to trigger at least one saccade of the user's eye(s) 52 from a first (initial) position, to a second position. The eye tracking unit 44 is configured to track or monitor the position, orientation and/or motion of a user's eyes 52 at least throughout this at least one saccade, so that the gaze of the user can be tracked, and to relay data reflecting this information to the input unit 42. The controller is configured to receive the eye tracking data from the input unit and to determine one or more characterising properties or parameters of the saccade based on these data. In particular, the controller may be configured to determine at least the saccade latency based on the eye tracking data received from the eye tracking unit. The saccade latency represents the delay between a prompt image being displayed, and the initiation of the saccade. An output may then be generated by the controller based on these determined parameters or properties.
In particular examples, the system may be configured to be operatively coupled with an external memory, or an external computer or other processor for storage or analysis of data. A dedicated storage medium may in examples be provided as part of the system. Alternatively, the system may be operatively coupled to an external server or computer via a wired or wireless network link for instance, and generated parameter values output via this network link.
The eye tracking unit is adapted to track or monitor the position or orientation of a user's eye or eyes 52. It may for example track the position or movement of the pupil of one or more eyes. In all cases, the tracking unit is configured such that it is able to detect, monitor and/or measure the turning of a user's eye. This may include the capacity to identify the angle through which an eye turns during a saccade, relative to the user's eye socket for instance. The unit may be adapted to identify an absolute rotational position of the eye, or may be adapted to identify relative changes in rotational position. In examples, the eye tracking unit may be adapted to identify both angular position and angular motion (or velocity) of one or more eyes.
In one or more examples, the eye tracking unit 44 may be configured to monitor or determine translational position or motion (velocity) of a user's eye, relative to the eye tracking device (as opposed to relative to the user's eye socket for instance).
According to one or more examples, the input unit may be connected to a dedicated eye tracking unit for providing dedicated eye tracking functionality. Alternatively, the eye tracking data may be generated and provided to the input unit by a more generic sensing element such as a camera.
In accordance with at least one set of embodiments, the eye monitoring system includes an integrated eye tracking unit for providing eye tracking data to the input unit. However, according to a further set of embodiments, the system may be configured to be operatively coupled with auxiliary or external eye tracking hardware. This may enable the system to be 'retro-fitted' into or onto pre-existing hardware already owned by a user. The system might in examples for instance be configured to be operatively coupled with an existing videophone device. Here the pre-existing sensing and display equipment can be utilised by the system to perform the saccade test and collect the necessary data. In the case of a video phone, an already provided camera may be utilised to perform eye tracking.
Software-based methods for achieving this are already known in the art, and the skilled person would readily understand how to integrate these into embodiments of the invention.
The eye tracking functionality might in further examples be facilitated by for example an integrated camera of a mobile device such as a mobile phone or tablet, or a camera attachment of a domestic computational device, such as a laptop or PC.
In embodiments of the invention, the controller is adapted to control the display device to generate images intended to trigger a saccade. A number of possible options exist for achieving this.
Figure 3 schematically depicts a first set of example outputs which might be presented on a viewer-facing portion of a display device 48 at an initial (A) and later (B) time, in accordance with one or more embodiments. At an initial time (illustrated by Figure 3(A)), a first image 55 is displayed near the centre of a screen of the display device. This initial image is intended to draw the gaze of the participant of the test toward the centre of the screen. After some arbitrary or pre-determined time delay, the first output (A) is replaced by second output (B), so that the initial image 55 disappears and a new (prompt) image 56 appears at one corner of the screen. This shift is intended to trigger the participant to shift (saccade) their eye(s) from the centre of the screen to the corner, so as to focus their gaze on the new prompt image. The eye tracking unit 44 is configured to track the user's eye movement throughout the saccade, such that dynamic parameters of the saccade may then be calculated by the controller 46, based on eye-tracking data output by the tracking unit 44.
The initial and later images are represented schematically in Figure 3 as circles. However, it will be understood that these may in general comprise any arbitrary image or visual presentation. The initial 55 and/or prompt 56 images may in examples be moving images or static images. The image may for example be a simple dot, the size of a coin for instance, presented against a background of contrasting colour. In one or more examples, either the initial or prompt image may flash or blink so as to attract attention. This helps ensure that a prompt image otherwise too far from a user's centre of gaze to be noticed nonetheless triggers a saccade.
In the example of Figure 3, the prompt image 56 is displayed at a location of the screen diagonally displaced from the initial image 55. This is shown by way of example only, and in alternative examples the prompt image 56 and initial image 55 may be either vertically or horizontally displaced from one another. In particular examples, it may be beneficial that the prompt image and initial image are horizontally displaced, so that induced eye movements are essentially horizontal. This may be desirable for example where it is preferred that a user not engage in certain head movements (e.g. moving their head upwards) when changing the direction of their gaze toward the prompt image. Such movements might, for certain examples of the executed saccade test, interfere with the generated results, since the amplitude, velocity and/or latency of the saccade may be altered by the additional head movement.
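The initial-image/delay/prompt-image sequence of Figure 3 can be sketched as a timed routine. The display and tracker interfaces below are hypothetical placeholders (any real system would substitute its own display and eye-tracking APIs), and the delays are illustrative.

```python
# Timed sequence for the Figure 3 test: show the initial image, wait for the
# gaze to settle, then swap in the prompt image and record gaze samples.
# Display/tracker objects are hypothetical placeholders.

import time

def run_saccade_test(display, tracker, delay_s=2.0, record_s=1.0):
    display.show_image("initial", position="centre")
    time.sleep(delay_s)                    # let the gaze settle on the centre
    display.clear()
    t_prompt = time.monotonic()
    display.show_image("prompt", position="corner")
    samples = []
    while time.monotonic() - t_prompt < record_s:
        # Timestamp each gaze sample relative to prompt onset, so latency
        # and duration can be read directly from the recording.
        samples.append((time.monotonic() - t_prompt, tracker.read_gaze()))
    return t_prompt, samples

class StubDisplay:
    def show_image(self, name, position): pass
    def clear(self): pass

class StubTracker:
    def read_gaze(self): return (0.0, 0.0)

t0, samples = run_saccade_test(StubDisplay(), StubTracker(),
                               delay_s=0.01, record_s=0.02)
print(len(samples) > 0)  # True
```

Recording timestamps relative to prompt onset keeps the latency computation independent of any absolute clock.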
Figure 4 schematically depicts a second set of example outputs for a display device for triggering a user saccade. Figure 4(A) shows the display at an initial time, at which an initial image 55 is displayed near the centre of the screen (as in the previous example of Figure 3). At the later time (shown in Figure 4(B)), the prompt image 56 is again displayed in a corner of the screen. However, unlike before, the initial image does not disappear. Upon presentation of a new image, the user may, despite the continued presence of initial image 55, be triggered to turn their eyes to inspect the novel element on the screen.
In the example of Figure 4, initial image 55 may comprise any image, either static or moving. This embodiment may then be used where it is desired that the test be integrated with some ordinary routine screen-based activity, such as a video call or watching a film or television. Initial image 55 may comprise images associated with these activities, such as moving images of a phone call recipient. In examples, these images may typically cover a larger portion of the screen than is depicted in Figure 4, including for example filling the screen completely. Prompt image 56 may then simply be overlaid atop these background images. Due to the positioning of the prompt image at an extremity of the screen, it is considered unlikely that a typical user focussing on elements of the initial or background image(s) 55 will be looking in the direction of said extremity at the time of the prompt image's appearance. Therefore, a saccade will be triggered and the saccadic parameters may accordingly be measured or calculated.
In accordance with one or more further embodiments, the controller may control the screen to display only the prompt image, without displaying any initial image. This has the advantage that the participant or user is not primed to expect the appearance of the prompt image, and so its appearance is more surprising or unanticipated. Consequently, the exhibited saccade latency may more accurately reflect the capabilities of the participant.
According to one or more examples, the input unit may be adapted to acquire data from the eye-tracking unit corresponding to an initial gaze of the user in advance of displaying the prompt image. An ideal location for displaying the prompt image on the display device may then be determined based on the known location of the user's gaze, so that the prompt image appears at a set or determined distance away from the present target of the user's gaze.
As noted above, the triggered saccade motion is monitored by an eye tracking element 44 and data representing eye position or motion is communicated to the controller 46, via input unit 42. One or more characterising properties or parameters of the saccade may then be determined.
As discussed above, with reference to Figure 1 , a saccade may be characterised by four main properties or parameters: latency 28, duration 32, amplitude 34 and peak velocity 30. In embodiments of the invention, the controller may be configured to determine at least the saccade latency based on the eye tracking data received from the eye tracking unit.
Saccade latency may be determined by simply subtracting the time value at which the prompt image is displayed from the time value (as indicated by the eye-tracking data) at which the participant's eye first begins to move.
Duration of the saccade may be determined by subtracting the time value (as indicated by the eye-tracking data) at which the participant's eye begins to move following displaying of the prompt image, from the time value at which the participant's eye ceases to move.
Amplitude of the saccade may be calculated by 'subtracting' an initial position or displacement of the user's eye (before a prompt image is displayed) from a final position or displacement of the eye (after completion of the saccade). Depending on the particular mathematical character of the eye tracking data received or acquired by the input unit, the exact process by which such 'subtraction' is performed may vary. For example, where collected data comprises elements representing position vectors of a user's eye, the process may comprise first subtracting the initial position vector from the later vector, and then calculating a magnitude of the vector. Alternatively, an 'angular' amplitude may instead be desired. In this case, an angle between the two vectors may be calculated through, for example, determination of a scalar or vector product, and application of well-known formulae to derive the separating angle (e.g. a·b = |a||b|cos θ).
Finally, peak velocity may be determined from either position or velocity data by well-known differentiation techniques for finding maximal values.
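The subtraction and differentiation steps described above can be collected into a single routine. The input format (timestamps in milliseconds, angular positions in degrees) and the velocity threshold used to mark movement onset and offset are illustrative assumptions.

```python
# Compute the four saccadic parameters of Figure 1 from timestamped angular
# positions. The input format and the velocity threshold are illustrative.

def saccade_parameters(t_ms, pos_deg, t_prompt_ms, vel_threshold=30.0):
    # Finite-difference angular velocity in deg/s.
    vel = [(pos_deg[i + 1] - pos_deg[i]) / (t_ms[i + 1] - t_ms[i]) * 1000.0
           for i in range(len(t_ms) - 1)]
    moving = [i for i, v in enumerate(vel) if abs(v) > vel_threshold]
    if not moving:
        return None
    onset, offset = moving[0], moving[-1] + 1
    return {
        "latency_ms": t_ms[onset] - t_prompt_ms,        # arrows 28
        "duration_ms": t_ms[offset] - t_ms[onset],      # arrows 32
        "amplitude_deg": abs(pos_deg[offset] - pos_deg[onset]),  # arrows 34
        "peak_velocity_deg_s": max(abs(v) for v in vel),         # arrow 30
    }

# Synthetic saccade: still for 200 ms, then 20 degrees of rotation in 40 ms.
t = [0, 100, 200, 210, 220, 230, 240, 300]
p = [0.0, 0.0, 0.0, 4.0, 12.0, 18.0, 20.0, 20.0]
print(saccade_parameters(t, p, t_prompt_ms=0))
# {'latency_ms': 200, 'duration_ms': 40, 'amplitude_deg': 20.0,
#  'peak_velocity_deg_s': 800.0}
```

Real recordings would additionally need noise filtering before differentiation; that step is omitted here for clarity.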
According to one or more examples, the system may comprise an integrated display unit for displaying images in accordance with the outputs generated by the controller. However, in other examples, the controller may be configured such that it may be operatively coupled to an external display device, provided separately from the system. As in the case of the eye-tracking unit, this may be preferable in cases where it is desirable that the system be retrofitted to an existing piece or set of display hardware.
In either of these cases, the display device may in examples consist of any suitable display device for displaying images or other visual stimuli. It may include for instance a monitor or panel-based screen, such as a CRT or LCD display, or another kind of panel or screen-based display. The display device may comprise a projector unit for projecting 2D images onto an incident surface. It may comprise a 3D display device, either a 3D panel-based display device or a 3D projection or hologram based device. Other suitable visual output devices may include for example wearable output devices, such as electronic or multimedia smart glasses or smart watches.
It is noted that in above-described embodiments, an input unit is defined as a distinct component, separate from the controller. However, in practical embodiments, it is to be understood that this distinction may be purely functional or notional, and the
functionalities of the two units may in practice be provided by a single entity.
According to any contemplated embodiment, the system may be configured to monitor a single eye of a single user or to monitor both eyes of a single user. In further examples, the system may be configured to monitor one or both eyes of each of a plurality of users.
As briefly discussed above, in accordance with one or more example embodiments, the eye movement monitoring system may comprise a video telephony unit. An example of such a system is schematically illustrated in Figure 5. As in the example of Figure 2, the system comprises a controller 46, operatively coupled to input unit 42 and display device 48. An eye tracking unit 44 is connected to the input unit and configured to provide eye tracking data relating to the motion or position of a user's eye 52. As shown, the system further comprises a video telephony unit 62 operatively coupled to the controller, and configured to be in connection with a telephonic communication network. The video telephony unit is configured to connect with a remote host or client telephony unit via the communication network, and to simultaneously transmit and receive audio-visual signals. Received video signals are communicated to the controller which is configured to control the display unit 48 to display video images based on these signals.
According to these embodiments, the eye-tracking unit may be a camera unit and function simultaneously to capture video for the video-telephone call and to track motion and position of a user's eye(s).
Further to controlling the display to present the video-telephone images, the controller is configured to execute at a determined time a saccade test in accordance for instance with one or more of the examples described above. In particular, the controller may be configured to display at a given time a prompt image 56 at a distal or extremal region of the display unit 48. This is illustrated schematically in Figure 6, wherein image (A) depicts a display output at a first (initial) time, during which video telephone images 57 alone are displayed on the screen, enabling a video telephone call to be conducted. Image (B) shows the screen output, upon display by the controller of a prompt image 56. Video telephony images continue to be displayed as before, and the prompt image is simply overlaid on top, or combined with the telephony images.
The appearance of the prompt image (in an ideal scenario) draws the attention of the user at least momentarily away from their focus on the video telephone images, which, as illustrated in Figure 6, may be displayed and composed in such a way as to draw the focus of an average user's eyes toward the centre of the display. The telephony images may in examples be expansive, so as to fill the entirety of the screen, but may be framed such that the primary object(s) of interest in the images (i.e. the other participant of the phone call) are centred in the middle of the screen. The prompt image, displayed at an extremal region of the screen, requires in this case a shifting of the eyes in order to be examined, and hence its appearance triggers a saccadic motion.
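The passage above leaves open how the extremal prompt location is selected relative to the user's gaze. As a purely illustrative sketch (the function name, margin and coordinate convention are assumptions, not part of the disclosure), one could pick the screen corner farthest from the measured initial gaze position, so that the prompt reliably requires a saccade of substantial amplitude:

```python
def choose_prompt_location(gaze, screen_w, screen_h, margin=0.05):
    """Return the extremal (corner) location farthest from the gaze point.

    gaze: (x, y) initial gaze position in pixels.
    margin: fraction of each screen dimension kept clear of the edge.
    """
    mx, my = screen_w * margin, screen_h * margin
    corners = [(mx, my), (screen_w - mx, my),
               (mx, screen_h - my), (screen_w - mx, screen_h - my)]
    gx, gy = gaze
    # Farthest corner maximises the required saccade amplitude.
    return max(corners, key=lambda c: (c[0] - gx) ** 2 + (c[1] - gy) ** 2)
```

A gaze near the top-left of a 1920x1080 screen would thus yield a prompt near the bottom-right corner, and vice versa.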
A time of initiation of the test may in accordance with examples of this embodiment be controlled by the other phone call host (e.g. the carer or relative). This may be done by transmission of one or more control signals along the telecommunication network to the video telephony unit, where it is then communicated to and interpreted by the controller.
The video telephony unit may further provide a mechanism or means for establishing a video call connection between a person A (the person that executes the test, e.g., the caregiver) and a person B (the person who will undergo the test, e.g., the elder). The system may include a camera and microphone/speaker combination such that both person A and B can see and hear one another (for instance, similar to internet-based calling
applications such as Skype). The video telephony unit includes a mechanism to transmit and receive signals and data between persons A and B. To avoid delays due to communication, the test may be performed locally by the system at person B's site. A remote connection may in examples be used only to send a trigger signal to start the test, and to send back the results after the test. As saccade latencies are of the order of 200 ms, the camera used should at least be able to capture at this time resolution.
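As a rough, illustrative calculation (not part of the disclosure), the camera's frame interval bounds the timing resolution of the latency measurement, so the frame rate needed for a desired resolution follows directly:

```python
def min_frame_rate_hz(required_resolution_ms):
    """Frames per second needed so that one frame interval does not
    exceed the required timing resolution (in milliseconds)."""
    return 1000.0 / required_resolution_ms
```

At 30 frames per second, for example, the frame interval is about 33 ms, already a substantial fraction of a 200 ms latency; a higher frame rate reduces this quantisation error.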
In accordance with further examples of this or any other embodiment of the invention, initiation of the test may be triggered by any of a number of other means. The system may comprise one or more user interface units or elements, enabling user activation of the test, or enabling the user to control one or more parameters of the test. Alternatively, the test may be initiated automatically by the controller, either at random times during a video telephone call or other screen-based activity, or at set, pre-determined times, for example at pre-scheduled times, scheduled to ensure regular repetition of the test in order to enable monitoring of cognitive progression (either decline or improvement) over time.
According to one or more embodiments, the system further comprises a unit configured to provide visual capability or acuity testing. It is known that a subject's (saccade) reaction time may be influenced not only by the cognitive ability of the elder but also simply by optical or visual ability. This means that in a case of progressively deteriorating vision (e.g. a developing cataract), reaction times would be expected to lengthen over time independently of any cognitive factors. Not accounting for these effects may therefore lead to false positive diagnoses of cognitive pathologies, or to more acute diagnoses than are in fact warranted. A reliable method to remotely track visual acuity during the video call may therefore be an important optional feature, enabling differentiation of "cognitive" factors from "visual" factors.
In accordance with one or more further aspects of the invention, there are provided eye monitoring methods for determining one or more saccadic parameters based on data corresponding to eye movements captured during a saccade-trigger test. Figure 7 schematically outlines a first example method in accordance with embodiments of this aspect of the invention. The method steps mirror the control steps performed by the controller 46 in previously described embodiments. The method comprises the first step 80 of acquiring eye tracking data at a first time t0, followed by the second step 82 of determining, based on the acquired data, an initial gaze position r0 of a user's eye at this first time t0. Following this, a display device is controlled 84 to display a prompt image at a prompt image spatial location r1, and further eye tracking data is then acquired 86 corresponding to the position and/or movement of the user's eye following said displaying of the prompt image. Based on this acquired data, a saccade latency period is then determined 88, the saccade latency period corresponding to the length of time between the displaying of the prompt image and the movement of the user's gaze to the displayed image. The final step 90 of the method then comprises generating an output comprising data based on said saccade latency period.
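A minimal sketch of the latency determination in step 88, assuming timestamped gaze samples in screen coordinates (the function name, sample format and arrival radius are illustrative assumptions, not part of the disclosure):

```python
def saccade_latency_ms(samples, prompt_time_ms, prompt_pos, arrive_radius=1.0):
    """Estimate the latency between prompt onset and gaze arrival.

    samples: iterable of (timestamp_ms, (x, y)) gaze samples.
    prompt_time_ms: time at which the prompt image was displayed.
    prompt_pos: (x, y) location of the prompt image.
    arrive_radius: distance within which the gaze counts as "on" the prompt.
    Returns the latency in ms, or None if the gaze never reaches the prompt.
    """
    px, py = prompt_pos
    for t, (x, y) in samples:
        if t < prompt_time_ms:
            continue  # ignore samples recorded before the prompt appeared
        if ((x - px) ** 2 + (y - py) ** 2) ** 0.5 <= arrive_radius:
            return t - prompt_time_ms
    return None
```

In practice the arrival criterion would be tuned to the eye tracker's noise level, and the timestamps would come from the camera frames themselves.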
Optionally, the method of Figure 7 may further comprise the step of displaying an initial image at an initial position ri, the initial image being intended to draw the gaze of the user to the initial position, so that the initial gaze position r0 = ri. This then eliminates the need to determine an initial gaze position r0; the simplifying assumption may be made that the user is looking at the initial image before the prompt image is displayed. Remaining steps 84-90 may then be executed as in the preceding example.
Figure 8 schematically outlines a second example method in accordance with one or more embodiments of the invention. The second example method mirrors steps performed by one or more elements of the example system of Figure 5, comprising a video telephony unit. The method is schematically divided into three main stages. The first is a pre-test stage (steps 94 and 96), consisting of a first step 94 of initiating a video call with another video-call participant, and a second step 96 of initiating the saccade test. As in previously described embodiments, initiation of the test may be performed either locally by the test participant themselves, or remotely by the other participant of the video call (e.g. a caregiver or relative). Remote initiation may involve transmission by the initiator of one or more control signals, and receipt and subsequent interpretation of these commands by the local eye monitoring system.
Initiation 96 may also be an automated process, wherein a local controller triggers the test at a random moment, or at a moment determined by the controller to be suitable. This may be based, for example, on eye movement information: it may not be desirable, for instance, to initiate the test while the participant is not looking at the screen. It could also be based on audio data, for instance so as to ensure that the test does not interrupt a conversation.
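The initiation logic described here might be sketched as a simple predicate (the inputs, names and threshold are assumptions for illustration): the test is triggered only when the participant is looking at the screen and the audio level suggests no one is speaking:

```python
def may_initiate_test(gaze_on_screen, audio_level, silence_threshold=0.1):
    """True when conditions look suitable for triggering the saccade test.

    gaze_on_screen: whether the eye tracker currently reports an on-screen gaze.
    audio_level: normalised short-term audio level of the call.
    """
    # Defer the test while the user looks away or a conversation is ongoing.
    return gaze_on_screen and audio_level < silence_threshold
```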
Initiation of the test triggers execution of the second (test) stage 92 of the method, which comprises steps 80-88 of the method of Figure 7. These steps constitute the primary steps of performing the saccade test itself, monitoring a user's eyes during eye movement and then determining one or more parameters of the saccadic motion including for instance saccade latency.
Following completion of the test, a final stage of the method begins, which comprises a first step 90 of generating an output based on the determined saccadic parameters, and a final step 98 of terminating the video phone call. The generated output may comprise saccadic parameter data. This step may further comprise transmitting this data to a remote server via a wired or wireless communication network. The data may be transmitted via the telecommunication network facilitating the video phone call. The data may be transmitted to a server, host or receiver located at the site of the other participant of the phone call (e.g. the carer or relative), who may then perform analysis of the data. Alternatively, the data may be transmitted to a server at a different site, for example via an internet connection. In further examples, the data might be output or downloaded onto a local storage medium or computational or mobile device, this being configured to operatively couple (through a wired or wireless connection) to the system facilitating the method.
The data collected enables subsequent analysis to be performed which can provide indications of cognitive decline. As discussed above, certain saccadic parameters, including saccade latency, are known to be strongly correlated with cognitive capacity. More particularly, changes in these parameters over time may provide a strong indication of changing cognitive ability. Data collected over many repetitions of the test performed over an extended period of time may then be trend analysed to determine any significant trends in the figures. Such trends may provide an indication of changes in cognitive health.
It is emphasised that the subsequent analysis of data acquired and output by the embodiments of the present invention, at least to the point of identifying or determining any likely pathology, is not a process contemplated as being performed locally by any of the systems or methods of the present invention. Rather, any discussion of possible diagnostic analysis which might be performed on the collected data is to be understood as analysis performed subsequently to, and separately from, the methods or systems of the presently claimed invention. In accordance with any of the above described embodiments, a number of additional capabilities may also be achievable.
In one set of examples, saccadic movements may be measured during reading. The absolute reading capability of a user is not necessarily a relevant factor; however, changes in reading capability may be indicative of cognitive decline. An advantage of such an approach is that a series of data is collected in a single session, so that possible system and connection delays are irrelevant. This additional functionality might be used in combination with the saccadic tests described above. In examples, the reading test may be embedded unobtrusively in a video call, for example by showing a question as text on the screen in combination with multiple-choice answers.
According to a further set of example embodiments, eye blinks may also be detected and analysed. Research shows that there is a correlation between the number of eye blinks and cognitive load. For example, arousal typically leads to an increase in eye blinks, whereas cognitive load typically leads to a decrease. Monitoring such cues could be used to detect whether questions presented to a user, for example by a caregiver during a video call, are too difficult for the subject to understand. Questions can then be adapted accordingly.
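Blink-rate monitoring of this kind could be sketched as follows, assuming an eye-openness signal (e.g. an eye-aspect-ratio series derived from camera frames); the function names, threshold and signal format are illustrative assumptions:

```python
def count_blinks(openness, threshold=0.2):
    """Count complete blinks: dips of the openness signal below threshold."""
    blinks, closed = 0, False
    for v in openness:
        if v < threshold and not closed:
            closed = True          # eye has just closed
        elif v >= threshold and closed:
            blinks += 1            # eye re-opened: one complete blink
            closed = False
    return blinks

def blinks_per_minute(openness, sample_rate_hz, threshold=0.2):
    """Blink rate over the whole recording, in blinks per minute."""
    duration_min = len(openness) / sample_rate_hz / 60.0
    return count_blinks(openness, threshold) / duration_min
```

A rising blink rate during questioning might then be taken as an arousal cue, and a falling one as a load cue, per the correlation described above.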
In addition, the type of questions asked could be adapted based on the person's arousal level. Information gathered about the arousal state and the difficulty of the questions could be used to better analyse the eye movements results, as slow saccadic movement may be a result of a low arousal state and not as a result of a cognitive decline.
In another set of example embodiments, audio may also be used to perform tests, the results of which may enable subsequent evaluation of the cognitive state of the person. In one or more examples, audio cues could be generated using audio originating from different directions relative to the subject. For example, the audio signal could be modified so that it appears to be delivered from the left or right side of the person. This could be done using two speakers located on either side of the display device, or using a single speaker and applying signal processing techniques to generate directional sound. A camera or eye tracking unit may be used to track the head movements of the test participant. It is known that people are likely to follow (to turn to, and to look in) the direction from which a sound appears to be originating.
By monitoring the head movements, the following parameters can be measured, for example: (1) the time taken for the participant to notice differences in the direction of sound, determined for instance by the time taken to change the head or torso position; and (2) the head position, to check whether one ear is hearing better than the other.
In addition to the directionality of the sounds, their intensity (or perceived loudness) can also be altered. For instance, the video call output intensity could be decreased, and the user's ability to continue to respond to questions and engage in the conversation used to assess how well they are hearing the output audio. Sounds of different frequencies can be used to perform more formal hearing tests to determine the range of frequencies that the user is able to hear. Auditory and visual cues could be presented in a congruent or incongruent manner to perform a variety of attention and cognitive tests.
Furthermore, in accordance with any described embodiment, the video call's conversation itself may simply be used to monitor the care recipient's cognitive health state. Instead of measuring the response to a distracting stimulus, such as a visual or auditory cue, the reaction time in responding to a question posed by the caregiver may be measured. The question is a (natural) part of the call's interview or conversation. Preferably, the question used to monitor the reaction time is the same in every call.
The reaction time may be defined in several ways. One form might be the time interval between the last spoken word of the question and the first word of the response. The time of the first spoken word in the question might also be used as reference time. These times can be determined from raw audio data by identifying peaks in the captured audio signal levels, and matching these up to the respective question and response utterances, for example.
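The peak-matching approach described above might be sketched as follows, under the simplifying assumptions (not from the text) that per-frame audio levels are available and that the longest silent gap between speech frames corresponds to the question/response pause:

```python
def response_latency_ms(levels, frame_ms, speech_threshold=0.1):
    """Estimate the question->response interval from per-frame audio levels.

    levels: sequence of audio levels covering question, pause and reply.
    frame_ms: duration of one frame in milliseconds.
    Returns the duration of the longest internal silent gap, taken as the
    question/response pause, or None when no such gap exists.
    """
    speech = [i for i, v in enumerate(levels) if v >= speech_threshold]
    if not speech:
        return None
    best = None
    # The largest gap between consecutive speech frames is assumed to be
    # the pause separating the question from the response.
    for a, b in zip(speech, speech[1:]):
        gap = b - a - 1
        if gap > 0 and (best is None or gap > best):
            best = gap
    return best * frame_ms if best else None
```

A real implementation would also need to attribute speech segments to the correct speaker (question versus response), for instance via channel separation in the call.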
Preferably, the question is posed at a standard, fixed time in the call, for instance as the final question in the call. Instead of one question, a plurality of questions, e.g. three, could be posed. An analysis may be performed on each question separately as well as on their combination, to determine a sum or average for instance. The questions themselves may for example concern the health and wellbeing of the care recipient. Therefore, the actual response itself, i.e. its content, can also be used simultaneously in diagnosing the health state.
Instead of measuring the response time, the duration of the response can be monitored. In this case both the speed of speaking as well as the extent of responding may serve as metric for the (cognitive) trend analysis. The measured times may be compiled together with those measured at previous calls.
The so-obtained time series may subsequently be analysed for trends. For a healthy participant, the mean and variance would be expected to stay constant; when cognitive decline is present, a drift may be observable. Onset of the decline (drift) can be detected using techniques known as Quickest Detection. Other sequential analysis techniques may also be applied. The trend analysis can be performed on the response times alone, or alternatively combined with the reaction times measured in the saccade or audio tests described above. In another embodiment, independent but identifiable events may be utilised in response time tests, such as a door bell or a ringing phone.
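A one-sided CUSUM is a standard quickest-detection statistic for this kind of onset detection. The sketch below is illustrative only; the baseline mean, drift allowance and alarm threshold are assumptions that would in practice be calibrated per user:

```python
def cusum_onset(series, baseline_mean, allowance, threshold):
    """Return the index at which an upward drift is first flagged, else None.

    series: reaction-time measurements in call order.
    baseline_mean: expected reaction time while healthy.
    allowance: slack subtracted per sample, so noise does not accumulate.
    threshold: alarm level for the cumulative statistic.
    """
    s = 0.0
    for i, x in enumerate(series):
        # Accumulate only excess above baseline + allowance, floored at zero.
        s = max(0.0, s + (x - baseline_mean - allowance))
        if s > threshold:
            return i
    return None
```

Applied to a latency series that drifts from about 200 ms upward, the statistic stays near zero until the drift begins and then crosses the threshold within a few calls.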
According to one or more examples, the systems or methods of the present invention may be provided or implemented by, or as part of, a mobile device such as a tablet or smartphone. In this case, apps and games may be provided for monitoring eye movement in response to presented visual (and optionally auditory) stimuli. Eye tracking may be facilitated by the front-facing camera of the mobile device, for instance. This data can then be used by a tracking engine to obtain more detailed insights into cognitive decline prediction. A camera may also be used to detect the onset of pupil dilation, to determine whether a task is becoming too difficult.
The type, duration and frequency of the stimuli may be adapted automatically in response to the suspected nature of the cognitive decline, and a flag or notification issued to a remote carer indicating this. The system may be adapted to recommend certain apps and games to the user based not only on the user's preference, but also on a
determination by external analysis that a particular eye response is indicative of decline (e.g. recommending crosswords or Sudoku when excessive pupil dilatations are detected during cognitive tasks, or recommending reaction-based games when the eye response time is deviating from an expected value over time).
As well as cognitive decline detection, memory related issues may also be detected in examples. It is known that distinct eye movements, pupil and medial temporal lobe responses take place during recollection.
Photo collections of friends or family and videos of known people can be used as a basis to detect a marked decline in memory by tagging said content with a familiarity metric (tagging means that the actual content can remain private). Thus, when any familiar content is consumed (as for example detected by the front-facing camera), the viewer's eye response may be analysed to determine a level of recognition. If a disparity is repeatedly detected, the remote carer system flags it for attention in a subsequent video call.
The system described in embodiments above makes use of a controller or processor for processing data.
Figure 9 illustrates an example of a computer 102 for implementing the controller or processor described above. The computer 102 includes, but is not limited to, PCs, workstations, laptops, PDAs, palm devices, servers, storages, and the like. Generally, in terms of hardware architecture, the computer 102 may include one or more processors 104, memory 106, and one or more I/O devices 108 that are communicatively coupled via a local interface (not shown). The local interface can be, for example but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface may have additional elements, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.
The processor 104 is a hardware device for executing software that can be stored in the memory 106. The processor 104 can be virtually any custom made or commercially available processor, a central processing unit (CPU), a digital signal processor (DSP), or an auxiliary processor among several processors associated with the computer 102, and the processor 104 may be a semiconductor based microprocessor (in the form of a microchip) or a microprocessor.
The memory 106 can include any one or combination of volatile memory elements (e.g. random access memory (RAM), such as dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and non-volatile memory elements (e.g., ROM, erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), tape, compact disc read only memory (CD-ROM), disk, diskette, cartridge, cassette or the like, etc.). Moreover, the memory 106 may incorporate electronic, magnetic, optical, and/or other types of storage media. Note that the memory 106 can have a distributed architecture, where various components are situated remote from one another, but can be accessed by the processor 104.
The software in the memory 106 may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. The software in the memory 106 includes a suitable operating system (O/S) 110, compiler 112, source code 114, and one or more applications 116 in accordance with exemplary embodiments.
The application 116 comprises numerous functional components such as computational units, logic, functional units, processes, operations, virtual entities, and/or modules. The operating system 110 controls the execution of computer programs, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services.
Application 116 may be a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When a source program, the program is usually translated via a compiler (such as the compiler 112), assembler, interpreter, or the like, which may or may not be included within the memory 106, so as to operate properly in connection with the operating system 110. Furthermore, the application 116 can be written in an object-oriented programming language, which has classes of data and methods, or a procedural programming language, which has routines, subroutines, and/or functions, for example but not limited to, C, C++, C#, Pascal, BASIC, API calls, HTML, XHTML, XML, ASP scripts, JavaScript, FORTRAN, COBOL, Perl, Java, ADA, .NET, and the like.
The I/O devices 108 may include input devices such as, for example but not limited to, a mouse, keyboard, scanner, microphone, camera, etc. Furthermore, the I/O devices 108 may also include output devices, for example but not limited to a printer, display, etc. Finally, the I/O devices 108 may further include devices that communicate both inputs and outputs, for instance but not limited to, a network interface controller (NIC) or modulator/demodulator (for accessing remote devices, other files, devices, systems, or a network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc. The I/O devices 108 also include components for communicating over various networks, such as the Internet or intranet.
When the computer 102 is in operation, the processor 104 is configured to execute software stored within the memory 106, to communicate data to and from the memory 106, and to generally control operations of the computer 102 pursuant to the software. The application 116 and the operating system 110 are read, in whole or in part, by the processor 104, perhaps buffered within the processor 104, and then executed.
When the application 116 is implemented in software, it should be noted that the application 116 can be stored on virtually any computer readable medium for use by or in connection with any computer related system or method. In the context of this document, a computer readable medium may be an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program for use by or in connection with a computer related system or method. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.


CLAIMS:
1. An eye movement monitoring system, comprising:
an input unit (42) adapted to acquire eye tracking data corresponding to a position and/or movement of a user's eye (52); and
a controller (46), configured to
determine by means of the acquired eye tracking data an initial gaze position of a user's eye (52) at a first point in time,
generate at a second point in time an output for controlling a display device (48) to display a prompt image (56) at a prompt image spatial location, different to the initial gaze position, wherein the prompt image spatial location is based on the determined initial gaze position,
acquire eye tracking data by means of the input unit (42) corresponding to the position and/or movement of the user's eye (52) following said displaying of the prompt image (56);
determine by means of the acquired input data a saccade latency period, the saccade latency period corresponding to the time interval between the displaying of the prompt image (56) and the movement of the user's gaze to the displayed prompt image; and generate an output comprising data based on said saccade latency period.
2. An eye movement monitoring system as claimed in claim 1, wherein the controller (46) is adapted to generate at said first point in time an output for controlling the display device (48) to display an initial image (55) at an initial image spatial location, the initial image spatial location being different to the prompt image spatial location.
3. An eye movement monitoring system as claimed in any preceding claim, further comprising:
a display unit (48), in operative communication with the controller (46);
and/or
an eye tracking unit (44), in operative communication with the input unit (42), for tracking the position and/or movement of a user's eye (52).
4. An eye movement monitoring system as claimed in claim 3, wherein the eye tracking unit (44) comprises a camera.
5. An eye movement monitoring system as claimed in any of claims 2 to 4, wherein the controller is further configured to control the screen to display the prompt image without displaying any initial image.
6. An eye movement monitoring system as claimed in any preceding claim wherein the input unit (42) is configured to acquire eye tracking data corresponding to the position and/or movement of two eyes of a user and/or one or more eyes of a plurality of users.
7. An eye movement monitoring system as claimed in any preceding claim, wherein the controller is further adapted to determine, by means of eye tracking data acquired by the input unit (42), one or more of the following parameters:
a saccade peak velocity; and/or
a saccade amplitude; and/or
a total duration of a saccade; and
generate an output comprising data representing said one or more determined parameters.
8. An eye movement monitoring system as claimed in any preceding claim further comprising a video telephony unit (62) in operative communication with the controller (46), and adapted to output video telephony images for display on the display unit (48), wherein the controller is configured to overlay said prompt image (56) on top of images output by the video telephony unit.
9. An eye movement monitoring system as claimed in claim 8, wherein the video telephony unit (62) is configured to telephonically receive control commands for controlling the eye movement monitoring system, and wherein the controller (46) is configured to be responsive to said control commands.
10. An eye movement monitoring system as claimed in any preceding claim wherein the controller (46) is further adapted to generate outputs for controlling the display device (48) to display images for facilitating visual acuity testing.
11. An eye movement monitoring system as claimed in any preceding claim, wherein the controller (46) is adapted to initiate control steps at a random or semi-random time and/or wherein the input unit (42) is configured to receive one or more control commands, and the controller is configured to initiate control steps in response to receipt of said one or more control commands.
12. An eye movement monitoring system as claimed in any preceding claim further comprising a user interface unit for generating control commands in response to user input.
13. An eye movement monitoring method which acquires eye tracking data corresponding to a position and/or movement of a user's eye (52), the method comprising:
determining an initial gaze position of a user's eye at a first point in time; controlling a display device (48) to display a prompt image (56) at a prompt image spatial location, different to the initial gaze position, wherein the prompt image spatial location is based on the determined initial gaze position;
acquiring eye tracking data corresponding to the position and/or movement of the user's eye (52) following said displaying of a prompt image (56);
determining by means of the acquired eye tracking data a saccade latency period, the saccade latency period corresponding to a length of time between the displaying of the prompt image (56) and the movement of the user's gaze to the displayed image; and generating an output comprising data based on said saccade latency period.
14. An eye movement monitoring method as claimed in claim 13, further comprising facilitating a video telephony process, said video telephony process including controlling the display unit (48) to display video telephony images.
15. A computer program comprising computer program code means adapted to perform all the steps of claim 13 or 14 when said computer program is run on a computer.
PCT/EP2017/059803 2016-04-25 2017-04-25 Eye movement monitoring system WO2017186721A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP16166770.4 2016-04-25
EP16166770 2016-04-25

Publications (1)

Publication Number Publication Date
WO2017186721A1 true WO2017186721A1 (en) 2017-11-02

Family

ID=55913463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/059803 WO2017186721A1 (en) 2016-04-25 2017-04-25 Eye movement monitoring system

Country Status (1)

Country Link
WO (1) WO2017186721A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108714020A (en) * 2018-05-18 2018-10-30 中国人民解放军陆军军医大学第三附属医院(野战外科研究所) Portable adaptive psychological adjustment instrument, real-time eye movement delay acquiring method and the computer readable storage medium for being stored with computer program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008139137A1 (en) * 2007-05-16 2008-11-20 University Court Of The University Of Edinburgh Testing vision
US20120059282A1 (en) * 2009-03-17 2012-03-08 Emory University Internet-based cognitive diagnostics using visual paired comparison task
JP2012085746A (en) * 2010-10-18 2012-05-10 Panasonic Corp Attentional state determination system, method, computer program, and attentional state determination device
WO2013102768A1 (en) * 2012-01-05 2013-07-11 University Court Of The University Of Aberdeen An apparatus and a method for psychiatric evaluation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MARCUS WILMS ET AL: "It's in your eyes-using gaze-contingent stimuli to create truly interactive paradigms for social cognitive and affective neuroscience", SOCIAL COGNITIVE AND AFFECTIVE NEUROSCIENCE, vol. 5, no. 1, 17 March 2010 (2010-03-17), pages 98 - 107, XP055394479, ISSN: 1749-5016, DOI: 10.1093/scan/nsq024 *


Similar Documents

Publication Publication Date Title
Niehorster et al. The impact of slippage on the data quality of head-worn eye trackers
US10888222B2 (en) System and method for visual field testing
US20150282705A1 (en) Method and System of Using Eye Tracking to Evaluate Subjects
US20200073476A1 (en) Systems and methods for determining defects in visual field of a user
JP2018508254A (en) Method and system for automatic vision diagnosis
KR101983279B1 (en) Nerve disprder diagnosis apparatus and method using virtual reality
US11937877B2 (en) Measuring dark adaptation
AU2019239531A1 (en) Visual testing using mobile devices
US10709328B2 (en) Main module, system and method for self-examination of a user's eye
Ivanchenko et al. A low-cost, high-performance video-based binocular eye tracker for psychophysical research
WO2017186721A1 (en) Eye movement monitoring system
US20230282080A1 (en) Sound-based attentive state assessment
CN109464151B (en) Attention suppression capability acquisition method, device, equipment and storage medium
US20220230749A1 (en) Systems and methods for ophthalmic digital diagnostics via telemedicine
Varela et al. Looking at faces in the wild
JP6911034B2 (en) Devices and methods for determining eye movements using a tactile interface
KR20170087863A (en) Method of testing an infant and suitable device for implementing the test method
RU2454166C2 (en) Device of interactive assessment of visual, perceptive and cognitive abilities of person
WO2022232414A9 (en) Methods, systems, and related aspects for determining a cognitive load of a sensorized device user
JP2024512045A (en) Visual system for diagnosing and monitoring mental health
US20210353208A1 (en) Systems and methods for automated passive assessment of visuospatial memory and/or salience
WO2022239792A1 (en) Subject analysis device
US20230259203A1 (en) Eye-gaze based biofeedback
EP4056101A1 (en) Method and device for determining a visual performance
JP2017184996A (en) Device and program for determining brain activity amount by enlargement of pupil diameter

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17720077

Country of ref document: EP

Kind code of ref document: A1

122 EP: PCT application non-entry in European phase

Ref document number: 17720077

Country of ref document: EP

Kind code of ref document: A1