GB2599417A - Evaluation method and system - Google Patents

Evaluation method and system

Info

Publication number
GB2599417A
GB2599417A
Authority
GB
United Kingdom
Prior art keywords
input
user
display
input area
area
Prior art date
Legal status
Withdrawn
Application number
GB2015542.0A
Other versions
GB202015542D0 (en)
Inventor
Bojan Kartheka
Current Assignee
Brainberry Ltd
Original Assignee
Brainberry Ltd
Priority date
Filing date
Publication date
Application filed by Brainberry Ltd filed Critical Brainberry Ltd
Priority to GB2015542.0A
Publication of GB202015542D0
Publication of GB2599417A
Legal status: Withdrawn

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1121 Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B5/1122 Determining geometric values, e.g. centre of rotation or angular range of movement, of movement trajectories
    • A61B5/1124 Determining motor skills
    • A61B5/1125 Grasping motions of hands
    • A61B5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076 Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088 Diagnosing of monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/7475 User input or interface means, e.g. keyboard, pointing device, joystick


Abstract

A system and method for measuring cognitive behaviour or dexterity of a user comprises: a task module presenting 42 one or more challenges to the user, solvable by a manual input; a capture module to receive 44 a manual input during user performance of the challenge(s); and an evaluation module to establish 46 a correlation between the manual input and the challenge(s). The capture module has a display to present 32 an input area, configured to display one or more input fields capable of receiving a manual input. A calibration module sets 38 an input area size to a predetermined physical size other than the size of the display. The input area size may be altered to maintain the size during the user performance. The display may include a frame surrounding the input area. The challenge(s) may comprise instructions for the user to follow a path, or to move an object to a target area, indicated in the input area. The system may measure a user input profile, the duration of user input, a number of discrete user input events, or a cessation of a user input, on the input area. The system may generate an assessment output based on the correlation.

Description

Evaluation method and system
Field of the Invention
The present invention relates to an assessment method and system for measuring cognitive behaviour in a user. More specifically, the present invention relates to a tool for measuring the dexterity and hand-eye coordination of a user during a task requiring manual input, particularly with a touch display, wherein dexterity may be used as a surrogate measure for hand-eye coordination and/or for cognitive behaviour.
Background
Assessment of cognitive function may involve user tasks presented via a computer screen to a user. The user's performance of the task may be indicative of the cognitive function. An evaluation of user performance may be used in the assessment, monitoring and scoring of cognitive performance, and may allow the presence of cognitive decline to be determined.
International Patent Application No PCT/GB2019/050987 by the present applicant discloses a system suitable for monitoring cognitive performance during assessment tasks presenting user-specific data, such as personal photographs.
The present invention seeks to provide additional options for use in cognitive health monitoring.
Summary of the Invention
In accordance with a first aspect of the invention, there is provided a system as defined in claim 1, for measuring cognitive behaviour or dexterity of a user. The system comprises a task module presenting one or more challenges to the user, the challenges solvable by a manual user input; a capture module configured to receive a manual user input during user performance of the one or more challenges; and an evaluation module configured to establish a correlation between the manual user input and the one or more challenges. The capture module comprises a display having a display size, wherein the system is configured to use the display to present an input area having an input area size, the input area configurable to display one or more input fields capable of receiving a manual user input. The system further comprises a calibration module configured to set the input area size to a predetermined physical size other than the display size.
It will be understood that, by "display size", is meant the physical size of a display that can be used for displaying content. As one example, a display size of a typical tablet computer may be 19.7 cm x 14.8 cm. Conventionally, many computer applications are displayed at full screen size, i.e. for the given example at 19.7 cm x 14.8 cm.
Computer applications may also be displayed at a portion of the available display size, for instance to allow two windows to be displayed side-by-side, or to display legacy applications, e.g. at a fixed size of 640 x 480 pixels. Such applications may be displayed with a fixed pixel size; however, the physical size occupied by the pixel area will depend on the physical display size and pixel density. As another example, a display size may be the size of a display projected onto a surface. Such projected displays may include a distance-measuring capability to estimate the size and shape of the projection on the surface.
By pre-determined physical size, it is meant that the system is configured to display an input area in a portion of the display area measuring fixed distances. For instance, the fixed physical size may be an area of 10 cm x 18 cm.
The fixed, pre-determined physical size may be determined from a resolution or from a display density of the display. To provide an illustrative example, a display may have a resolution of 2048 x 1536 pixels at 264 pixels per inch. In this example, in order to provide a physical input area of 18 cm x 10 cm, the display may be controlled to utilise a part of the full display area, in this case an area of 1870 x 1039 pixels, as the input area, providing an area of 18 cm x 10 cm at 264 pixels per inch (1 inch = 2.54 cm). As such, a pre-determined physical size is understood to be implemented by setting an input area size taking into account the display resolution. In other words, a higher resolution display may be controlled to display an area comprised of more pixels to provide the same physical size input area as a lower resolution display with fewer pixels.
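The conversion described above can be sketched as follows (an illustrative Python fragment; the function name and the truncating rounding are assumptions, not taken from the claims):

```python
# Convert a required physical input area (cm) into a pixel region for a
# display of known pixel density (pixels per inch).
CM_PER_INCH = 2.54

def input_area_pixels(width_cm: float, height_cm: float, ppi: float) -> tuple[int, int]:
    """Return (width_px, height_px) spanning the requested physical size.

    Truncation is used so the area never exceeds the physical target.
    """
    width_px = int(width_cm / CM_PER_INCH * ppi)
    height_px = int(height_cm / CM_PER_INCH * ppi)
    return width_px, height_px

print(input_area_pixels(18, 10, 264))  # → (1870, 1039), matching the example above
```

The same call with the pixel density of a different device yields a different pixel area occupying the same physical size, which is the calibration the description relies on.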
Systems for measuring cognitive behaviour or dexterity in a user may comprise software presenting a task or challenge to a user that can be solved by a user input. It was an observation underlying the present invention that, particularly when measuring cognitive performance, there may be an influence on the cognitive performance measurement by the type of device used for presenting the task or challenge. In other words, the same person with the same cognitive behaviour may obtain a different score when carrying out the same tests on different devices.
A further observation underlying the present invention was that a variation in user performance could be reduced by standardising the size of the input field. For instance, a tablet computer with a display area of 19.7 cm x 14.8 cm (a 9.7 inch display) may lead to different performance compared to a tablet computer with a display area of 15.96 cm x 11.97 cm (a 7.9 inch display). However, by using the 9.7 inch display to present a task area dimensioned according to the physical size of the 7.9 inch display, the inventor observed a decrease in the difference between cognitive scores, such as duration-to-task-completion.
Underlying the present invention is the appreciation that, by setting the input area to a predetermined physical size, this will allow a user to perform cognitive tests on different devices without this creating a disproportionate influence on the performance score.
This facilitates updating devices and providing software in the form of server-based applications, reducing the dependence on the type of end user device. The invention also allows a user to perform challenges or tests on different devices they may own, in particular devices with different screen sizes, for instance to use a secondary device while a primary device is charging or otherwise unavailable.
The predetermined physical size also allows a standardised input field size to be provided. A standardised input field size is believed to facilitate the use of data from different target patient populations, to use target population data as reference data and baseline data. This in turn is believed to better differentiate user input from different target patient populations with a higher degree of granularity. For instance, early test results show that a standardised input field size, when used on devices with different screen sizes and aspect ratios, may allow diagnosis and behavioural analysis (i.e. a distinction to be made between the response patterns) for users with mild cognitive impairment (MCI) and early dementia.
In some embodiments, the system comprises a configuration for altering the input area size to maintain the input area size constant during the user performance.
The system may be configured to check in regular intervals, and/or at certain steps during the presentation of a challenge, whether or not the area size corresponds to the predetermined physical size. Alternatively, the system may set the input area size upon start (launch) of an application, or upon start (launch) of a challenge within the application.
In some embodiments, the system is configured to receive as a configuration input the display size and to set the predetermined physical size in proportion to the configuration input, wherein, optionally, the configuration input is determined from system data of a device providing the display.
In systems that store a display configuration, the resolution and/or available maximum display area may be derived from system data, to be used as a configuration input for the display size. Alternatively or concurrently, the system may be configured to receive the display size as a user input. For instance, the display may be projected onto a surface, and a user may be asked to confirm the size of the projected area. The system may be configured to display a calibration symbol, such as an 'L'-shaped angle. A user may be asked to confirm the size of the calibration symbol.
In some embodiments, the system comprises a configuration to record the input area size set during the manual user input.
This allows a record of a user performance to be standardised or calibrated against a particular display size at which the user performance was carried out. The input area size may be recorded as part of other data collected during user performance of the challenge.
In some embodiments, the calibration module is configured to alter the input area size without altering an aspect ratio of the input area size by more than 5%, 2%, 1%, or without altering the aspect ratio.
This allows a certain aspect ratio, e.g. 4:3 or 16:10, to be used on different devices. It will be appreciated that some display types may have fixed pixel density and so it may not be possible to provide a mathematically exact input area size mapped to a given pixel array. However, in that case it may be acceptable to tolerate a small deviation from the aspect ratio, e.g. up to 5%. Alternatively or concurrently, it may be acceptable to pad one or more sides of an input area with rows of pixels.
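A tolerance check of the kind described can be sketched as follows (illustrative Python; the function name is an assumption):

```python
def aspect_deviation(width_px: int, height_px: int, target_ratio: float) -> float:
    """Fractional deviation of the realised aspect ratio from the target;
    a result below 0.05 satisfies the 5% tolerance mentioned above."""
    return abs((width_px / height_px) / target_ratio - 1.0)

# The 1870 x 1039 pixel area from the earlier example against an 18:10 target:
print(aspect_deviation(1870, 1039, 18 / 10) < 0.05)  # → True
```

If the deviation exceeds the tolerance, the alternative described next (padding one or more sides with rows of pixels) can bring the active area back to the target ratio.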
In some embodiments, the system is configured to display a frame surrounding the input area, the frame being visually distinguishable from the input area.
In some embodiments, one or more sides of the frame are padded to alter an aspect ratio of the input area.
By padding, it is meant that one or more edges of the input area are used to present an inactive area, for instance black pixels, to allow a more precise control over the input area size. The padding may utilise sub-pixel resolution techniques.
The presentation of a frame is believed to assist the human eye and/or cognition to focus on the input area.
The input area may be arranged in the centre of the display. Typical displays are rectangular, with a longer side and a shorter side. The input area may be centred along a longer side of the display, i.e. horizontally centred in landscape orientation. The input area may be centred along a shorter side of the display, i.e. vertically centred in landscape orientation.
In some embodiments, the system is configured to deactivate, or to let appear disabled, an area of the display outside the input area.
Depending on the type of display used, the display is deactivated, or controlled to appear deactivated, for instance black, or displaying a pattern or image.
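Centring a fixed-size input area on a larger display, with the surrounding margin forming the inactive area or frame, reduces to a simple offset calculation (a minimal sketch with illustrative names):

```python
def centre_offsets(display_px: tuple[int, int], area_px: tuple[int, int]) -> tuple[int, int]:
    """Top-left (x, y) offset that centres the input area on the display;
    the remaining margin can be rendered as the inactive area or frame."""
    display_w, display_h = display_px
    area_w, area_h = area_px
    return (display_w - area_w) // 2, (display_h - area_h) // 2

print(centre_offsets((2048, 1536), (1870, 1039)))  # → (89, 248)
```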
In some embodiments, the one or more challenges comprise instructions for a user to follow a path indicated in the input area.
The path may be indicated visually by a traceable line or pattern. The path may be indicated by one or more start points and one or more end points only, challenging a user to trace a shortest distance between a start point and an end point.
In some embodiments, the one or more challenges comprise instructions for a user to move an object to a target area indicated in the input area.
The target area may be constituted by one of the input fields within the input area. The instructions may be constituted by one or more objects located in a start field region and/or by one or more regions identifiable as a target field.
In some embodiments, the one or more challenges comprise instructions to move the object to one or more of a plurality of target areas displayed in the input area.
In some embodiments, the display is a touch display.
This allows drag-and-drop operations as user input.
In some embodiments, the system is configured to measure a user input profile of the user input on the input area.
In some embodiments, the system is configured to measure a duration of the user input on the input area.
The user input profile may be measured in the form of location coordinates within the input area. For instance, the user input profile may be measured relative to a distance from the starting point of a user input.
The user input profile may be measured in relation to the duration of a user input. For instance, the user input profile may be measured as a function of time since the start of a user input.
A pressure profile, as one form of user input profile, may be measured as a function of the location and the duration of a user input.
The user input profile may be measured by a surrogate pressure indicator or by a surrogate touch indicator, such as the size of a user input on a touch field. For instance, a touch display may measure a capacitive response. A change in size of a touch area may be indicative of a user applying a different level of pressure, e.g. a larger fingertip touch area may be indicative of stronger pressure.
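Assuming a touch framework that reports a contact-area estimate per sample (an assumption; touch APIs differ in what they expose), the surrogate pressure idea can be sketched as:

```python
def surrogate_pressure(contact_areas: list[float]) -> list[float]:
    """Normalise a sequence of fingertip contact areas against the first
    sample; values above 1.0 suggest the user is pressing harder."""
    baseline = contact_areas[0]
    return [area / baseline for area in contact_areas]

print(surrogate_pressure([50.0, 55.0, 60.0]))  # → [1.0, 1.1, 1.2]
```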
A variation in the user input profile may be indicative of varying dexterity.
In some embodiments, the system is configured to measure simultaneously a number of discrete user input events on the input area.
In some embodiments, the system is configured to measure a cessation of a user input in the input area.
This allows a user input profile (or pressure profile) to be obtained corresponding to a drag-and-drop input. For instance, the system may measure the removal of a touch input from a touch display.
In some embodiments, the system is configured to measure a user input profile and cessation of a user input during user performance on an input field within the input area.
The user input profile may be a touch profile, e.g. a profile of changes in touch strength or a surrogate pressure measure, such as fingertip area.
The input field may be the target area. This configuration provides an indication of how long a user requires to end their input once a target has been reached, e.g. to remove a finger from a touch display at the end of a path. A dwell time within a target area may be indicative of hesitation or disorientation.
In some embodiments, the system is configured to generate an assessment output based on the correlation between the manual user input and the one or more challenges.
The assessment output may be a score, classification, grouping, performance indicator, or other measurement.
In some embodiments, the assessment output takes into account one or more of a user input profile of the user input, a duration of the user input, a number of successive user inputs, a number of simultaneous user inputs, a cessation of a user input, and/or linearity of a user input.
The system may be configured to take into account other factors, such as linearity, bias, velocity, direction, and complexity.
By linearity it is meant that the system may be configured to determine how closely a user input (e.g., a drag-and-drop route) matches a pre-defined path between a starting point and a target. The pre-defined path may be a shortest path. The pre-defined path may be a route presented to a user as part of the challenge. For instance, a direct straight line may correspond to a high degree of linearity. As another example, a meandering or zig-zag line along a direct path may correspond to a lower degree of linearity.
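One way to quantify linearity, consistent with the description above though not prescribed by it, is the ratio of the direct distance to the traced stroke length:

```python
import math

def linearity(points: list[tuple[float, float]]) -> float:
    """Ratio of straight-line distance to traced path length, in (0, 1];
    1.0 is a perfectly direct stroke, lower values a meandering one."""
    traced = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    return direct / traced if traced else 1.0

print(linearity([(0, 0), (1, 0), (2, 0)]))  # → 1.0 (direct stroke)
print(round(linearity([(0, 0), (1, 1), (2, 0)]), 3))  # → 0.707 (zig-zag)
```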
A bias may be observed over the course of several challenges, for instance a preference to drag an item over a side (left or right) of the screen, which may be indicative of a more active brain hemisphere. For instance, a task may involve dragging a symbol located at a starting area in the top centre of a screen to a target area in the bottom centre of a screen. A direct path for the symbol might be a straight line from the top centre starting area to the bottom centre target area. A user may consistently drag the symbol in a curved stroke over towards the left. Such behaviour may be indicative of a stronger activity of the right brain hemisphere. It will be appreciated that such a behaviour may become observable after a series of different challenges that include starting areas on opposite sides of the target field.
By velocity, the stroke speed is meant, wherein the system may be configured to take into account whether a user input, such as a drag-and-drop stroke, is carried out at a constant speed, at increasing or decreasing speed, or at alternately increasing and decreasing speed.
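A classification of this kind could be sketched as follows (illustrative; sampling stroke movement at a fixed interval is an assumption):

```python
def speed_pattern(sample_distances: list[float], dt: float) -> str:
    """Classify per-sample stroke speeds as constant, increasing,
    decreasing, or alternating, from distances moved per sample interval dt."""
    speeds = [d / dt for d in sample_distances]
    diffs = [b - a for a, b in zip(speeds, speeds[1:])]
    if all(abs(d) < 1e-9 for d in diffs):
        return "constant"
    if all(d >= 0 for d in diffs):
        return "increasing"
    if all(d <= 0 for d in diffs):
        return "decreasing"
    return "alternating"

print(speed_pattern([1.0, 2.0, 3.0], dt=0.1))  # → increasing
```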
The direction of a user input may take into account whether a user moves their input finger or gesture forwards and backwards during a user input.
Complexity may take into account other patterns, e.g. if a user traces the screen edges, thereby avoiding a central area of a screen, instead of choosing a direct path.
In some embodiments, the system is configured to evaluate the user input captured from the same user and/or from different users from a plurality of challenges.
The system may capture the user performance during the same challenge or over the course of different challenges. This allows a pattern of performance to be observed. The system may also capture the user performance of different users, such as from a user cohort or target patient group. This allows a comparison to be made between different users. The different users may be from different cohorts of participants, such as healthy users, healthy users from different age groups, healthy users under the influence of distractions such as music, users with different known conditions that may affect performance, and users from different stages of known conditions, such as mild, early, or progressive dementia. Data from different cohorts may be used to establish baseline levels for other measurements.
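Comparing an individual measurement against a cohort baseline, as described above, might be done with a simple standard score (a sketch; the choice of z-score is an assumption, not specified in the description):

```python
import statistics

def z_score(user_value: float, cohort_values: list[float]) -> float:
    """Standardise a user's measurement (e.g. duration-to-task-completion)
    against a cohort baseline; large |z| flags atypical performance."""
    mean = statistics.mean(cohort_values)
    spread = statistics.stdev(cohort_values)  # sample standard deviation
    return (user_value - mean) / spread

print(z_score(14.0, [10.0, 12.0, 14.0]))  # → 1.0
```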
In accordance with a second aspect of the invention, there is provided a method as defined in claim 21, the method comprising using a display having a display size to display an input area having an input area size; displaying in the input area one or more input fields capable of receiving a manual user input; presenting one or more challenges to the user, the challenges solvable by a manual user input; receiving a manual user input during user performance of the one or more challenges; and establishing a correlation between the manual user input and the one or more challenges. The method further comprises setting the input area size to a predetermined physical size if the display size differs from the predetermined physical size.
Embodiments of the second aspect may carry out steps to implement a configuration of any one or more of the embodiments of the first aspect.
In some embodiments, the method comprises a step of altering the input area size to maintain the input area size constant during the user performance.
In some embodiments, the method comprises a step of receiving as a configuration input the display size and setting the predetermined physical size in proportion to the configuration input.
In some embodiments, the method comprises a step of determining the configuration input from system data of a device providing the display.
In some embodiments, the method comprises a step of recording the input area size set during the manual user input.
In some embodiments, the method comprises a step of altering the input area size without altering an aspect ratio of the input area size by more than 5%, 2%, 1%, or without altering the aspect ratio.
In some embodiments, the method comprises a step of displaying a frame surrounding the input area, the frame being visually distinguishable from the input area.
In some embodiments, the method comprises a step of padding one or more sides of the frame to alter an aspect ratio of the input area.
In some embodiments, the method comprises a step of deactivating, or causing to appear disabled, an area of the display outside the input area.
In some embodiments, the method comprises a step of indicating in the input area a path for a user to follow as part of the one or more challenges.
In some embodiments, the method comprises a step of presenting instructions for a user to move an object to a target area indicated in the input area.
In some embodiments, the method comprises a step of presenting instructions to move the object to one or more of a plurality of target areas displayed in the input area.
In some embodiments, the method comprises providing a touch display as the display.
In some embodiments, the method comprises a step of measuring a user input profile of the user input on the input area.
In some embodiments, the method comprises a step of measuring a duration of the user input on the input area.
In some embodiments, the method comprises a step of measuring simultaneously a number of discrete user input events on the input area.
In some embodiments, the method comprises a step of measuring a cessation of a user input in the input area.
In some embodiments, the method comprises a step of measuring a user input profile and cessation of a user input during user performance on an input field within the input area.
In some embodiments, the method comprises a step of generating an assessment output based on the correlation between the manual user input and the one or more challenges.
In some embodiments, the method comprises, when generating the assessment output, taking into account one or more of a user input profile of the user input, a duration of the user input, a number of successive user inputs, a number of simultaneous user inputs, a cessation of a user input, and/or linearity of a user input.
One or more steps of the method may be implemented in the form of software instructions. The software may be incorporated in the system of the first aspect. The system may comprise a processor and software instructions implemented by the processor.
Description of the Figures
Exemplary embodiments of the invention will now be described with reference to the Figures, in which:
Figure 1 shows a schematic top view of a display device;
Figure 2 shows a schematic top view of another display device;
Figure 3 shows a schematic top view of the Figure 2 device in use;
Figure 4 shows a schematic top view of an input area; and
Figure 5 shows steps of an exemplary method.
Description
Figure 1 shows a system 1 comprising a touch screen display 2 surrounded by a bezel 3. The touch screen display 2 comprises a display area 4 in the form of a pixel array within the bezel 3. As shown in Figure 1, the display area 4 is identical to the touch screen display 2. The system 1 is configured to present via the touch screen display 2 an input area 5 having an input area size 6, indicated in Figure 1 as a dashed line, for receiving a touch input by a user. The touch screen display 2 also shows an inactive area 7 outside the input area 5 that is deactivated or controlled to appear inactive. For instance, the inactive area 7 may not respond to a touch input by a user. Alternatively, the inactive area 7 may display an application title.
The system 1 comprises a system setting storing properties of the display, such as size, resolution, and others. The system 1 further comprises, as a parameter, a predetermined physical size for the input area 5. The input area size 6 of the input area 5 is set according to the predetermined physical size and may differ from the display area 4.
Figure 2 shows a second system 11, comprising a touch screen display 12 comprising a bezel 13 and a display area 14 that is larger than, and has a different aspect ratio from, the display area 4. The touch screen display 12 provides an input area 15 having an input area size 16 for receiving a touch input by a user. The system 11 is provided with a parameter of a predetermined physical size for the input area 15. The predetermined physical size is the same as that used for system 1. The input area size 16 is set according to the predetermined physical size. The second system 11 also comprises an inactive area 17 outside the input area 15 that is deactivated or controlled to appear inactive. For instance, the inactive area 17 may not respond to a touch input by a user. Alternatively, the inactive area 17 may be responsive to a touch input by a user but ignore inputs received in the inactive area during performance of a user challenge.
The display area 4 of system 1 and the display area 14 of system 11 differ in size and aspect ratio, and may differ in pixel density. However, the input area size 6 and the input area size 16 are the same. For instance, each input area size 6 and 16 may have physical dimensions of 6 x 10 centimetres.
Figure 3 shows the second system 11 used to display a user challenge in the form of an interactive task on the touch screen display 12. The user challenge can be solved by manual user input. The input area size 16 is dimensioned to specific physical parameters, and so the input area size 16 occupies only a part of the touch screen display 12.
The user challenge comprises two input fields: a first input field 22 and a target input field 26, both located within the input area 15. To provide an illustrative example, the user challenge may involve asking a user to drag-and-drop an icon from the first input field 22 to the target input field 26. Alternatively or additionally, the user challenge may involve asking a user to drag an icon from the first input field 22 along a proposed path 24 to the target input field 26. The proposed path 24 may or may not be displayed to the user. The second system 11 may be programmed to measure several performance indicators during performance of the challenge by a user, for instance overall time to complete the task, reaction time from presentation of the challenge to first touch contact of a user with the first input field 22, deviation of the user drag-and-drop path from the proposed path 24, pressure profile during the performance of the task, and others.
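Two of the performance indicators mentioned above (overall completion time and reaction time) can be sketched as follows. This is an illustration only; the sample format of timestamped touch points with pressure is an assumption, not taken from the disclosure:

```python
# Hypothetical sketch of deriving completion time and reaction time from a
# list of recorded touch samples; each sample is assumed to be a tuple of
# (timestamp_ms, x, y, pressure).

def completion_time(samples):
    """Overall time from first to last touch sample of the drag."""
    return samples[-1][0] - samples[0][0]

def reaction_time(challenge_shown_at, samples):
    """Time from presentation of the challenge to first touch contact."""
    return samples[0][0] - challenge_shown_at

samples = [(1200, 10, 10, 0.5), (1500, 40, 40, 0.6), (2100, 90, 90, 0.4)]
print(completion_time(samples))        # 900 (milliseconds)
print(reaction_time(1000, samples))    # 200 (milliseconds)
```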
As one example of a deviation from a proposed path (whether or not the proposed path 24 is displayed so as to be visible to the user), the user may drag-and-drop the icon from the first input field 22 along the dotted path 28a, which follows generally the direction of the proposed path 24. However, the dotted path 28a meanders in a zig-zag line, thereby showing some degree of deviation from a straight direct line. The dotted path 28a thus exhibits a lower degree of linearity than a direct straight line. The deviation may be indicative of an underlying condition such as motoric difficulties.
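One simple way to quantify such a deviation, offered here purely as an illustrative sketch (not a method stated in the disclosure), is the largest perpendicular distance of any recorded point from the straight line joining the path's endpoints:

```python
import math

# Hypothetical linearity measure: the maximum perpendicular distance of a
# recorded drag path from the direct line between its first and last points.
# A zig-zag path like 28a yields a larger value than a direct path.

def max_deviation(path):
    (x0, y0), (x1, y1) = path[0], path[-1]
    length = math.hypot(x1 - x0, y1 - y0)

    def dist(point):
        x, y = point
        # cross-product formula for point-to-line distance
        return abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length

    return max(dist(p) for p in path)

zigzag = [(0, 0), (1, 2), (2, -2), (3, 1), (4, 0)]
print(max_deviation(zigzag))   # 2.0
```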
As another example, the user may drag-and-drop the icon from the first input field 22 along the dotted path 28b, which is a curved line deviating from the shortest direct path 24. A bias, e.g. a repeated deviation in a particular direction (repeated preference of the left screen side, or repeated preference of the right screen side), may be indicative of a preferential use of a brain hemisphere.
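A directional bias such as the curved path 28b can likewise be sketched numerically. The following is an assumption-laden illustration (the sign convention and labels are invented; in screen coordinates with the y axis pointing down, the left/right labelling flips):

```python
# Hypothetical bias detector: the sign of the mean perpendicular offset of
# the path from the direct line indicates which side the path tends toward.

def side_bias(path):
    (x0, y0), (x1, y1) = path[0], path[-1]
    # cross product of the direct-line vector with each point's offset;
    # positive values lie on one side of the line, negative on the other
    offsets = [(x1 - x0) * (y - y0) - (y1 - y0) * (x - x0) for x, y in path]
    mean = sum(offsets) / len(offsets)
    return "left" if mean > 0 else "right" if mean < 0 else "none"

curved = [(0, 0), (1, 1), (2, 1.5), (3, 1), (4, 0)]   # bows to one side
print(side_bias(curved))   # "left" under this sign convention
```

A repeated result on the same side across challenges with different starting points would correspond to the repeated preference described above.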
The user challenge presented in Figure 3 is exemplary. Other challenges that may be more or less complex may be presented. For instance, a user challenge may comprise matching corresponding or identical icons, and/or completing one or more exercises within a predetermined period of time, and/or repeating one or more exercises within a predetermined period of time.
During the challenge, the second system 11 utilises the input area 15 as if it were the entire available screen area. For instance, if a user moves an object such as the first input field 22 towards the boundary of the input area 15, the object behaves as if it had reached the screen border: for instance, the object may bounce back, and/or the system 11 may provide haptic feedback such as a vibration signal.
As shown in Figures 2 and 3, the second system 11 is configured not to use the inactive area 17 for the display of the input area 15. By setting the input area size 16 to a predetermined physical size, the input area 15 can be maintained at a constant, or practically constant, size across different devices.
The input area 15 is illustrated approximately centred on the second system 11. The input area 15 may be located in a different position relative to the screen, for instance at an edge (top, bottom, or side) or partway between the edges of the touch screen display 12, for instance centred at about one third of the screen height.
Figure 4 shows another example of an input area 15a having an input area size 16a. Within the input area 15a, several (here: ten) input fields 22a-22j and 26a are displayed. The input fields 22a, 22b, 22c, 22d, 22e, 22f, 22g, 22h, 22i, 22j may constitute start fields, each of which is configured to display one of a plurality of symbols. The input field 26a constitutes a target field. A user may be presented with a task of moving, via a drag-and-drop action, a symbol from one of the start input fields 22a-22j towards the target input field 26a. For instance, only one of the ten start input fields may display a symbol matching a corresponding symbol in the target field.
The dotted lines 29f, 29j and 29i show examples of different user input patterns that may be observed. The first dotted line 29f shows a movement from the start input field 22f to the target input field 26a. The first dotted line 29f is curved yet close to a direct line. The first dotted line 29f may be consistent with healthy behaviour. The second dotted line 29j extends from the start input field 22j to the target input field 26a. The second dotted line 29j extends from one side of the input area 15a (here, the right side) over the centre line into the opposite (here: left) side of the input area 15a before reaching the target input field 26a. A repeated pattern of a user preferring a path via one side of the input area, particularly when this is observed from different starting points, may be indicative of a bias, which may be indicative of an underlying condition.
The third dotted line 29i shows a third example of a line originating from the start input field 22i and extending partway towards the target input field 26a. The third dotted line 29i extends in a straight line but fails to reach, or fully connect to, the target input field 26a. Such behaviour may be indicative of motoric difficulties.
The examples of the dotted lines 29f, 29j and 29i discuss the path pattern of a user input. Other properties, such as the drag-and-drop speed, user input profile, response time from presentation of the start field to first touch, time to completion, etc., may be recorded in addition to the path pattern.
Although no icons are displayed in the start input fields 22a to 22j, a challenge may include different icons or colour fields. For instance, each input field may present a different colour (e.g., red, orange, blue, etc), and the target field may present an object in one of the colours (e.g., an orange cup), and the user may be asked to match the colour. In the given example, an expected user input might be to drag an orange start field towards the orange cup in the target field. The input fields may present other content, for instance photographs of different people of which only one is known to the user. It has been observed that the presentation of known images, e.g. a family member, may affect the performance of a user. For instance, matching a colour to an object (e.g. an orange colour to an orange cup) may not trigger an emotional response in a user, whereas matching a photograph of a person to a familiar event may trigger an emotional response. The emotional response may be subtle, e.g. result in a slower response time compared to a non-emotional response, but may be observable in repeated performances of a task. Some patients may recognise an older image of themselves but not a more recent image of themselves. As such, the input fields may display photographs of the same person, e.g. the user, each at a different age.
Figure 5 shows an exemplary sequence of steps of a method 30 for measuring cognitive function, or cognitive behaviour, in a user. The method 30 may involve providing a display having a display size. In step 32, the display is used to present a user input area within the display. In step 34, the display properties are obtained. The display properties may be obtained from system settings, and/or from a user input. As one example, a user may enter the physical screen size. As another example, the system may have access to a database of known display devices allowing the system to look up a display size for a specific display device. In step 36, the user input area is set to a predetermined physical size. Step 36 may involve determining the physical dimensions (in other words, the physical size) of the user input area. In optional step 38, the physical dimensions of the user input area are adjusted. The optional step 38 may be carried out recurrently to ensure the physical dimensions are maintained.
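Step 34's two sources of display properties (a device database and a user-entered fallback) can be sketched as follows. All device identifiers and dimensions here are invented for illustration and do not come from the disclosure:

```python
# Hypothetical illustration of obtaining display properties either from a
# database of known display devices or from a user-entered screen size.

KNOWN_DEVICES = {
    "tablet-a": {"width_cm": 24.0, "height_cm": 16.0, "dpi": 160},
    "phone-b":  {"width_cm": 6.5,  "height_cm": 14.0, "dpi": 326},
}

def display_properties(device_id, user_entry=None):
    if device_id in KNOWN_DEVICES:
        return KNOWN_DEVICES[device_id]     # look-up, as in step 34
    if user_entry is not None:
        return user_entry                   # fall back to user input
    raise LookupError("display properties unavailable")

print(display_properties("phone-b")["dpi"])   # 326
```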
In step 40, the display is used to present one or more input fields. The one or more input fields are displayed within the user input area. In step 42, the display is used to present one or more tasks or challenges solvable by a manual user input. In step 44, a manual user input is recorded during user performance of a challenge. The tasks or challenges are displayed within the user input area and manual user input is recorded in the user input area only. For instance, if the user input area as set in step 36 is smaller than a full-screen display, user inputs outside the user input area are not recorded during performance of the task. Several properties of the user input may be recorded during performance of the task in step 44. In step 46, a correlation is established between the user input and the challenge.
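The restriction in step 44, that inputs landing outside the user input area are not recorded, reduces to a bounds check per touch point. The following is an illustrative sketch with an assumed rectangle representation (left, top, width, height in pixels):

```python
# Hypothetical sketch of recording manual user input only within the user
# input area, as described for step 44. The rectangle format is assumed.

def in_input_area(x, y, area):
    left, top, width, height = area
    return left <= x <= left + width and top <= y <= top + height

area = (100, 200, 378, 630)   # example pixel rect of the input area
touches = [(120, 250), (50, 50), (400, 700), (600, 900)]
recorded = [t for t in touches if in_input_area(*t, area)]
print(recorded)   # [(120, 250), (400, 700)]
```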
In optional step 48, a correlation is established between a user input and other results.
The other results may include a performance of the same user at a different time, for instance user performance of the same task undertaken at an early stage of disease, or at an early stage of an intervention, to monitor progress. The other results may include a performance of the same user under different performance conditions, for instance carrying out the tests under stress, with loud background music, or others.
The other results may include the performance of different users, in particular cohorts of users. The cohorts of users may be users from different age groups, and/or cohorts of users presenting with different types of cognitive conditions and/or different stages of cognitive decline. By creating a correlation of a user input with data from different results, an inference can be made that a user input corresponds to the responses of a particular cohort, for instance a cohort of 50-60 year old individuals with mild cognitive impairment. The correlation is facilitated by the invention which allows the challenges to be presented on different device types with different screen sizes.
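As a purely illustrative sketch of such a cohort correlation (every number and cohort label below is invented), a user's result can be compared to cohort statistics, for instance by a z-score distance to each cohort's distribution:

```python
import statistics

# Hypothetical cohort comparison: which cohort's completion-time
# distribution is the user's result closest to, measured in standard
# deviations from the cohort mean? All data are invented for illustration.

cohorts = {
    "healthy 50-60": [1.8, 2.0, 2.1, 1.9, 2.2],   # seconds
    "MCI 50-60":     [3.0, 3.4, 2.9, 3.2, 3.1],
}

def closest_cohort(user_time):
    def z(sample):
        mu = statistics.mean(sample)
        sd = statistics.stdev(sample)
        return abs(user_time - mu) / sd
    return min(cohorts, key=lambda name: z(cohorts[name]))

print(closest_cohort(3.05))   # "MCI 50-60"
```

Such a comparison is only meaningful when results are commensurable across devices, which is what the fixed physical input-area size provides.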
The system may allow several parameters to be recorded, such as time taken to complete a task, reaction time between presentation of a task and a first user input, and others.
The system may be linked to an evaluation module comprising a database. The evaluation module and/or database may be located on a server or distributed computing solution such as a cloud. User performance of a task may be compared to other performances by the same user and/or to the performance of other users.
The trail path captured during the drag and drop, and/or the user input profile (or pressure profile) of the trail, may be indicative of motor skills, dexterity, activation of left or right hemisphere of the brain, and other indicators of cognitive behaviour. A pattern, such as whether the drag-and-drop trail is smooth and steady, or meandering relative to a direct path, may be indicative of the cognitive or emotional status of a user. The emotional status of a user may have an effect on a test result.
As such, the invention may be used directly in the assessment of cognitive behaviour, dexterity, and emotional status, and also as a pre-test calibration tool for subsequent tests.
By standardising the input field to a predetermined physical size, e.g. 10cm x 15cm, it is also believed that the sensitivity and specificity of results are improved, thereby opening up the possibility of better classifying and/or better distinguishing different disease types and different disease stages, which may allow the present system to be used as a diagnostic tool. Furthermore, standardising the input field is believed to facilitate the comparison of results. The comparison may involve, for instance, a series of results obtained from multiple tests carried out by a user over a prolonged period of time, during which period of time the user may have used different devices. The comparison may involve data obtained from different cohorts of users and/or patients with different types of cognitive conditions, including healthy, healthy and stressed, healthy and excited, as well as different stages of disease.

Claims (25)

CLAIMS:

1. A system for measuring cognitive behaviour or dexterity of a user, the system comprising: a task module presenting one or more challenges to the user, the challenges solvable by a manual user input; a capture module configured to receive a manual user input during user performance of the one or more challenges; and an evaluation module configured to establish a correlation between the manual user input and the one or more challenges; wherein the capture module comprises a display having a display size, wherein the system is configured to use the display to present an input area having an input area size, the input area configurable to display one or more input fields capable of receiving a manual user input; and wherein the system further comprises a calibration module comprising a configuration allowing it to set the input area size to a predetermined physical size other than the display size.

2. The system according to claim 1, comprising a configuration for altering the input area size to maintain the input area size constant during the user performance.

3. The system according to claim 1 or 2, configured to receive as a configuration input the display size and to set the predetermined physical size in proportion to the configuration input, wherein, optionally, the configuration input is determined from system data of a device providing the display.

4. The system according to any one of the preceding claims, comprising a configuration to record the input area size set during the manual user input.

5. The system according to any one of the preceding claims, wherein the calibration module is configured to alter the input area size without altering an aspect ratio of the input area size by more than 5%, 2%, 1%, or without altering the aspect ratio.

6. The system according to any one of the preceding claims, configured to display a frame surrounding the input area, the frame being visually distinguishable from the input area.

7. The system according to claim 6, wherein one or more sides of the frame are padded to alter an aspect ratio of the input area.

8. The system according to any one of the preceding claims, configured to deactivate, or to let appear disabled, an area of the display outside the input area.

9. The system according to any one of the preceding claims, wherein the one or more challenges comprise instructions for a user to follow a path indicated in the input area.

10. The system according to any one of the preceding claims, wherein the one or more challenges comprise instructions for a user to move an object to a target area indicated in the input area.

11. The system according to claim 10, wherein the one or more challenges comprise instructions to move the object to one or more of a plurality of target areas displayed in the input area.

12. The system according to any one of the preceding claims, wherein the display is a touch display.

13. The system according to any one of the preceding claims, configured to measure a user input profile of the user input on the input area.

14. The system according to any one of the preceding claims, configured to measure a duration of the user input on the input area.

15. The system according to any one of the preceding claims, configured to measure simultaneously a number of discrete user input events on the input area.

16. The system according to any one of the preceding claims, configured to measure a cessation of a user input in the input area.

17. The system according to any one of the preceding claims, configured to measure a user input profile and cessation of a user input during user performance on an input field within the input area.

18. The system according to any one of the preceding claims, configured to generate an assessment output based on the correlation between the manual user input and the one or more challenges.

19. The system according to claim 18, wherein the assessment output takes into account one or more of a user input profile of the user input, a duration of the user input, a number of successive user inputs, a number of simultaneous user inputs, a cessation of a user input, and/or linearity of a user input.

20. The system according to any one of the preceding claims, configured to evaluate the user input captured from the same user and/or from different users from a plurality of challenges.

21. A method for measuring cognitive behaviour or dexterity of a user, the method comprising: using a display having a display size to display an input area having an input area size; displaying in the input area one or more input fields capable of receiving a manual user input; presenting one or more challenges to the user, the challenges solvable by a manual user input; receiving a manual user input during user performance of the one or more challenges; and establishing a correlation between the manual user input and the one or more challenges; wherein the method further comprises setting the input area size to a predetermined physical size if the display size differs from the predetermined physical size.

22. The method according to claim 21, comprising a step of deactivating or causing to appear disabled an area of the display outside the input area.

23. The method according to claim 21 or 22, comprising a step of presenting instructions for a user to follow a path indicated in the input area.

24. The method according to any one of claims 21 to 23, comprising a step of presenting instructions for a user to move an object to a target area indicated in the input area.

25. The method according to any one of claims 21 to 24, comprising a step of measuring a user input profile of the user input on the input area.
GB2015542.0A 2020-09-30 2020-09-30 Evaluation method and system Withdrawn GB2599417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2015542.0A GB2599417A (en) 2020-09-30 2020-09-30 Evaluation method and system


Publications (2)

Publication Number Publication Date
GB202015542D0 GB202015542D0 (en) 2020-11-11
GB2599417A true GB2599417A (en) 2022-04-06



Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120330182A1 (en) * 2009-11-19 2012-12-27 The Cleveland Clinic Foundation System and method for motor and cognitive analysis
US20140249447A1 (en) * 2013-03-04 2014-09-04 Anne Bibiana Sereno Touch sensitive system and method for cognitive and behavioral testing and evaluation




Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)