WO2019145907A1 - Method aimed at patients with motor disabilities for selecting a command by means of a graphic interface, and corresponding system and computer program product - Google Patents

Method aimed at patients with motor disabilities for selecting a command by means of a graphic interface, and corresponding system and computer program product

Info

Publication number
WO2019145907A1
WO2019145907A1 (PCT/IB2019/050631)
Authority
WO
WIPO (PCT)
Prior art keywords
cursor
command
function
given number
sequence
Prior art date
Application number
PCT/IB2019/050631
Other languages
French (fr)
Inventor
Febo CINCOTTI
Dario Giuseppe FERRIERO
Alessandro GIUSEPPI
Antonio PIETRABISSA
Lorenzo RICCIARDI CELSI
Cecilia POLI
Original Assignee
Universita' Degli Studi Di Roma "La Sapienza"
Istituto Superiore Di Sanita'
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universita' Degli Studi Di Roma "La Sapienza", Istituto Superiore Di Sanita' filed Critical Universita' Degli Studi Di Roma "La Sapienza"
Publication of WO2019145907A1 publication Critical patent/WO2019145907A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04801Cursor retrieval aid, i.e. visual aspect modification, blinking, colour changes, enlargement or other visual cues, for helping user do find the cursor in graphical user interfaces

Definitions

  • the present description relates in general to solutions for selecting a command by means of a graphic interface.
  • Various embodiments of the present description have been developed for assisting a patient during movement of a cursor on the graphic interface.
  • HCI human-computer interfaces
  • the HCI proposed in the above document uses as input device an eye-tracker, i.e., a device for oculometry (namely, ocular monitoring/eye-tracking).
  • This device is used for controlling displacement of the cursor of the mouse on a computer.
  • the displacements of the eye, above all saccades, may generate sporadic movements of the cursor.
  • Even though such displacements of the eye are frequently imperceptible by a human eye, they may have an adverse effect on the accuracy of an HCI that uses the position of the eye for interacting with a computer.
  • the document cited proposes an HCI capable of adapting to each user, also compensating for the noise, which may vary in time.
  • the document proposes an artificial neural network (ANN), such as a network of a multilayer perceptron type, which, appropriately configured and trained, minimises flutter of the cursor.
  • ANN artificial neural network
  • the user carries out a brief training session, and the neural network learns the relation between the raw co-ordinates of the eye (as supplied by the eye-tracker) and the required position of the cursor.
  • a small graphic interface is used, which hence enables generation of different user profiles that have respective configurations for the neural network.
  • a head-tracking device is used as input device, i.e., a device for monitoring the position and/or the displacement of a user’s head.
  • typically, such a head-tracking device comprises a triaxial accelerometer and/or gyroscope and appropriate fixing means for fixing the sensors to the user’s head.
  • the object of the present description is to provide new solutions for moving a cursor on a monitor on the basis of input data, for example provided by a subject by means of an eye-tracker or a head-tracker.
  • the foregoing object is achieved through a method having the distinctive elements set forth specifically in the ensuing claims.
  • the embodiments moreover regard a corresponding system, as well as a corresponding computer program product that may be loaded into the memory of at least one computer and comprises portions of software code for executing the steps of the method when the product is run on a computer.
  • reference to such a computer program product is intended as being equivalent to reference to a computer-readable medium, which contains instructions for controlling a processing system in order to co-ordinate execution of the method.
  • Reference to “at least one computer” is evidently intended to highlight the possibility of the present disclosure being implemented in a distributed/modular way.
  • a control unit provides a screenful to be represented on a display, where the screenful comprises a cursor and a plurality of areas.
  • the screenful comprises a cursor and a plurality of areas.
  • associated to each area is a respective command, and a command is selected by displacing the cursor into the area associated to the respective command.
  • the processing unit stores a plurality of non-assisted trajectories during a training step. For this purpose, the processing unit selects a target command and indicates the target command to a user. Next, by means of one or more input devices, the user displaces the cursor in the direction of the target command. Consequently, the processing unit receives from the input device (raw) data that identify a requested position of the cursor, and the processing unit moves the cursor as a function of said data.
  • the input device may supply (raw) data that identify a requested absolute position or a requested displacement of the cursor, e.g. in both a vertical direction and a horizontal direction (in the coordinate system of the display/screenful). Then, by storing the data, the processing unit records one or more non-assisted trajectories for each target command.
  • the processing unit processes the non-assisted trajectories for training a classifier configured for estimating a target command as a function of data that identify a sequence of positions.
  • the classifier is an artificial neural network, preferably a network of a feed-forward type.
  • the processing unit may also extract from each non-assisted trajectory one or more features, such as the initial position of the cursor, the final position of the cursor, the average speed of movement towards each command, and the standard deviation of the speed of movement towards each command. For instance, these features may be supplied as input to the artificial neural network.
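As a purely illustrative aid, the feature extraction described above could be sketched as follows; the function name, the array-based representation of a trajectory, and the use of NumPy are assumptions of this sketch and are not prescribed by the disclosure.

```python
import numpy as np

def extract_features(trajectory, command_positions):
    """Illustrative feature extraction from one non-assisted trajectory.

    trajectory        : array of shape (n, 2) with the requested positions (X, Y)
    command_positions : array of shape (k, 2) with the centre of each command F
    Returns a 1-D vector: initial position, final position, mean speed of
    approach towards each command, and standard deviation of that speed.
    """
    trajectory = np.asarray(trajectory, dtype=float)
    commands = np.asarray(command_positions, dtype=float)

    initial = trajectory[0]
    final = trajectory[-1]

    # Distance of the cursor from each command at every sampling instant.
    dists = np.linalg.norm(trajectory[:, None, :] - commands[None, :, :], axis=2)

    # Speed of approach towards each command: decrease of the distance between
    # consecutive samples (positive when the cursor gets closer to the command).
    approach_speed = -np.diff(dists, axis=0)

    mean_speed = approach_speed.mean(axis=0)   # one value per command
    std_speed = approach_speed.std(axis=0)     # one value per command

    return np.concatenate([initial, final, mean_speed, std_speed])
```

The resulting vector is the kind of input that would be supplied to the classifier discussed further on.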
  • the processing unit may hence assist the user during displacement of the cursor.
  • the processing unit again receives from the input device (raw) data that identify a requested position of the cursor. However, in this case, the processing unit estimates, by means of the classifier, a target command. Knowing the estimate of the target command, the processing unit then estimates a sequence of future values for the movement of the cursor (with respect to the current position of the cursor) and moves the cursor sequentially as a function of these future values.
  • the estimation procedure comprises calculation of a cost function determined as a function of a given number of future values for the movement of the cursor and the estimated target function.
  • the processing unit selects the sequence of future values that minimises the cost function.
  • the cost function may take into account also a given number of past values of the requested position.
  • the cursor is identified with absolute-position data on the display and orientation data, which indicate, for example, the angle of rotation of the cursor with respect to the horizontal axis on the display (i.e. the horizontal axis of the screenful).
  • the processing unit hence calculates, for each future value of the movement of the cursor, a respective future value of the position of the cursor.
  • the cost function may comprise a first term that takes into account the distance between each future value of the position of the cursor and the position of the target command.
  • the processing unit may calculate, for each future value of the movement of the cursor, a respective future value of the orientation of the cursor.
  • the cost function may comprise a second term that takes into account the future values of the orientation of the cursor with respect to the target command.
  • the processing unit moreover estimates, for example by means of interpolation/extrapolation of a given number of past values of the requested position, for each future value of the movement of the cursor a respective future value of the requested position.
  • the cost function may comprise a third term that takes into account the distance between each future value of the position of the cursor and a respective future value of the estimated requested position.
  • MPC Model Predictive Control
  • MPC may also use one or more constraints for each future value of the movement of the cursor, such as constant maximum limits and/or maximum limits determined dynamically for the variations of the speed of displacement and/or the speed of rotation of the cursor, constant maximum limits and/or maximum limits determined dynamically for the absolute value of the speed of displacement and/or the speed of rotation of the cursor, and/or limits for the position data of the cursor.
  • FIG. 1 is a schematic representation of a system that enables a user to control one or more functions by means of displacement of a cursor represented on a display;
  • FIG. 2 shows a block diagram of the connection between the various components of the system of Figure 1 ;
  • Figures 3 and 4 show two possible screenfuls that may be represented on the display of the system of Figure 1;
  • Figures 5 and 6 show two input devices that may be used for displacing the cursor in the system of Figure 1 ;
  • Figures 7 and 8 show possible displacements of the cursor on the display of the system of Figure 1 ;
  • FIG. 9 shows an embodiment of a system capable of learning the characteristics of the data supplied by the input device
  • FIG. 11 shows an embodiment of a method for displacing the cursor on the display of the system of Figure 1, taking into account also the learning data, in such a way as to reduce the noise of the displacement of the cursor on the display of the system of Figure 1 ;
  • FIG. 12 shows a block diagram that summarises the operations executed by the system of Figure 1.
  • reference to “an embodiment” or “one embodiment” in the framework of the present description is intended to indicate that a particular configuration, structure, or characteristic described in relation to the embodiment is comprised in at least one embodiment.
  • phrases such as “in an embodiment” or “in one embodiment” that may be present in various points of this description do not necessarily refer to one and the same embodiment.
  • particular conformations, structures, or characteristics may be combined in any adequate way in one or more embodiments.
  • various embodiments of the present description regard solutions for providing a human-computer interface, for example for subjects with motor disabilities, such as patients with neurodegenerative diseases (e.g., multiple sclerosis and ALS) or with lesions (caused by traumas, ischaemia, or cancer) to the first or second motor neuron, who are affected by paralysis or paresis.
  • Figures 1 and 2 show the general architecture of a system according to the present description.
  • the system comprises a processing unit PU, a display D, and at least one input device I.
  • the processing unit PU presents on the display D a screenful that represents a plurality of functions/actions F.
  • Figure 3 shows a screenful that comprises eight functions F1, ..., F8.
  • associated to each function F1, ..., F8 is a respective area on the display D, where the areas are arranged in different positions on the display D.
  • the processing unit PU moreover represents on the display D a cursor C.
  • the processing unit PU is configured for moving the cursor C on the display D as a function of the data received from the input device or devices I until the cursor reaches a given area associated to one of the functions F1, ..., F8.
  • the processing unit PU selects an action only if the cursor C stops within the perimeter of the area associated to the action.
  • the processing unit PU may also check whether the cursor C remains in the area for a pre-set time.
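The dwell-based check mentioned above (cursor kept inside an area for a pre-set time) could, for instance, look like the following sketch; rectangular areas, the class name, and the two-second default are assumptions made only for illustration.

```python
import time

class DwellSelector:
    """Illustrative dwell-based selection: a command is returned only when the
    cursor has remained inside its area for at least dwell_time_s seconds."""

    def __init__(self, dwell_time_s=2.0):
        self.dwell_time_s = dwell_time_s
        self._current_area = None
        self._entered_at = None

    def update(self, cursor_xy, areas):
        """areas maps a command id to a rectangle (x_min, y_min, x_max, y_max)."""
        x, y = cursor_xy
        hit = None
        for command, (x0, y0, x1, y1) in areas.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = command
                break

        if hit != self._current_area:
            # The cursor entered a new area (or left all areas): restart the timer.
            self._current_area = hit
            self._entered_at = time.monotonic() if hit is not None else None
            return None

        if hit is not None and time.monotonic() - self._entered_at >= self.dwell_time_s:
            return hit  # command selected; a real system would also reset the timer here
        return None
```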
  • the area may have any shape, such as rectangular, square, circular, etc.
  • the processing unit PU may, for example, be a desktop computer, a portable computer, or an embedded board, such as a Raspberry Pi®.
  • the aforesaid processing unit PU may comprise a microprocessor and a memory that contains the software, which is run by the microprocessor.
  • such software may comprise an operating system (e.g., Microsoft Windows®, Linux®, iOS®, Android®, etc.) and a computer program that manages the device or devices I and the user interface represented on the display D.
  • the processing unit PU comprises:
  • a first interface such as a PS/2, RS-232, or USB (Universal Serial Bus) port, and/or a Bluetooth® transceiver for connection to the device or devices I; and
  • a second interface such as a VGA (Video Graphics Array), DVI (Digital Visual Interface), or HDMI (High-Definition Multimedia Interface) port, for connection to the display D.
  • VGA Video Graphics Array
  • DVI Digital Visual Interface
  • HDMI High-Definition Multimedia Interface
  • by “patient” or “user” is hence meant the user U of the system, who is typically affected by a motor dysfunction.
  • by “graphic interface” is meant the computer tool (i.e., the computer program run on the processing unit PU) whereby the patient U is able to interact with the system, in particular for displacing the cursor C in such a way as to select a function F.
  • the input device I supplies data that identify a requested absolute position or a requested displacement of the cursor C in both a vertical direction and a horizontal direction.
  • displacement is meant a displacement of the cursor C with respect to the previous position of the cursor C, which hence enables recalculation of a requested absolute position.
  • the device I may directly supply data that identify a vertical and horizontal displacement of the cursor C (for example, a number of pixels). Consequently, in the simplest case, the device I may comprise a mouse, a touchpad, a track-ball, and/or a joystick.
  • the input device I may also comprise other sensors configured for supplying at least two-dimensional data that may be used to determine a displacement of the cursor C.
  • the input device I may be configured to monitor the position/movement of a part of the body of the user U, such as his head, hand, and/or eyes. Consequently, the input device I may also comprise a head-tracker and/or an eye-tracker.
  • the input device I may comprise an inertial platform I1, which includes a (preferably triaxial) accelerometer and/or gyroscope and possibly a magnetometer configured to detect data that enable identification of the orientation and/or movement of the head of the user U.
  • the aforesaid sensors that implement a head-tracker may be fixed directly to the head of the user U, for example by means of a band, a helmet, or a headset.
  • commercial devices may also be used as head-tracker, which supply already processed data, typically inclination data (for example, pitch, roll, and yaw).
  • the processing unit may determine an angle of roll α and an angle of pitch β (in the coordinate system of the user).
  • the device I1 or the processing unit PU is configured for converting the angle of roll α of the head into a horizontal position or displacement (in the coordinate system of the display/screenful) and the angle of pitch β of the head into a vertical position or displacement (in the coordinate system of the display).
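A minimal sketch of this angle-to-position conversion is given below, assuming a simple linear mapping with saturation; the maximum usable angles and the display resolution are illustrative values, not figures taken from the disclosure.

```python
def head_angles_to_cursor(roll_deg, pitch_deg, width_px=1920, height_px=1080,
                          max_roll_deg=30.0, max_pitch_deg=20.0):
    """Map the roll/pitch angles of the head (user coordinate system) to a
    requested cursor position in the coordinate system of the display,
    whose origin is assumed to be at the centre of the screenful."""
    # Saturate the angles to the usable range of head motion.
    roll = max(-max_roll_deg, min(max_roll_deg, roll_deg))
    pitch = max(-max_pitch_deg, min(max_pitch_deg, pitch_deg))

    # Roll controls the horizontal coordinate, pitch the vertical one.
    x = (roll / max_roll_deg) * (width_px / 2)
    y = (pitch / max_pitch_deg) * (height_px / 2)
    return x, y
```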
  • the input device I may comprise one or more video cameras I2.
  • a video camera I2 may detect the inclination and/or position of the head of the user U.
  • the device I2 may be set in a fixed position, for example fixed to the display D.
  • the video camera I2 may also supply data that identify the position/movement of the pupils of the user U. For instance, this solution is advantageous in the case where the patient is incapable of moving his neck.
  • the video camera I2 (which hence implements an eye-tracker) may also be fixed to the head of the user U.
  • commercial devices such as eye-trackers that supply already processed data may also be used.
  • the co-ordinates of the eye may be the Cartesian co-ordinates of the centre of the pupil (or of each pupil) with respect to given reference co-ordinates (i.e. in the coordinate system of the user), which for example correspond to the centre of the eye.
  • Figure 6 shows an example in which the device I2 supplies polar co-ordinates, i.e., an angle γ and a distance δ of the centre of the pupil (or of each pupil) with respect to the reference co-ordinates.
  • the device I2 or the processing unit is configured for converting a vertical position of the eye with respect to its central position into a vertical position of the cursor C (in the coordinate system of the display), and a horizontal position of the eye with respect to its central position into a horizontal position of the cursor C.
  • the processing unit PU, the display D, and the device or devices I, such as the video camera I2, may hence also be integrated in a single device, such as a portable computer provided with a webcam, a smartphone, or a tablet.
  • the processing unit PU hence processes the data supplied by the device or devices I (for example, the displacement/inclination of the head and/or the position of the eye or eyes) in such a way as to determine a requested horizontal position X and vertical position Y of the cursor C on the display D.
  • the processing unit PU is configured for moving the cursor C until the position (X, Y) of the cursor C reaches the position of one of the functions F on the display D.
  • the graphic interface may also support a configuration mode, where an operator may configure the functions F that are represented, for example in order to configure the number of the functions, arrangement of the functions on the display, and/or type of action to be performed and possible configuration parameters for the functions.
  • one or more of the functions F may be used for controlling operation of one or more actuators A operatively connected to the processing unit PU.
  • the processing unit PU comprises one or more interfaces for connection to the actuator or actuators A.
  • a function F1 could be used for controlling operation of a television set.
  • the actuators A may comprise a transmitter or a transceiver T/R, such as an infrared transmitter, which enables infrared signals to be sent to the television set.
  • a function F2 may be used for sending an emergency signal.
  • an actuator A may comprise transmission means EC for transmitting the emergency signal, such as:
  • an auto-dialler, for example a GSM auto-dialler, for making a call to a pre-set telephone number; and/or
  • a module, such as a mobile-radio telephone module (GSM, GPRS, 3G, LTE, etc.), for sending a message of the SMS (Short-Message Service) type or via another messaging service, such as an e-mail.
  • GSM mobile-radio telephone module
  • GPRS GPRS
  • 3G 3G
  • LTE Long Term Evolution
  • SMS Short-Message Service
  • one or more functions may be configured for driving actuators for a domotic control, such as for switching on/switching off one or more lamps L, an HVAC (Heating, ventilation, and air conditioning) system, a door-opener, a surveillance camera, etc.
  • the processing unit PU may comprise an interface for controlling, for example, switching of one or more switches.
  • each function F may even not drive an actuator A directly, but (once the corresponding position is reached), the processing unit PU may present a further selection screenful in which the user may select from among one or more subfunctions associated to the function selected in the main screenful (see Figure 3).
  • the screenful may comprise four functions F11, ..., F14, which are arranged in different positions, respectively, for selecting the next channel, for selecting the previous channel, for raising the volume, or for lowering the volume.
  • the screenful may also comprise a further function F15 to return to the previous screenful.
  • one or more functions may also be represented in each screenful, such as the function F2 for sending an emergency signal.
  • Figure 4 moreover shows that the area associated to the functions F may also have different shapes and/or dimensions, for example for facilitating sending of the emergency signal.
  • the solutions described herein may assist patients affected by paresis and capable of moving (even limitedly) their neck or eyes for issuing commands in a domotic system insofar as the patients who require interfaces of this type generally need assistance for performing autonomously everyday actions, such as switching on lights, regulating the temperature of the room, or activating a pager.
  • the system forming the subject of the present description aims at controlling in an assisted way displacement of the cursor C on the graphic interface via which the user/patient U is able to activate the functions F that he or she wishes.
  • it may make use of an inertial platform fixed to the patient’s head to obtain data regarding the inclination and/or movement of the head of the user U.
  • the functions are not dependent upon the use of this inertial platform; in fact, in the case of patients affected by total paralysis, the system may be interfaced with sensors capable of detecting eye movement in such a way as to move the cursor C on the interface with the same modalities.
  • the use of other sensors is supported, provided that the data supplied may be correlated to a requested two-dimensional absolute position (X, Y) of a cursor C on a graphic interface.
  • Figure 12 shows an embodiment of a method that controls the position of the cursor C as a function of the data supplied by the device or devices I.
  • an optional step 1002 is performed of pre-processing of the data supplied by the device or devices I.
  • a step 1004 the cursor C is moved as a function of the data supplied by the device or devices I.
  • the processing unit PU could, in step 1004, displace the cursor C directly to the requested position (X,Y), which in turn is determined as a function of the data supplied by the device or devices I.
  • these data may also comprise noise, in particular when devices of the eye-tracker or head-tracker type are used.
  • the processing unit PU is configured for displacing the cursor C, taking into account also previous values for the parameters X and Y, thus determining a trajectory that is less noisy.
  • Various embodiments do not use just one personalised method for reduction of the noise of the displacement of the cursor C, but also use a process capable of predicting the action F chosen by the user.
  • the method envisages a training step, in step 1008, and a personalisation step, in step 1010, where the processing unit PU modifies control of functionality of the system, adapting to the different needs and problems of the patient that uses it, irrespective of the pathological condition by which the patient is afflicted and of the severity thereof.
  • the system uses the data acquired for controlling the movement of the cursor C.
  • the HCI may also be able to predict/estimate automatically which action F the user/patient U wishes to select and assists him in his selection, thus requiring of the patient the least effort possible.
  • the processing unit PU determines, in a step 1006, whether the training step or else normal operation should be activated.
  • the training step may be activated when the program is first launched, periodically and/or by selecting an appropriate function within the program.
  • the processing unit PU may execute, in step 1002, one or more operations on data supplied by the input device or devices I.
  • the processing unit PU may convert the co-ordinates of the requested position from the reference system of the input device I to that of the graphic interface. For instance, typically it is sufficient to scale the data supplied on the basis of a coefficient of proportionality, converting the original data (X, Y) into the converted data (X', Y'), where:
  • L_x and L_y correspond to the maximum measurable distances on the respective axis x or y;
  • the parameters with prime sign are in the reference system of the graphic interface; and
  • the reference system of the graphic interface has its origin in its centre, and hence l_x and l_y are equal to half of the respective resolution used, expressed in pixels.
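The proportional scaling described above might take the following form; since the exact equations are not reproduced here, the mapping below is only an assumed implementation consistent with the definitions of L_x, L_y, l_x and l_y given in the text.

```python
def to_interface_coordinates(X, Y, L_x, L_y, res_x_px, res_y_px):
    """Assumed proportional conversion from the reference system of the input
    device to that of the graphic interface (origin at the centre of the display).
    L_x, L_y : maximum measurable distances of the input device on each axis
    res_x_px, res_y_px : horizontal and vertical resolution of the interface."""
    l_x = res_x_px / 2.0   # half of the horizontal resolution, in pixels
    l_y = res_y_px / 2.0   # half of the vertical resolution, in pixels
    X_prime = X * l_x / L_x
    Y_prime = Y * l_y / L_y
    return X_prime, Y_prime
```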
  • the processing unit PU may filter the co-ordinates (X, Y) or, preferably, directly the data coming from the input devices I that are used for determining the co-ordinates (X, Y).
  • the aforesaid filtering may be carried out also already in the device I.
  • excessively brusque movements of the cursor C (also during the training step) are undesired and may be caused by noise due to measurement or trembling or shaking of the patient.
  • the processing unit PU and/or the device I may use a lowpass filter, for example with a 50-Hz passband.
  • this filtering is preferably applied to the data acquired by the devices I, for example to the acceleration/inclination signals, and hence temporally precedes calculation of the position of the cursor.
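By way of example, such a low-pass filtering stage could be realised as sketched below with a standard Butterworth filter; the filter order, the use of SciPy and the zero-phase filtering are assumptions, and the sensor sampling rate must exceed twice the cut-off frequency.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_filter(signal, fs_hz, cutoff_hz=50.0, order=2):
    """Zero-phase low-pass filtering of a raw sensor signal (e.g. acceleration
    or inclination samples) before the requested cursor position is computed."""
    nyquist = fs_hz / 2.0
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    return filtfilt(b, a, np.asarray(signal, dtype=float))
```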
  • the processing unit PU does not move the cursor C directly as a function of the current requested position (X, Y) (which, as described previously, is determined as a function of the (raw or pre-processed) data supplied by one or more input devices I), but takes into account also the previous requested positions.
  • the sampling instants are selected in a uniform way with a constant sampling frequency fc.
  • the sampling frequency fc may be between 1 and 50 Hz, preferably between 5 and 20 Hz, for example 10 Hz.
  • the trajectory comprises the following sequence of requested positions: [X(t), Y(t)], [X(t-1), Y(t-1)], ..., i.e. a given number of the most recent requested positions sampled at the sampling frequency fc (for example, twelve positions).
  • the processing unit PU uses two distinct types of trajectories: non-assisted trajectories and assisted trajectories.
  • non-assisted trajectories correspond to the trajectories of the requested positions X and Y determined on the basis of the (raw or pre-processed) data sampled.
  • the processing unit PU is configured for acquiring and storing in a database DB (see also Figure 2) one or more of these trajectories for the purposes of learning and possibly personalisation of the system.
  • DB database DB
  • These trajectories represent the knowledge base of the system insofar as, being measured in the total absence of any supporting action, they contain information regarding the difficulties of the patient in controlling the cursor C. In fact, patients affected by different conditions have different difficulties in use of the interface.
  • the assisted trajectories, instead, represent the sequence of the positions followed by the cursor C in the presence of the supporting action.
  • Represented in Figures 8a and 8b are two possible assisted trajectories for the same patients as those of Figures 7a and 7b, where, in addition to the reduction of disturbance, especially for the left-hand trajectory, it is possible to perceive also the predictive aspect, in particular in the right-hand trajectory.
  • the cursor C continues to move, albeit more slowly, in the desired direction also in the time intervals during which the patient presented jerks that would have moved the cursor C in the opposite direction.
  • the graphic interface envisages a training step 1008, during which the system collects non-assisted trajectories and the respective target command F.
  • the trajectory-command pairs are entered in the database DB, i.e., the knowledge base of the system.
  • the amount of data to be collected for completing personalisation may be modifiable.
  • the program may automatically launch the training step, in step 1006, at initial start-up of the program and until the learning operation is completed.
  • the program may envisage a specific option for configuration of the program.
  • the processing unit PU is configured for communicating (visually on the display D and/or acoustically by driving a speaker) to the user U to select given commands F by setting the cursor C in different initial positions (e.g., the cursor at the centre of the interface, the cursor far from or close to the target command, close to another command, etc.) so as to guarantee enough information in the knowledge base of the system.
  • the processing unit PU periodically monitors the data supplied by the device or devices I and determines the requested co-ordinates (X, Y).
  • the processing unit PU directly displaces the cursor C, during this step, to the position (X, Y) currently requested.
  • during each requested selection operation, the processing unit PU records one or more trajectories.
  • the database DB comprises a set of non-assisted trajectories (each made up of a sequence of positions requested by means of the device I) with respective command F, which may be grouped, for example, in the form of a table, within the database DB:
  • the program will record a first trajectory of twelve samples from the initial position of the learning process to an intermediate position, and a second trajectory of twelve samples from the intermediate position up to the final position.
  • a time out or a maximum limit of trajectories acquired may be envisaged for a given selected command.
  • typically the number of trajectories acquired for a selected command may be between three and fifteen.
  • the number of commands selectable on the interface may be envisaged so that the collection may be completed within just a few minutes.
  • the corresponding parameters may be modifiable, for example by the physician or by the operator who carries out installation of the system according to the requirements of the patient.
  • the data of the system collected and stored in the database DB may be updated periodically, even only partially, so as to be able to adapt the data to the evolution, whether towards improvement or towards aggravation, of the condition of the patient.
  • the processing unit proceeds to the personalisation step 1010.
  • the interface forming the subject of the present description bases its predictive capacities on identification of the command F desired/requested by the user.
  • the system is configured for identifying, on the basis of the study of the signals coming from the input sensors I, towards which command F from among the various ones available the user would wish to move the cursor C.
  • the processing unit PU uses the knowledge base collected in the previous step for training a multi-class classifier based upon machine-learning techniques.
  • the analysis of the data stored in the database DB that constitutes the knowledge base starts from a process of extraction of the characterizing information, the so-called features, contained in the data collected.
  • the processing unit PU is configured to determine one or more of the following items of information/features for each non-assisted trajectory: initial position of the cursor; final position of the cursor; average speed of movement towards each command; and standard deviation of the speed of movement towards each command.
  • the features extracted are used for training a classifier capable of estimating the target command desired by the user as a function of a trajectory supplied to the classifier.
  • an artificial neural network is used for this purpose.
  • the features are supplied at input to the artificial neural network, such as a network of a feed-forward type, for classification.
  • the neural network comprises a first level (input layer), which comprises a number of neurons equal to the number of features, and a last level (output layer), which comprises a number of neurons equal to the number of possible selectable commands F.
  • a number of classifiers may also be envisaged (one for each screenful) since the number of the functions and their positions may vary.
  • these two (input and output) levels are connected to one or more hidden layers. For instance, in various embodiments, two hidden layers are used.
  • the output of the neural network is hence an estimate of the target command F that the user wishes to activate.
  • each neuron of the output layer supplies a value (typically comprised between zero and one) that represents the confidence in classification of the respective target, i.e., of the respective command/function.
  • the processing unit PU selects as target function the function with the highest output value.
  • the neural network is then trained in the classification using as input the features extracted from the non-assisted trajectories stored in the database DB, and the respective target command as requested result.
  • the backpropagation method is used for a supervised learning of the network, providing the network with a feedback between its output and the correct requested/target command contained in the knowledge base.
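A minimal training sketch consistent with this description is shown below, using scikit-learn's MLPClassifier (a feed-forward network trained with backpropagation-based solvers). The hidden-layer sizes, the file names standing in for the knowledge base and the choice of library are assumptions; the patent does not prescribe a specific implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder files standing in for the knowledge base DB: one row of features
# per non-assisted trajectory and the index of the corresponding target command.
X_train = np.load("features.npy")          # shape (n_trajectories, n_features)
y_train = np.load("target_commands.npy")   # shape (n_trajectories,)

# Feed-forward network with two hidden layers; the output layer automatically
# has one neuron per selectable command F found in y_train.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

def estimate_target(features):
    """During assisted operation: one confidence value per command, and the
    command with the highest confidence taken as the estimated target."""
    probabilities = clf.predict_proba([features])[0]
    return int(np.argmax(probabilities)), probabilities
```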
  • other classifiers may also be used, such as the SVM (Support Vector Machine) method.
  • SVM Support Vector Machine
  • the processing unit PU may then start up, in step 1012, the assisted-operation mode.
  • the processing unit PU is now able to perceive/determine which target the patient wishes to reach, and hence moves the cursor on the graphic interface so as to facilitate the task for the user.
  • the processing unit identifies the position of the cursor C on the display D with an absolute position (x, y) and an orientation θ.
  • the cursor C is oriented in a direction along a main axis x_v that passes through the co-ordinate (x, y) and that has an angle θ with respect to the horizontal axis.
  • the main axis x_v may correspond to the axis of symmetry of the cursor C.
  • the position (x,y) may correspond to the centroid of the cursor C.
  • the state of the cursor C is thus represented at each instant t by the triad [x(t), y(t), θ(t)].
  • the processing unit PU could also:
  • the processing unit PU models the displacement of the cursor C during the assisted operation so as to simulate an ideal unicycle.
  • the processing unit moreover associates to the cursor C a linear speed v in the direction of the axis x_v and an angular velocity ω, i.e., the velocity with which the cursor C turns around the position (x, y).
  • ω angular velocity
  • the dynamics of the cursor C, in the reference system of the graphic interface, is hence defined by the following differential equations: dx/dt = v·cos(θ); dy/dt = v·sin(θ); dθ/dt = ω.
  • the processing unit PU is thus configured for determining the parameters v and ω and hence determining the assisted displacement of the cursor C on the display D.
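In discrete time, with sampling period T, the unicycle model above can be integrated for example with a forward-Euler step, as in the following sketch (the discretisation scheme is an assumption of this sketch).

```python
import math

def unicycle_step(x, y, theta, v, omega, T):
    """One forward-Euler step of the ideal-unicycle cursor model
    dx/dt = v*cos(theta), dy/dt = v*sin(theta), dtheta/dt = omega."""
    x_next = x + T * v * math.cos(theta)
    y_next = y + T * v * math.sin(theta)
    theta_next = theta + T * omega
    return x_next, y_next, theta_next
```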
  • the processing unit PU works purely on trajectories, whether assisted or not, irrespective of the device I that has generated them. As long as the device I is able to supply data that may be associated to a requested position (X, Y) of the cursor C, for example in terms of pixels, the processing unit PU may determine the assisted displacement of the cursor C, as will be described hereinafter.
  • in step 1012, the processing unit PU moves the cursor C on the display D.
  • Figure 11 shows a possible embodiment of step 1012. Basically, in the embodiment considered, the following operations are included:
  • a cursor-control operation 1022 for determining the parameters (x, y, θ) of the cursor C as a function of the displacement parameters v and ω.
  • the classification operation 1014 uses the classifier trained in steps 1008 and 1010 as a function of the data stored in the database DB.
  • the classifier receives at input data that identify the same features as those used for training of the classifier, for example:
  • these data may be extracted from the assisted trajectory comprising the positions (x,y) that the cursor follows on the display D.
  • the classifier should use a trajectory with the same number of positions as those used during the training step. For example, considering trajectories with twelve positions, the classifier determines the aforesaid features as a function of a trajectory comprising the current position [x(t), y(t)] of the cursor C and the last eleven before it: [x(t), y(t)], [x(t-1), y(t-1)], ..., [x(t-11), y(t-11)].
  • the optimisation step 1016 hence receives at input:
  • the data supplied by the device or devices I, for example a requested absolute position (X, Y) or displacement data of the cursor C, for example determined as a function of the acceleration data supplied by the device I1, in which the data may have been pre-processed in a step 1002;
  • optimisation 1016 is based upon a trajectory model 1018 and an optimiser 1020.
  • the cursor C may be modeled as a unicycle.
  • the optimiser 1020 may hence calculate the control to be imparted on the cursor in terms of v and ω in such a way as to minimise a cost function that will be described hereinafter.
  • step 1022 may determine the next parameters (x(t+1), y(t+1), θ(t+1)) of the cursor C as a function of the displacement data v(t+1) and ω(t+1) supplied by the optimiser 1020.
  • the optimiser 1020 may supply not only a single pair v(t+1) and ω(t+1), but also data for one or more subsequent instants t+2, t+3, ...
  • the control instants, i.e., the instants at which step 1020 is executed, may hence be spaced by m sampling instants, and at each control instant the optimisation process 1016 is repeated again.
  • the optimiser 1020 hence supplies m pairs v and ω, and the block 1022 is executed m times faster, thus applying the m pairs v and ω sequentially to the cursor C.
  • the classifier 1014 may hence also be executed with a frequency that corresponds to the frequency of execution of the optimiser 1020. Consequently, in various embodiments, every m (e.g., six) instants the target F is estimated, and at least the next m values are determined for v and ω. For the next m instants, in step 1022 the cursor C is then moved sequentially on the basis of the m values for v and ω, until a new execution is activated.
  • the classification 1014 may also be executed at the same frequency as that of step 1022. For instance, this may possibly be advantageous when the time of execution of the optimisation 1020 is not constant and potentially longer than the sampling time of step 1022. In fact, in this case, optimisation cannot be executed at each sampling step with the guarantee that the result will be accessible for assisting the very next displacement. Hence, the value of m should be selected for including the maximum execution time of the optimiser 1020.
  • the frequency at which updating of the cursor C is carried out in step 1022 may correspond to the sampling frequency fc of the data supplied by the device or devices I. However, in general, these frequencies may be different. This strategy provides high performance, at the same time maintaining the robustness of the system in regard to disturbance (such as quivering of the patient) or errors of the classifier since the optimiser takes into account a plurality of data supplied by the sensor before updating the movement data of the cursor C.
  • the optimiser 1020 implemented within the processing unit PU selects m pairs of values v and ω (with m ≥ 1) in such a way as to minimise a cost function.
  • the optimiser 1020 uses the following cost function J(v, ω): J(v, ω) = α1 · Σ_{i=1..m} d(x(t+i), y(t+i), F) + α2 · Σ_{i=1..m} A(θ(t+i), x(t+i), y(t+i), F), where:
  • the function d(x, y, F) supplies the distance between the respective position (x, y) of the cursor and the respective position of the target F; in particular, the values x(t+1), y(t+1), etc., are calculated according to the model of the cursor as a function of the values to be optimised, i.e., v(t+1), ω(t+1), etc.; and
  • the function A supplies a value indicating the respective alignment of the cursor C with respect to the target F, such as the difference between the angle θ and an angle φ(x, y, F) that corresponds to the angle of the straight line that extends from the position (x, y) of the cursor C to the target F; in particular, the values of the angle θ(t+1), ..., are calculated according to the model of the cursor as a function of the values to be optimised, namely, ω(t+1), ...
  • the coefficients α1 and α2 are relative weights to be applied to the respective objectives:
  • the first objective is to reduce the distance d between the position of the cursor and that of the target indicated by the classifier
  • the second term of the objective function regards the orientation of the cursor: once the target is identified, it is desirable to maintain a path that is as rectilinear as possible towards it, preventing the cursor from oscillating on account of quivering of the patient.
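As an illustration only, a numerical version of this two-term cost could be written as below: it rolls the unicycle model forward over the m candidate controls and accumulates a weighted distance term and a weighted misalignment term. The function names, the forward-Euler rollout and the angle-wrapping choice are assumptions of this sketch.

```python
import math

def cost(v_seq, w_seq, x, y, theta, target_xy, a1, a2, T=0.1):
    """Two-term cost J(v, w): weighted distance of each predicted cursor
    position from the estimated target, plus weighted misalignment of the
    cursor axis with the straight line towards the target.
    v_seq, w_seq : candidate future linear and angular velocities (length m)
    (x, y, theta): current state of the cursor; target_xy: position of the target."""
    J = 0.0
    tx, ty = target_xy
    for v, w in zip(v_seq, w_seq):
        # Roll the cursor model one step ahead (forward Euler).
        x += T * v * math.cos(theta)
        y += T * v * math.sin(theta)
        theta += T * w
        distance = math.hypot(tx - x, ty - y)
        # Angle of the straight line from the predicted cursor position to the target.
        phi = math.atan2(ty - y, tx - x)
        misalignment = abs(math.atan2(math.sin(theta - phi), math.cos(theta - phi)))
        J += a1 * distance + a2 * misalignment
    return J
```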
  • the cost function J(v, ω) may take into account also further objectives.
  • a third term may penalise the deviation of the assisted trajectory from the non-assisted one.
  • the processing unit PU may estimate further m positions (X, Y) for the cursor using this time only the data of the non-assisted trajectory; i.e., the processing unit PU estimates future values [X(t+1), Y(t+1); ...; X(t+m), Y(t+m)] of the non-assisted trajectory on the basis of the previous positions of the non-assisted trajectory.
  • the processing unit PU may estimate the sequence of future positions [X(t+1), Y(t+1); ...; X(t+m), Y(t+m)] by means of an extrapolation of a given number of the positions of the non-assisted trajectory [X(t), Y(t); X(t-1), Y(t-1); ...].
  • the processing unit PU may calculate the mean value of the speed of a plurality of past instants (for example the last six instants).
  • the next non-assisted positions (X, Y) of the cursor C may be calculated using as speed the aforesaid mean value.
  • the angle θ may be obtained as a function of one or more previous positions (X, Y), for example, using the last value, or using an interpolation.
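An illustrative implementation of this extrapolation is sketched below, propagating the mean velocity of the last few requested positions; the use of six past samples follows the example in the text, while everything else (function name, NumPy representation) is an assumption.

```python
import numpy as np

def extrapolate_requested_positions(past_XY, m, n_last=6):
    """Estimate the next m requested (non-assisted) positions (X, Y) by
    propagating the mean velocity of the last n_last requested positions.
    past_XY : array of shape (k, 2), most recent requested positions, oldest first."""
    past_XY = np.asarray(past_XY, dtype=float)
    recent = past_XY[-n_last:]
    mean_velocity = np.diff(recent, axis=0).mean(axis=0)   # pixels per sampling instant
    last = past_XY[-1]
    return np.array([last + (i + 1) * mean_velocity for i in range(m)])
```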
  • the cost function J(v, ω) may be modified by adding a third term, weighted by a coefficient α3, that penalises, for each future instant, the distance between the position (x, y) of the cursor along the assisted trajectory and the corresponding estimated position (X, Y) of the non-assisted trajectory.
  • the physician and/or the installer may select the parameters (α1, α2) or (α1, α2, α3), for example from a list of predefined profiles.
  • the parameters α1, α2
  • patients who are imprecise in the commands on account of trembling of low intensity, but constant, will draw benefit from receiving assistance more focussed on orientation of the cursor (α2);
  • patients whose involuntary movements are such as to oppose the target command will find advantage in receiving greater assistance as regards approach of the cursor to the target command (α1).
  • the processing unit may also process the data stored in the database DB for selecting automatically the coefficients (α1, α2) or (α1, α2, α3), or propose default values for the coefficients (α1, α2) or (α1, α2, α3).
  • the coefficients (α1, α2) or (α1, α2, α3) are selected in such a way that their sum is one.
  • the optimiser 1020 is configured for selecting a sequence of future values [v(t+1), ω(t+1); ...; v(t+m), ω(t+m)] in such a way as to minimise the cost function J(v, ω).
  • the optimiser 1020 uses MPC for solving the aforesaid optimisation problem.
  • the MPC method also enables use of one or more constraints.
  • a constraint (in addition or as an alternative to the first constraint) may also set a maximum variable limit for the speed v on the basis of the distance d of the cursor C from the target F and a minimum value v_min, thus allowing for higher speeds when the cursor C is distant from the target F, for example: v ≤ max(v_min, K_v · d), where
  • K_v is a pre-determined coefficient.
  • the above constraint hence imposes that the linear speed v be decreased as the cursor gets nearer to the target F.
  • the latter constraint increases the precision and facilitates selection of the target insofar as selection occurs only after the cursor has been kept over the target for a few seconds.
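Finally, a deliberately naive sketch of the constrained optimisation is given below: it samples candidate (v, ω) sequences, enforces an assumed distance-dependent speed limit v ≤ max(v_min, K_v·d) by construction, and keeps the cheapest sequence according to the cost() sketch shown earlier. A real implementation would use a proper MPC solver; all parameter values and the random-sampling strategy are illustrative assumptions.

```python
import math
import random

def plan_controls(x, y, theta, target_xy, m=6, n_candidates=500,
                  a1=0.6, a2=0.4, v_min=5.0, K_v=0.05, omega_max=1.0, T=0.1):
    """Approximately minimise the cost J(v, w) over sequences of m control pairs,
    subject to an assumed speed limit that shrinks as the cursor nears the target."""
    tx, ty = target_xy
    best_seq, best_cost = None, float("inf")

    for _ in range(n_candidates):
        # Sample one candidate sequence while rolling the unicycle model forward,
        # so that the speed limit can be evaluated at each predicted position.
        cx, cy, ct = x, y, theta
        v_seq, w_seq = [], []
        for _ in range(m):
            d = math.hypot(tx - cx, ty - cy)
            v = random.uniform(0.0, max(v_min, K_v * d))   # constraint satisfied by construction
            w = random.uniform(-omega_max, omega_max)
            v_seq.append(v)
            w_seq.append(w)
            cx += T * v * math.cos(ct)
            cy += T * v * math.sin(ct)
            ct += T * w

        c = cost(v_seq, w_seq, x, y, theta, target_xy, a1, a2, T)
        if c < best_cost:
            best_cost, best_seq = c, (v_seq, w_seq)

    return best_seq   # (m values of v, m values of omega) to be applied sequentially
```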

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • User Interface Of Digital Computer (AREA)
  • Control Of Electric Motors In General (AREA)

Abstract

Described herein is a method for selecting a command (F) by means of a graphic interface. The method provides a screenful to be represented on a display, wherein the screenful comprises a cursor and a plurality of areas, wherein associated to each area is a respective command (F), and wherein a command (F) is selected by displacing the cursor into the area associated to the respective command. In particular, a user may supply data that identify a requested position (X,Y) of the cursor by means of one or more input devices (I). During a training step, the method records one or more non-assisted trajectories for each command, wherein each non-assisted trajectory is made up of a sequence of a given number of requested positions (X,Y). During a personalisation step, the method processes the non-assisted trajectories for training a classifier, such as an artificial neural network, configured for estimating a target command ( F̂ ) as a function of a sequence of positions. During an operating step, the method then assists (1022) the user during displacement of the cursor. For this purpose, the method estimates (1014), by means of the classifier, a target command ( F̂ ), and then estimates (1020) a sequence of a given number of future values (v, ω) of the movement of the cursor. In particular, the estimation procedure (1020) comprises optimisation, for example by means of MPC (Model Predictive Control), of a cost function determined as a function of the estimated target function.

Description

“Method aimed at patients with motor disabilities for selecting a command by means of a graphic interface, and corresponding system and computer program product”
***
Field of the invention
The present description relates in general to solutions for selecting a command by means of a graphic interface. Various embodiments of the present description have been developed for assisting a patient during movement of a cursor on the graphic interface.
Background
Patients with neurodegenerative diseases, such as multiple sclerosis and ALS (Amyotrophic Lateral Sclerosis), or with lesions (caused by traumas, ischaemia, or cancer) to the first motor neuron or the second motor neuron, who are affected by paralysis or paresis, commonly use human-computer interfaces (HCI) for interacting with a computer and performing pre-set functions.
For instance, the following documents may be cited:
- Raya R., et al., “A Robust Kalman Algorithm to Facilitate Human-Computer Interaction for People with Cerebral Palsy, Using a New Interface Based on Inertial Sensors”, Sensors 2012, 12, pp. 3049-3067, DOI: 10.3390/s120303049;
- Raya R., et al., “Design of Input/Output Mapping for a Head-Mounted Interface According to Motor Signs Caused by Cerebral Palsy”, Assistive Technology Research Series, 33, pp. 1039-1044, 2013, DOI: 10.3233/978-1-61499-304-9;
- Lopez-Vicente A., et al., “Adaptive inputs in an interface for people with Dyskinetic Cerebral Palsy: Learning and usability”, Technology and Disability, 28, pp. 79-89, 2016, DOI: 10.3233/TAD-160446;
- Velasco M.A., et al., “Human-computer interaction for users with cerebral palsy based on head orientation: may cursor’s movement be modeled by Fitts’s law?”, International Journal of Human-Computer Studies, Volume 106, 2017, pp. 1-9, DOI: 10.1016/j.ijhcs.2017.05.002;
- Raya R., et al., “Empowering the autonomy of children with cognitive and physical impairments by inertial head tracking”, Procedia Chemistry, Volume 1, Issue 1, September 2009, pp. 726-729, DOI: 10.1016/j.proche.2009.07.181;
- Raya R., et al., “Wearable inertial mouse for children with physical and cognitive impairments”, Sensors and Actuators, A: Physical, 2010, 162(2), pp. 248-259; and
- Raya R., et al., “Characterizing head motor disorders to create novel interfaces for people with cerebral palsy: creating an alternative communication channel by head motion”, IEEE Int. Conf. Rehabil. Robot., 2011, 2011:5975409, DOI: 10.1109/ICORR.2011.5975409.
In particular, the document Sesin A., et al., “Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability”, Electrical and Computer Engineering Faculty Publications, 28, 2008, DOI: 10.1682/JRRD.2007.05.0075, describes an adaptive HCI capable of assisting persons with serious motor disabilities.
The HCI proposed in the above document uses as input device an eye-tracker, i.e., a device for oculometry (namely, ocular monitoring/eye-tracking). This device is used for controlling displacement of the cursor of the mouse on a computer. However, the displacements of the eye, above all saccades, may generate sporadic movements of the cursor. Even though such displacements of the eye are frequently imperceptible by a human eye, they may have an adverse effect on the accuracy of an HCI that uses the position of the eye for interacting with a computer.
Consequently, the document cited proposes an HCI capable of adapting to each user, also compensating for the noise, which may vary in time. In particular, the document proposes an artificial neural network (ANN), such as a network of a multilayer perceptron type, which, appropriately configured and trained, minimises flutter of the cursor. For instance, during a training step, the user carries out a brief training session, and the neural network learns the relation between the raw co-ordinates of the eye (as supplied by the eye-tracker) and the required position of the cursor. In the document cited, for this purpose a small graphic interface is used, which hence enables generation of different user profiles that have respective configurations for the neural network.
Similar problems are posed also when a head-tracking device is used as input device, i.e., a device for monitoring the position and/or the displacement of a user’s head. For instance, typically such a device comprises a triaxial accelerometer and/or gyroscope and appropriate fixing means for fixing the sensors to the user’s head.
Also other solutions are known, which enable reduction of the noise/flutter of the cursor. For instance, the following documents may be cited: US 2007/0216641 A1, US 2011/0050563 A1, US 8,566,696 B1, or Ziebart B., et al., “Probabilistic Pointing Target Prediction via Inverse Optimal Control”, Proceedings International Conference on Intelligent User Interfaces, 2012, DOI: 10.1145/2166966.2166968. Use of an HCI of this sort has been addressed also in other documents, such as US 9,563,740 B2, US 9,325,799 B2, US 8,702,629 B2, US 8,441,356 B1, and US RE46310 E1.
Summary of the invention
The object of the present description is to provide new solutions for moving a cursor on a monitor on the basis of input data, for example provided by a subject by means of an eye-tracker or a head-tracker.
According to one or more embodiments, the foregoing object is achieved through a method having the distinctive elements set forth specifically in the ensuing claims. The embodiments moreover regard a corresponding system, as well as a corresponding computer program product that may be loaded into the memory of at least one computer and comprises portions of software code for executing the steps of the method when the product is run on a computer. As used herein, reference to such a computer program product is intended as being equivalent to reference to a computer-readable medium, which contains instructions for controlling a processing system in order to co-ordinate execution of the method. Reference to “at least one computer” is evidently intended to highlight the possibility of the present disclosure being implemented in a distributed/modular way.
The claims form an integral part of the technical teaching provided herein in relation to the invention.
As explained previously, the present description relates to solutions for selecting a command by means of a graphic interface. In particular, a control unit provides a screenful to be represented on a display, where the screenful comprises a cursor and a plurality of areas. In particular, associated to each area is a respective command, and a command is selected by displacing the cursor into the area associated to the respective command.
In various embodiments, the processing unit stores a plurality of non- assisted trajectories during a training step. For this purpose, the processing unit selects a target command and indicates the target command to a user. Next, by means of one or more input devices, the user displaces the cursor in the direction of the target command. Consequently, the processing unit receives from the input device (raw) data that identify a requested position of the cursor, and the processing unit moves the cursor as a function of said data. Generally, the input device may supply (raw) data that identify a requested absolute position or a requested displacement of the cursor, e.g. in both a vertical direction and a horizontal direction (in the coordinate system of the display/screenful). Then, by storing the data, the processing unit records one or more non-assisted trajectories for each target command.
During a personalisation step, the processing unit processes the non- assisted trajectories for training a classifier configured for estimating a target command as a function of data that identify a sequence of positions. For instance, in various embodiments, the classifier is an artificial neural network, preferably a network of a feed-forward type. In various embodiments, the processing unit may also extract from each non-assisted trajectory one or more features, such as the initial position of the cursor, the final position of the cursor, the average speed of movement towards each command, and the standard deviation of the speed of movement towards each command. For instance, these features may be supplied as input to the artificial neural network. During an operating step, the processing unit may hence assist the user during displacement of the cursor. For instance, for this purpose, the processing unit again receives from the input device (raw) data that identify a requested position of the cursor. However, in this case, the processing unit estimates, by means of the classifier, a target command. Knowing the estimate of the target command, the processing unit then estimates a sequence of future values for the movement of the cursor (with respect to the current position of the cursor) and moves the cursor sequentially as a function of these future values.
In particular, in various embodiments, the estimation procedure comprises calculation of a cost function determined as a function of a given number of future values for the movement of the cursor and the estimated target command. For instance, typically, the processing unit selects the sequence of future values that minimises the cost function. In various embodiments, the cost function may take into account also a given number of past values of the requested position.
For example, in various embodiments, the cursor is identified with absolute-position data on the display and orientation data, which indicate, for example, the angle of rotation of the cursor with respect to the horizontal axis on the display (i.e. the horizontal axis of the screenful).
In various embodiments, the processing unit hence calculates, for each future value of the movement of the cursor, a respective future value of the position of the cursor. In this case, the cost function may comprise a first term that takes into account the distance between each future value of the position of the cursor and the position of the target command.
Likewise, the processing unit may calculate, for each future value of the movement of the cursor, a respective future value of the orientation of the cursor. In this case, the cost function may comprise a second term that takes into account the future values of the orientation of the cursor with respect to the target command.
In various embodiments, the processing unit moreover estimates, for example by means of interpolation/extrapolation of a given number of past values of the requested position, for each future value of the movement of the cursor a respective future value of the requested position. In this case, the cost function may comprise a third term that takes into account the distance between each future value of the position of the cursor and a respective future value of the estimated requested position.
For instance, in various embodiments, MPC (Model Predictive Control) is used for solving the above optimization problem of the cost function. MPC may also use one or more constraints for each future value of the movement of the cursor, such as constant maximum limits and/or maximum limits determined dynamically for the variations of the speed of displacement and/or the speed of rotation of the cursor, constant maximum limits and/or maximum limits determined dynamically for the absolute value of the speed of displacement and/or the speed of rotation of the cursor, and/or limits for the position data of the cursor.
Brief description of the drawings
Various embodiments will now be described, purely by way of non-limiting example, with reference to the annexed drawings, in which:
- Figure 1 is a schematic representation of a system that enables a user to control one or more functions by means of displacement of a cursor represented on a display;
- Figure 2 shows a block diagram of the connection between the various components of the system of Figure 1 ;
- Figures 3 and 4 show two possible screenfuls that may be represented on the display of the system of Figure 1 ;
- Figures 5 and 6 show two input devices that may be used for displacing the cursor in the system of Figure 1 ;
- Figures 7 and 8 show possible displacements of the cursor on the display of the system of Figure 1 ;
- Figure 9 shows an embodiment of a system capable of learning the characteristics of the data supplied by the input device;
- Figure 10 shows an embodiment of the model of the cursor;
- Figure 11 shows an embodiment of a method for displacing the cursor on the display of the system of Figure 1, taking into account also the learning data, in such a way as to reduce the noise of the displacement of the cursor on the display of the system of Figure 1 ; and
- Figure 12 shows a block diagram that summarises the operations executed by the system of Figure 1.
Detailed description of preferred embodiments
In the ensuing description, various specific details are illustrated aimed at enabling an in-depth understanding of examples of one or more embodiments. The embodiments may be provided without one or more of the specific details, or with other methods, components, materials, etc. In other cases, known structures, materials, or operations are not illustrated or described in detail so that various aspects of the embodiments will not be obscured.
Reference to “an embodiment” or “one embodiment” in the framework of the present description is intended to indicate that a particular configuration, structure, or characteristic described in relation to the embodiment is comprised in at least one embodiment. Hence, phrases such as “in an embodiment” or “in one embodiment” that may be present in various points of this description do not necessarily refer to one and the same embodiment. Moreover, particular conformations, structures, or characteristics may be combined in any adequate way in one or more embodiments.
The references used herein are provided merely for convenience and hence do not define the sphere of protection or the scope of the embodiments.
As explained previously, various embodiments of the present description regard solutions for providing a human-computer interface, for example for subjects with motor disabilities, such as patients with neurodegenerative diseases (e.g., multiple sclerosis and ALS) or with lesions (caused by traumas, ischaemia, or cancer) to the first or second motor neuron, who are affected by paralysis or paresis.
Figures 1 and 2 show the general architecture of a system according to the present description. In the embodiment considered, the system comprises a processing unit PU, a display D, and at least one input device I.
In various embodiments, the processing unit PU presents on the display D a screenful that represents a plurality of functions/actions F. For instance, Figure 3 shows a screenful that comprises eight functions F1, ..., F8. In particular, associated to each function F1, ..., F8 is a respective area on the display D, where the areas are arranged in different positions on the display D. The processing unit PU moreover represents on the display D a cursor C. In particular, in various embodiments, the processing unit PU is configured for moving the cursor C on the display D as a function of the data received from the input device or devices I until the cursor reaches a given area associated to one of the functions F1, ..., F8.
In particular, in various embodiments, the processing unit PU selects an action only if the cursor C stops within the perimeter of the area associated to the action. Preferably, the processing unit PU may also check whether the cursor C remains in the area for a pre-set time. In general, the area may have any shape, such as rectangular, square, circular, etc.
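Purely by way of a non-limiting sketch, a dwell-based selection of this kind may be implemented as follows; the function names, the rectangular areas, and the 1.5-second dwell time are illustrative assumptions and not values prescribed by the present description:

```python
import time

def is_inside(cursor_xy, area):
    """area is assumed to be an axis-aligned rectangle (x_min, y_min, x_max, y_max)."""
    x, y = cursor_xy
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

def dwell_select(get_cursor, areas, dwell_s=1.5, poll_s=0.05):
    """Return the command whose area the cursor stays in for dwell_s seconds.
    get_cursor() returns the current (x, y); areas maps command -> rectangle."""
    current, entered = None, None
    while True:
        xy = get_cursor()
        hit = next((cmd for cmd, area in areas.items() if is_inside(xy, area)), None)
        if hit != current:                 # cursor entered a new area (or left all areas)
            current, entered = hit, time.monotonic()
        elif hit is not None and time.monotonic() - entered >= dwell_s:
            return hit                     # cursor stayed long enough: select the command
        time.sleep(poll_s)
```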
Consequently, the processing unit PU may, for example, be a desktop computer, a portable computer, or an embedded board, such as Raspberry PI®. For instance, the aforesaid processing unit PU may comprise a microprocessor and a memory that contains the software, which is run by the microprocessor. For instance, such software may comprise an operating system (e.g., Microsoft Windows®, Linux®, iOS®, Android®, etc.) and a computer program that manages the device or devices I and the user interface represented on the display D. For this purpose, the processing unit PU comprises:
- a first interface, such as a PS/2, RS-232, or USB (Universal Serial Bus) port, and/or a Bluetooth® transceiver for connection to the device or devices I; and
- a second interface, such as a VGA (Video Graphics Array), DVI (Digital Visual Interface), or HDMI (High-Definition Multimedia Interface) port, for connection to the display D.
In general, by “patient” or “user” is hence meant the user U of the system, who is typically affected by a motor dysfunction. Instead, by “graphic interface” is meant the computer tool (i.e., the computer program run on the processing unit PU) whereby the patient U is able to interact with the system, in particular for displacing the cursor C in such a way as to select a function F.
Consequently, in various embodiments, the input device I supplies data that identify a requested absolute position or a requested displacement of the cursor C in both a vertical direction and a horizontal direction. In particular, by “displacement” is meant a displacement of the cursor C with respect to the previous position of the cursor C, which hence enables recalculation of a requested absolute position. For instance, in various embodiments, the device I may directly supply data that identify a vertical and horizontal displacement of the cursor C (for example, a number of pixels). Consequently, in the simplest case, the device I may comprise a mouse, a touchpad, a track-ball, and/or a joystick. However, the input device I may also comprise other sensors configured for supplying at least two-dimensional data that may be used to determine a displacement of the cursor C. For instance, in various embodiments, the input device I may be configured to monitor the position/movement of a part of the body of the user U, such as his head, hand, and/or eyes. Consequently, the input device I may also comprise a head-tracker and/or an eye-tracker.
For instance, the input device I may comprise an inertial platform I1, which includes a (preferably triaxial) accelerometer and/or gyroscope and possibly a magnetometer configured to detect data that enable identification of the orientation and/or movement of the head of the user U. For instance, the aforesaid sensors that implement a head-tracker may be fixed directly to the head of the user U, for example by means of a band, a helmet, or a headset. In general, commercial devices may also be used as head-tracker, which supply already processed data, typically inclination data (for example, pitch, roll, and yaw). For instance, as shown also in Figure 5, using the data supplied by the device I1, the processing unit may determine an angle of roll α and an angle of pitch β (in the coordinate system of the user). For example, in various embodiments, the device I1 or the processing unit PU is configured for converting the angle of roll α of the head into a horizontal position or displacement (in the coordinate system of the display/screenful) and the angle of pitch β of the head into a vertical position or displacement (in the coordinate system of the display).
In addition or as an alternative, the input device I may comprise one or more video cameras I2. For instance, by means of appropriate image processing, also a video camera I2 may detect the inclination and/or position of the head of the user U. In this case, the device I2 may be set in a fixed position, for example fixed to the display D.
Instead, by focusing the video camera I2 on the eyes of the user U, the video camera I2 may also supply data that identify the position/movement of the pupils of the user U. For instance, this solution is advantageous in the case where the patient is incapable of moving his or her neck. In this case, the video camera I2 (which hence implements an eye-tracker) may also be fixed to the head of the user U. Instead of a video camera I2 with respective image processing, commercial devices such as eye-trackers that supply already processed data may also be used.
For instance, the co-ordinates of the eye may be the Cartesian co-ordinates of the centre of the pupil (or of each pupil) with respect to given reference co-ordinates (i.e., in the coordinate system of the user), which for example correspond to the centre of the eye. Instead, Figure 6 shows an example in which the device I2 supplies polar co-ordinates, i.e., an angle γ and a distance δ of the centre of the pupil (or of each pupil) with respect to the reference co-ordinates.
For instance, in various embodiments, the device I2 or the processing unit is configured for converting a vertical position of the eye with respect to its central position into a vertical position of the cursor C (in the coordinate system of the display), and a horizontal position of the eye with respect to its central position into a horizontal position of the cursor C.
In general, the processing unit PU, the display D, and the device or devices I, such as the video camera I2, may hence be integrated also in a single integrated device, such as a portable computer provided with a webcam, a smartphone, or a tablet.
In various embodiments, the processing unit PU hence processes the data supplied by the device or devices I (for example, the displacement/inclination of the head and/or the position of the eye or eyes) in such a way as to determine a requested horizontal position X and a requested vertical position Y of the cursor C on the display D.
As explained previously, in various embodiments, the processing unit PU is configured for moving the cursor C until the position ( X,Y) of the cursor C reaches the position of one of the functions F on the display D. For instance, to each function F there may be associated a respective software function. In various embodiments, the graphic interface may also support a configuration mode, where an operator may configure the functions F that are represented, for example in order to configure the number of the functions, arrangement of the functions on the display, and/or type of action to be performed and possible configuration parameters for the functions.
For instance, as illustrated in Figure 2, one or more of the functions F may be used for controlling operation of one or more actuators A operatively connected to the processing unit PU. For this purpose, the processing unit PU comprises one or more interfaces for connection to the actuator or actuators A.
For example, in various embodiments, a function F1 could be used for controlling operation of a television set. For instance, for this purpose, the actuators A may comprise a transmitter or a transceiver T/R, such as an infrared transmitter, which enables infrared signals to be sent to the television set.
In various embodiments, a function F2 may be used for sending an emergency signal. For instance, for this purpose, an actuator A may comprise transmission means EC for transmitting the emergency signal, such as:
- an auto-dialler, for example a GSM auto-dialler, for making a call to a pre-set telephone number; and
- a module, such as a mobile-radio telephone module (GSM, GPRS, 3G, LTE, etc.), for sending a message via SMS (Short-Message Service) or via another messaging service, such as e-mail.
In various embodiments, one or more functions (for example, the functions F3, ..., F8) may be configured for driving actuators for a domotic control, such as for switching on/switching off one or more lamps L, an HVAC (Heating, ventilation, and air conditioning) system, a door-opener, a surveillance camera, etc. For this purpose, the processing unit PU may comprise an interface for controlling, for example, switching of one or more switches.
As shown in Figure 4, each function F may even not drive an actuator A directly: once the corresponding position is reached, the processing unit PU may present a further selection screenful in which the user may select from among one or more subfunctions associated to the function selected in the main screenful (see Figure 3). For instance, with reference to control of a television set, the screenful may comprise four functions F11, ..., F14, which are arranged in different positions, respectively, for selecting the next channel, for selecting the previous channel, for raising the volume, or for lowering the volume. The screenful may also comprise a further function F15 to return to the previous screenful.
In general, one or more functions may also be represented in each screenful, such as the function F2 for sending an emergency signal.
Figure 4 moreover shows that the area associated to the functions F may also have different shapes and/or dimensions, for example for facilitating sending of the emergency signal.
Hence, using, for example, a head-tracker or an eye-tracker, the solutions described herein may assist patients who are affected by paresis and are capable of moving (even to a limited extent) their neck or eyes in issuing commands in a domotic system, insofar as the patients who require interfaces of this type generally need assistance in performing everyday actions autonomously, such as switching on lights, regulating the temperature of the room, or activating a pager.
The system forming the subject of the present description aims at controlling in an assisted way displacement of the cursor C on the graphic interface via which the user/patient U is able to activate the functions F that he or she wishes. For instance, in the case of patients affected by paresis, it may make use of an inertial platform fixed to the patient’s head to obtain data regarding the inclination and/or movement of the head of the user U. The functions, however, are not dependent upon the use of this inertial platform; in fact, in the case of patients affected by total paralysis, the system may be interfaced with sensors capable of detecting eye movement in such a way as to move the cursor C on the interface with the same modalities. The use of other sensors is supported, provided that the data supplied may be correlated to a requested two-dimensional absolute position ( X,Y) of a cursor C on a graphic interface.
Figure 12 shows an embodiment of a method that controls the position of the cursor C as a function of the data supplied by the device or devices I.
In the embodiment considered, after a starting step 1000, an optional step 1002 is performed of pre-processing of the data supplied by the device or devices I.
In a step 1004, the cursor C is moved as a function of the data supplied by the device or devices I. In general, the processing unit PU could, in step 1004, displace the cursor C directly to the requested position (X,Y), which in turn is determined as a function of the data supplied by the device or devices I. However, as explained previously, these data may also comprise noise, in particular when devices of the eye-tracker or head-tracker type are used. For this reason, in various embodiments, the processing unit PU is configured for displacing the cursor C taking into account also previous values of the parameters X and Y, thus determining a trajectory that is less noisy. Various embodiments use not only a personalised method for reducing the noise of the displacement of the cursor C, but also a process capable of predicting the action F chosen by the user.
In particular, in various embodiments, the method envisages a training step, in step 1008, and a personalisation step, in step 1010, where the processing unit PU modifies control of functionality of the system, adapting to the different needs and problems of the patient that uses it, irrespective of the pathological condition by which the patient is afflicted and of the severity thereof.
Next, during a normal operating step 1012, the system uses the data acquired for controlling the movement of the cursor C. As will be described hereinafter, thanks to the steps 1008 and 1010, the HCI may also be able to predict/estimate automatically which action F the user/patient U wishes to select and assists him in his selection, thus requiring of the patient the least effort possible.
In the embodiment considered, the processing unit PU determines, in a step 1006, whether the training step or else normal operation should be activated. For instance, the training step may be activated when the program is first launched, periodically and/or by selecting an appropriate function within the program.
Data pre-processing
As mentioned previously, in various embodiments, the processing unit PU may execute, in step 1002, one or more operations on data supplied by the input device or devices I.
For instance, on the basis of the type of data supplied by the input device I, the processing unit PU may convert the co-ordinates of the requested position from the reference system of the input device I to that of the graphic interface. For instance, typically it is sufficient to scale the data supplied on the basis of a coefficient of proportionality. For example, in various embodiments, the following equations are used for conversion between the original data (X, Y) and the converted data (X′, Y′):
X′ = (L′x / Lx) · X
Y′ = (L′y / Ly) · Y
where Lx and Ly correspond to the maximum measurable distances on the respective axis x or y, and the parameters with a prime sign are in the reference system of the graphic interface. In particular, in various embodiments, the reference system of the graphic interface has its origin in its centre, and hence L′x and L′y are equal to half of the respective resolution used, expressed in pixels.
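By way of a minimal sketch of the above conversion (assuming, purely for illustration, a 1920×1080 display whose interface reference system has its origin at the centre), the scaling may be coded as follows; the function name and the example device ranges are hypothetical:

```python
def device_to_interface(X, Y, Lx, Ly, width_px=1920, height_px=1080):
    """Scale a requested position (X, Y), expressed in the reference system of
    the input device (with |X| <= Lx, |Y| <= Ly), into the reference system of
    the graphic interface, whose origin is at the centre of the display and
    whose half-resolutions are L'x = width_px/2 and L'y = height_px/2."""
    Lx_prime = width_px / 2
    Ly_prime = height_px / 2
    X_prime = (Lx_prime / Lx) * X
    Y_prime = (Ly_prime / Ly) * Y
    return X_prime, Y_prime

# Example: a device reporting X in [-30, 30] and Y in [-20, 20].
print(device_to_interface(15.0, -10.0, Lx=30.0, Ly=20.0))
```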
In various embodiments, the processing unit PU may filter the co-ordinates (X, Y) or, preferably, directly the data coming from the input devices I that are used for determining the co-ordinates (X, Y). In general, the aforesaid filtering may also be carried out already in the device I. In fact, excessively brusque movements of the cursor C (also during the training step) are undesired and may be caused by measurement noise or by trembling or shaking of the patient. For instance, the processing unit PU and/or the device I may use a lowpass filter, for example with a 50-Hz passband. As has been mentioned, this filtering is preferably applied to the data acquired by the devices I, for example to the acceleration/inclination signals, and hence temporally precedes calculation of the position of the cursor (X, Y).
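As a non-limiting sketch, a simple first-order (exponential smoothing) low-pass filter applied sample by sample to the raw signals, before calculation of (X, Y), may look as follows; this is a simpler stand-in for the 50-Hz low-pass filter mentioned above, and the smoothing factor is a hypothetical tuning parameter:

```python
class LowPassFilter:
    """First-order exponential smoothing: y[k] = y[k-1] + alpha * (x[k] - y[k-1])."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha      # 0 < alpha <= 1: smaller values filter more strongly
        self.state = None

    def __call__(self, sample):
        if self.state is None:
            self.state = sample
        else:
            self.state += self.alpha * (sample - self.state)
        return self.state

# Example: filter a noisy sequence of inclination samples before mapping them to X.
lp = LowPassFilter(alpha=0.2)
for raw in [0.0, 5.0, 4.0, 20.0, 6.0, 5.5]:
    print(round(lp(raw), 2))
```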
Movement of the cursor
As explained previously, in various embodiments, the processing unit PU does not move the cursor C directly as a function of the current requested position (X, Y) (which, as described previously, is determined as a function of the (raw or pre-processed) data supplied by one or more input devices I), but takes into account also the previous requested positions. In this context, the processing unit PU uses a trajectory of length n made up of pairs/vectors (X, Y) that correspond to the positions determined at instants i = 1, 2, ..., n. In various embodiments, the sampling instants are selected in a uniform way with a constant sampling frequency fc. For instance, the sampling frequency fc may be between 1 and 50 Hz, preferably between 5 and 20 Hz, for example 10 Hz.
For instance, assuming n = 5, the trajectory comprises the following sequence of requested positions:
[X(1), Y(1); X(2), Y(2); X(3), Y(3); X(4), Y(4); X(5), Y(5)]
In various embodiments, the processing unit PU uses two distinct types of trajectories: non-assisted trajectories and assisted trajectories.
In particular, non-assisted trajectories correspond to the trajectories of the requested positions X and Y determined on the basis of the (raw or pre-processed) data sampled. In various embodiments, the processing unit PU is configured for acquiring and storing in a database DB (see also Figure 2) one or more of these trajectories for the purposes of learning and possibly personalisation of the system. These trajectories represent the knowledge base of the system insofar as, being measured in the total absence of supporting action, they contain information regarding the difficulties of the patient in controlling the cursor C. In fact, patients affected by different conditions have different difficulties in use of the interface. Represented by way of example in Figures 7a and 7b are two trajectories for one and the same command, namely, “shift the cursor upwards to the right”. It is evident that the patient who has generated the left-hand trajectory presents a slight quiver perpendicular to the direction of displacement, hence requiring a less invasive supporting action, whereas the patient of the right-hand trajectory presents a slight jerkiness also in a direction opposite to that of the command, thus requiring greater assistance.
The assisted trajectories, instead, represent the sequence of the positions followed by the cursor C in the presence of the supporting action. Represented in Figures 8a and 8b are two possible assisted trajectories for the same patients as those of Figures 7a and 7b, where, in addition to the reduction of disturbance, especially for the left-hand trajectory, it is possible to perceive also the predictive aspect, in particular in the right-hand trajectory. In the right-hand trajectory it may in fact be noted how, thanks to the awareness of the direction desired by the patient (which, as will be explained hereinafter, may be obtained on the basis of the study of the non-assisted trajectories), the cursor C continues to move, albeit more slowly, in the desired direction also in the time intervals during which the patient presented jerks that would have moved the cursor C in the opposite direction.
Training step
As explained previously, in various embodiments, in order to be able to personalise the assistance, the graphic interface envisages a training step 1008, during which the system collects non-assisted trajectories and the respective target command F. The trajectory-command pairs are entered in the database DB, i.e., the knowledge base of the system. In various embodiments, the amount of data to be collected for completing personalisation may be modifiable.
For instance, to carry out learning, the program may automatically launch the training step, in step 1006, at initial start-up of the program and until the learning operation is completed. In addition or as an alternative, the program may envisage a specific option for configuration of the program.
In various embodiments, collection of the non-assisted trajectories occurs in a guided way. For instance, as shown in Figure 9, the processing unit PU is configured for communicating (visually on the display D and/or acoustically by driving a speaker) to the user U to select given commands F by setting the cursor C in different initial positions (e.g., the cursor at the centre of the interface, the cursor far from or close to the target command, close to another command, etc.) so as to guarantee enough information for the knowledge base of the system. Next, the processing unit PU periodically monitors the data supplied by the device or devices I and determines the requested co-ordinates (X, Y). Preferably, the processing unit PU directly displaces the cursor C, during this step, to the position (X, Y) currently requested.
During each operation of selection requested, the processing unit PU records one or more trajectories. Hence, in various embodiments, the database DB comprises a set of non-assisted trajectories (each made up of a sequence of positions requested by means of the device I) with respective command F, which may be grouped, for example, in the form of a table, within the database DB:
[Table: non-assisted trajectories, each consisting of a sequence of requested positions (X, Y), paired with the respective target command F]
In various embodiments, the typical length of the trajectories may be twelve samples selected with a sampling time Tc = 1/fc of 0.1 s.
For instance, in the case where the user uses twenty-four samplings to reach the position requested, the program will record a first trajectory of twelve samples from the initial position of the learning process to an intermediate position, and a second trajectory of twelve samples from the intermediate position up to the final position. In general, it is not important whether the user also reaches the final position since the purpose of the training step is the collection of trajectories directed towards a given target. For instance, for this purpose, a time out or a maximum limit of trajectories acquired may be envisaged for a given selected command. For instance, typically the number of trajectories acquired for a selected command may be between three and fifteen. Moreover, for an average patient collection of a total number of non-assisted trajectories equal to fifty times the number of commands selectable on the interface may be envisaged so that the collection may be completed within just a few minutes.
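The following sketch illustrates, under the assumptions above (twelve-sample windows), one possible way of splitting the samples recorded during a guided selection into fixed-length non-assisted trajectories paired with the target command; all names are hypothetical:

```python
def split_into_trajectories(samples, target_command, length=12):
    """samples is a list of requested positions (X, Y) recorded during one
    guided selection; returns (trajectory, target_command) pairs, each
    trajectory containing exactly `length` consecutive samples."""
    records = []
    for start in range(0, len(samples) - length + 1, length):
        window = samples[start:start + length]
        records.append((window, target_command))
    return records

# Example: 24 samples recorded towards command "F3" give two 12-sample trajectories.
samples = [(i * 1.0, i * 0.5) for i in range(24)]
for traj, cmd in split_into_trajectories(samples, "F3"):
    print(cmd, traj[0], "->", traj[-1])
```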
In various embodiments, the corresponding parameters (length of the trajectories stored in the database DB, number of trajectories to be acquired and/or sampling time/frequency Tc/fc) may be modifiable, for example by the physician or by the operator who carries out installation of the system according to the requirements of the patient.
In various embodiments, the data of the system collected and stored in the database DB may be updated periodically, even only partially, so as to be able to adapt the data to the evolution, whether towards improvement or towards aggravation, of the condition of the patient.
Personalisation step
At the end of the step of collection of information, the processing unit proceeds to the personalisation step 1010. The interface forming the subject of the present description bases its predictive capacities on identification of the command F desired/requested by the user. In particular, in various embodiments, the system is configured for identifying, on the basis of the study of the signals coming from the input sensors I, towards which command F from among the various ones available the user would wish to move the cursor C.
In various embodiments, to obtain this result, the processing unit PU uses the knowledge base collected in the previous step for training a multi-class classifier based upon machine-learning techniques.
In particular, in various embodiments, the analysis of the data stored in the database DB that constitutes the knowledge base starts from a process of extraction of the characterising information, the so-called features, contained in the data collected. For this purpose, in various embodiments, the processing unit PU is configured to determine one or more of the following features for each non-assisted trajectory:
- initial position of the cursor;
- final position of the cursor;
- average speed of movement towards each target/function F; and
- standard deviation of the speed of movement towards each target/function F.
Hence, in general, also other data that identify the non-assisted trajectories may be stored in the database DB provided that these data enable determination of the aforesaid parameters of the trajectory, such as an initial position of the cursor and displacement data, or directly the aforesaid parameters.
In various embodiments, the features extracted are used for training a classifier capable of estimating the target command desired by the user as a function of a trajectory supplied to the classifier. For instance, in various embodiments, an artificial neural network is used for this purpose. In this case, the features are supplied at input to the artificial neural network, such as a network of a feed-forward type, for classification. In particular, in the embodiment considered, the neural network comprises a first level (input layer), which comprises a number of neurons equal to the number of features, and a last level (output layer), which comprises a number of neurons equal to the number of possible selectable commands F. Hence, in the case where a number of screenfuls are envisaged, a number of classifiers may also be envisaged (one for each screenful) since the number of the functions and their positions may vary. In the case where a network of the feed-forward type is used, these two (input and output) levels are connected to one or more hidden layers. For instance, in various embodiments, two hidden layers are used.
The output of the neural network is hence an estimate of the target command F that the user wishes to activate. In particular, each neuron of the output layer supplies a value (typically comprised between zero and one) that represents the confidence in classification of the respective target, i.e., of the respective command/function. Hence, in various embodiments, the processing unit PU selects as target function the function with the highest output value.
In various embodiments, the neural network is then trained in the classification using as input the features extracted from the non-assisted trajectories stored in the database DB, and the respective target command as requested result. For instance, in various embodiments, the backpropagation method is used for a supervised learning of the network, providing the network with a feedback between its output and the correct requested/target command contained in the knowledge base.
In general, other classifiers may also be used, such as the SVM (Support Vector Machine) method.
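Purely as an illustrative sketch of the classifier just described, the features may be extracted and a feed-forward network with two hidden layers may be trained with an off-the-shelf library such as scikit-learn; the library choice, the layer sizes, the synthetic training data, and all names are assumptions and not part of the present description:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def extract_features(trajectory, command_positions):
    """trajectory: list of (x, y); command_positions: list of (x, y) of the commands F.
    Features: initial position, final position, and mean/std of the speed
    component directed towards each command."""
    traj = np.asarray(trajectory, dtype=float)
    feats = [*traj[0], *traj[-1]]
    steps = np.diff(traj, axis=0)                      # displacement per sample
    for cx, cy in command_positions:
        directions = np.array([cx, cy]) - traj[:-1]    # vectors towards the command
        norms = np.linalg.norm(directions, axis=1) + 1e-9
        speed_towards = (steps * directions / norms[:, None]).sum(axis=1)
        feats += [speed_towards.mean(), speed_towards.std()]
    return np.array(feats)

# Hypothetical knowledge base: synthetic (trajectory, target index) pairs.
command_positions = [(400, 300), (-400, 300), (0, -300)]
rng = np.random.default_rng(0)
X_train, y_train = [], []
for target, (cx, cy) in enumerate(command_positions):
    for _ in range(50):
        t = np.linspace(0, 1, 12)[:, None]
        noisy = t * np.array([cx, cy]) + rng.normal(scale=20, size=(12, 2))
        X_train.append(extract_features(noisy, command_positions))
        y_train.append(target)

# Feed-forward network with two hidden layers, trained by backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
clf.fit(np.array(X_train), np.array(y_train))
print(clf.predict_proba(X_train[:1]))   # one confidence value per selectable command
```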
At the end of the training step, the processing unit PU may then start up, in step 1012, the assisted-operation mode. In particular, the processing unit PU is now able to perceive/determine which target the patient wishes to reach, and hence moves the cursor on the graphic interface so as to facilitate the task for the user.
Cursor model
In various embodiments, as represented in Figure 10, the processing unit identifies the position of the cursor C on the display D with:
- absolute-position data x and y, for example expressed with respective numbers of pixels on the display D; and
- a given angle of rotation θ, which indicates, for example, the orientation of the cursor C with respect to the horizontal axis on the display D.
Hence, in general, the cursor C is oriented in a direction along a main axis xv that passes through the co-ordinate (x, y) and that has an angle θ with respect to the horizontal axis. For instance, considering a cursor shaped like a triangle, the main axis xv may correspond to the axis of symmetry of the cursor C. Instead, the position (x, y) may correspond to the centroid of the cursor C.
In various embodiments, the state of the cursor C is thus represented at each instant t by the triad [x(t), y(t), θ(t)].
For instance, in the case of non-assisted trajectories, the processing unit PU may position the cursor C in the requested position x(t) = X(t), y(t) = Y(t) and may calculate the angle θ(t) on the basis of previous values, for example in such a way that the main axis xv passes through the points [X(t), Y(t)] (current position requested) and [X(t−1), Y(t−1)] (previous position requested). In general, the processing unit PU could also:
- determine the angle θ(t) as a function of a larger number of past values, for example using an interpolation;
- use an angle θ(t) such that the main axis xv passes through the current position requested and the target, i.e., the cursor C points in the direction of the target; or
- not modify the angle θ(t).
Instead, in various embodiments, the processing unit PU models the displacement of the cursor C during the assisted operation so as to simulate an ideal unicycle. In particular, in various embodiments, the processing unit moreover associates to the cursor C a linear speed v in the direction of the axis xv and an angular velocity ω, i.e., the velocity with which the cursor C turns around the position (x, y). In the embodiment considered, the dynamics of the cursor C, in the reference system of the graphic interface, is hence defined by the following differential equations:
dx/dt = v · cos(θ)
dy/dt = v · sin(θ)
dθ/dt = ω
In various embodiments, the processing unit PU is thus configured for determining the parameters v and ω and hence determining the assisted displacement of the cursor C on the display D.
As explained previously, regardless of the specific device I used, the processing unit PU works purely on trajectories, whether assisted or not, irrespective of the device I that has generated them. As long as the device I is able to supply data that may be associated to a requested position ( X,Y) of the cursor C, for example in terms of pixels, the processing unit PU may determine the assisted displacement of the cursor C, as will be described hereinafter.
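A minimal sketch of the unicycle model above, discretised with a sampling time Tc by means of a forward Euler step (the function name and the example values are merely illustrative), is the following:

```python
import math

def step_cursor(x, y, theta, v, omega, Tc=0.1):
    """One discrete step of the unicycle cursor model:
    dx/dt = v*cos(theta), dy/dt = v*sin(theta), dtheta/dt = omega."""
    x_next = x + Tc * v * math.cos(theta)
    y_next = y + Tc * v * math.sin(theta)
    theta_next = theta + Tc * omega
    return x_next, y_next, theta_next

# Example: move the cursor for six instants with constant v and omega.
state = (0.0, 0.0, 0.0)
for _ in range(6):
    state = step_cursor(*state, v=50.0, omega=0.3)
    print(tuple(round(s, 1) for s in state))
```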
Assisted mode
As explained previously, during this step 1012, the processing unit PU moves the cursor C on the display D. Figure 11 shows a possible embodiment of step 1012. Basically, in the embodiment considered, the following operations are included:
- a classification operation 1014 for estimating, as a function of the trajectory followed by the cursor, the target F that the patient U wishes to reach;
- an optimisation operation 1016 for determining the displacement parameters v and ω of the cursor C as a function of the parameters (x, y, θ) of the cursor C, the estimated target F, and the data supplied by the input device or devices I, for example the requested positions (X, Y); and
- a cursor-control operation 1022 for determining the parameters (x, y, θ) of the cursor C as a function of the displacement parameters v and ω.
In particular, the classification operation 1014 uses the classifier trained in steps 1008 and 1010 as a function of the data stored in the database DB. Hence, the classifier receives at input data that identify the same features as those used for training of the classifier, for example:
- the initial position of the cursor;
- the final position of the cursor;
- the average speed towards each target/function F; and
- the standard deviation of the speed towards each target/function F.
For instance, these data may be extracted from the assisted trajectory comprising the positions (x, y) that the cursor follows on the display D. In particular, during this step, the classifier should use a trajectory with the same number of positions as those used during the training step. For example, considering trajectories with twelve positions, the classifier determines the aforesaid features as a function of a trajectory comprising the current position [x(t), y(t)] of the cursor C and the last eleven before it:
[x(t), y(t); x(t−1), y(t−1); ...; x(t−11), y(t−11)]
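A short sketch of how such a sliding window of the last twelve assisted positions may be maintained at run time is given below; the names and the callback structure are illustrative assumptions:

```python
from collections import deque

WINDOW = 12
positions = deque(maxlen=WINDOW)   # holds [(x(t-11), y(t-11)), ..., (x(t), y(t))]

def on_new_cursor_position(x, y, classify):
    """Append the current assisted position and, once the window is full,
    ask the classifier for the estimated target command."""
    positions.append((x, y))
    if len(positions) == WINDOW:
        return classify(list(positions))   # e.g. feature extraction + neural network
    return None                            # not enough history yet
```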
The optimisation step 1016 hence receives at input:
- the estimated target F;
- the data supplied by the device or devices I, for example a requested absolute position (X, Y) or displacement data of the cursor C, for example determined as a function of the acceleration data supplied by the device I1, in which case the data may have been pre-processed in a step 1002; and
- the current position (consisting of the position (x, y) and the angle θ) of the cursor C.
In particular, optimisation 1016 is based upon a trajectory model 1018 and an optimiser 1020. For instance, as described previously, the cursor C may be modeled as a unicycle. In this case, the optimiser 1020 may hence calculate the control to be imparted on the cursor in terms of v and ω in such a way as to minimise a cost function that will be described hereinafter.
Finally, knowing the current parameters (x(t), y(t), θ(t)) of the cursor C, step 1022 may determine the next parameters (x(t+1), y(t+1), θ(t+1)) of the cursor C as a function of the displacement data v(t+1) and ω(t+1) supplied by the optimiser 1020. In various embodiments, the optimiser 1020 may supply not only a single pair v(t+1) and ω(t+1), but also data for one or more subsequent instants t+2, t+3, ... For example, in various embodiments, the control on v and ω consists of m = 6 values to be applied to the cursor C for the next m sampling instants. For example, this proves an advantage in the case where execution of the optimiser in step 1020 occurs at a frequency lower than the frequency of updating of the position of the cursor in step 1022. In general, even just some pairs v and ω may actually be used in step 1022, insofar as at the next control instant the optimisation process 1018 will be repeated again. For example, in various embodiments, the control instants (i.e., the instants in which step 1020 is executed) are spaced apart by a time equal to m times the sampling time for the trajectories (i.e., the instants in which step 1022 is executed). In this case, the optimiser 1020 hence supplies m pairs v and ω, and the block 1022 is executed m times faster, thus applying the m pairs v and ω sequentially to the cursor C.
In various embodiments, the classifier 1014 may hence also be executed with a frequency that corresponds to the frequency of execution of the optimiser 1020. Consequently, in various embodiments, every m (e.g., six) instants the target F is estimated, and at least the next m values are determined for v and ω. For the next m instants, in step 1022 the cursor C is then moved sequentially on the basis of the m values for v and ω, until a new execution is activated. In general, the classification 1014 may also be executed at the same frequency as that of step 1022. For instance, this may possibly be advantageous when the time of execution of the optimisation 1020 is not constant and potentially longer than the sampling time of step 1022. In fact, in this case, optimisation cannot be executed at each sampling step with the guarantee that the result will be accessible for assisting the very next displacement. Hence, the value of m should be selected for including the maximum execution time of the optimiser 1020.
In various embodiments, the frequency at which updating of the cursor C is carried out in step 1022 may correspond to the sampling frequency fc of the data supplied by the device or devices I. However, in general, these frequencies may be different. This strategy provides high performance, at the same time maintaining the robustness of the system in regard to disturbance (such as quivering of the patient) or errors of the classifier since the optimiser takes into account a plurality of data supplied by the sensor before updating the movement data of the cursor C.
As explained previously, in various embodiments, the optimiser 1020 implemented within the processing unit PU selects m values v and ω (with m ≥ 1) in such a way as to minimise a cost function. For instance, in various embodiments, the optimiser 1020 uses the following cost function J(v, ω):
J(v, ω) = Σ (i = 1, ..., m) [ α1 · d(x(t+i), y(t+i), F) + α2 · A(x(t+i), y(t+i), θ(t+i), F) ]
where:
- the function d(x, y, F) supplies the distance between the respective position (x, y) of the cursor and the respective position of the target F; in particular, the values x(t+1), y(t+1), etc., are calculated according to the model of the cursor as a function of the values to be optimised, i.e., v(t+1), ω(t+1), etc.; and
- the function A supplies a value indicating the respective alignment of the cursor C with respect to the target F, such as the difference between the angle θ and an angle φ(x, y, F) that corresponds to the angle of the straight line that extends from the position (x, y) of the cursor C to the target F; in particular, the values of the angle θ(t+1), ..., are calculated according to the model of the cursor as a function of the values to be optimised, namely, ω(t+1), ...
The coefficients α1 and α2 are relative weights to be applied to the respective objectives:
- the first objective is to reduce the distance d between the position of the cursor and that of the target indicated by the classifier; and
- the second term of the objective function regards the orientation of the cursor: once the target is identified, it is desirable to maintain a path that is as rectilinear as possible towards it, preventing the cursor from oscillating on account of quivering of the patient.
In various embodiments, the cost function J(v, ω) may take into account also further objectives.
For instance, in various embodiments, a third term may penalise the deviation of the assisted trajectory from the non-assisted one. For this purpose, the processing unit PU may estimate a further m positions (X, Y) for the cursor, using this time only the data of the non-assisted trajectory; i.e., the processing unit PU estimates future values [X(t+1), Y(t+1); ...; X(t+m), Y(t+m)] of the non-assisted trajectory on the basis of the previous positions of the non-assisted trajectory [X(t), Y(t); X(t−1), Y(t−1); ...].
For instance, in various embodiments, the processing unit PU may estimate the sequence of future positions [X(t+1), Y(t+1); ...; X(t+m), Y(t+m)] by means of an extrapolation of a given number of the positions of the non-assisted trajectory [X(t), Y(t); X(t−1), Y(t−1); ...].
In various embodiments, the processing unit PU may calculate the mean value of the speed over a plurality of past instants (for example, the last six instants). Hence, the next non-assisted positions (X, Y) of the cursor C may be calculated using as speed the aforesaid mean value. Likewise, the angle θ may be obtained as a function of one or more previous positions (X, Y), for example using the last value, or using an interpolation.
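As an illustrative sketch of this extrapolation (using the six-instant averaging window mentioned above as an example; the names are hypothetical):

```python
import numpy as np

def extrapolate_requested(past_positions, m=6, history=6):
    """past_positions: list of requested positions (X, Y), oldest first.
    Estimate the next m positions by applying the mean velocity observed
    over the last `history` instants."""
    pts = np.asarray(past_positions[-(history + 1):], dtype=float)
    mean_velocity = np.diff(pts, axis=0).mean(axis=0)     # average step per instant
    last = pts[-1]
    return [tuple(last + (i + 1) * mean_velocity) for i in range(m)]

# Example: a roughly rightward-moving requested trajectory.
past = [(0, 0), (2, 1), (4, 1), (6, 2), (8, 2), (10, 3), (12, 3)]
print(extrapolate_requested(past))
```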
In this case, the cost function J(v, ω) may be modified as follows:
J(v, ω) = Σ (i = 1, ..., m) [ α1 · d(x(t+i), y(t+i), F) + α2 · A(x(t+i), y(t+i), θ(t+i), F) + α3 · d′(x(t+i), y(t+i), X(t+i), Y(t+i)) ]
where the function d′(x, y, X, Y) supplies the distance between the respective position (x, y) of the cursor and the respective (estimated) requested position (X, Y).
In various embodiments, the physician and/or the installer may select the parameters (α1, α2) or (α1, α2, α3), for example from a list of predefined profiles. By way of example, patients who are imprecise in the commands on account of trembling of low intensity, but constant, will draw benefit from receiving assistance more focussed on orientation of the cursor (α2), whereas patients whose involuntary movements are such as to oppose the target command will find advantage in receiving greater assistance as regards approach of the cursor to the target command (α1). In various embodiments, the processing unit may also process the data stored in the database DB for selecting automatically the coefficients (α1, α2) or (α1, α2, α3), or propose default values for the coefficients (α1, α2) or (α1, α2, α3). Preferably, the coefficients (α1, α2) or (α1, α2, α3) are selected in such a way that their sum is one.
Hence, in the embodiments considered, the optimiser 1020 is configured for selecting a sequence of future values [v(t+1), ω(t+1); ...; v(t+m), ω(t+m)] in such a way as to minimise the cost function J(v, ω). For instance, in various embodiments, the optimiser 1020 uses MPC for solving the aforesaid optimisation problem.
The MPC method also enables use of one or more constraints. For instance, in various embodiments, the optimiser 1020 uses one or more of the following constraints for i = 1, ..., m:
- maximum limits Δvmax and Δωmax for the variations of v and/or ω, in order to obtain a trajectory that is “smooth” and as natural as possible, without any sharp rotations or accelerations, for example:
|v(t+i) − v(t+i−1)| ≤ Δvmax;  |ω(t+i) − ω(t+i−1)| ≤ Δωmax
- limits vmax, ωmin, and ωmax for the absolute values of v and/or ω; for example:
0 ≤ v(t+i) ≤ vmax;  ωmin ≤ ω(t+i) ≤ ωmax
- limits for the position data (x, y) of the cursor C; for example:
−L′x ≤ x(t+i) ≤ L′x;  −L′y ≤ y(t+i) ≤ L′y
In various embodiments, a constraint (in addition or as an alternative to the first constraint) may also set a maximum variable limit for the speed v on the basis of the distance d of the cursor C from the target F and a minimum value vmin, thus allowing for higher speeds when the cursor C is distant from the target F, for example:
v(t+i) ≤ vmin + Kv · d(x(t+i), y(t+i), F)
where Kv is a pre-determined coefficient. The above constraint hence imposes that the linear speed v be decreased as the cursor gets nearer to the target F. The latter constraint increases the precision and facilitates selection of the target insofar as selection occurs only after the cursor has been kept over the target for a few seconds.
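Finally, purely as a conceptual sketch of the optimisation step, the following example minimises a cost containing the first two terms (distance and alignment) by a sampling-based "random shooting" search over candidate (v, ω) sequences subject to simple box constraints; it is not a full MPC solver, it omits the third term and the rate constraints for brevity, and all names, weights, and limits are illustrative assumptions:

```python
import numpy as np

def cost(seq_v, seq_w, state, target_xy, alphas, Tc=0.1):
    """Evaluate the cost J over a candidate sequence of (v, omega) values,
    propagating the unicycle model forward from the current state."""
    a1, a2 = alphas
    x, y, theta = state
    J = 0.0
    for v, w in zip(seq_v, seq_w):
        x += Tc * v * np.cos(theta)
        y += Tc * v * np.sin(theta)
        theta += Tc * w
        dist = np.hypot(target_xy[0] - x, target_xy[1] - y)                   # distance term
        phi = np.arctan2(target_xy[1] - y, target_xy[0] - x)
        align = abs(np.arctan2(np.sin(theta - phi), np.cos(theta - phi)))     # alignment term
        J += a1 * dist + a2 * align
    return J

def choose_controls(state, target_xy, m=6, samples=500,
                    v_max=200.0, w_max=2.0, alphas=(0.7, 0.3), seed=0):
    """Pick the sequence of m (v, omega) pairs, within the box constraints,
    that minimises the cost function above."""
    rng = np.random.default_rng(seed)
    best, best_J = None, np.inf
    for _ in range(samples):
        seq_v = rng.uniform(0.0, v_max, m)
        seq_w = rng.uniform(-w_max, w_max, m)
        J = cost(seq_v, seq_w, state, target_xy, alphas)
        if J < best_J:
            best, best_J = (seq_v, seq_w), J
    return best

# Example: cursor at the origin pointing right, estimated target up and to the right.
v_seq, w_seq = choose_controls((0.0, 0.0, 0.0), (300.0, 200.0))
print(np.round(v_seq, 1), np.round(w_seq, 2))
```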
Of course, without prejudice to the principles underlying the invention, the details of construction and the embodiments may vary widely with respect to what has been described and illustrated herein purely by way of example, without thereby departing from the scope of the present invention, as defined by the annexed claims.

Claims

1. A method for selecting a command (F) by means of a graphic interface, comprising:
a) providing a screenful to be represented on a display (D), wherein said screenful comprises a cursor (C) and a plurality of areas, wherein associated to each area is a respective command (F), and wherein a command (F) is selected by moving the cursor (C) into the area associated to the respective command (F); b) during a training phase (1008), repeating the following steps a plurality of times:
- selecting one command from among said commands (F) as target command;
- receiving from one or more input devices (I) data that identify a requested position ( X,Y) of said cursor (C), and moving said cursor (C) as a function of said data that identify a requested position ( X,Y ); and
- recording one or more non-assisted trajectories for said target command, wherein each non-assisted trajectory is composed of a sequence of a given number of said requested positions ( X,Y );
c) during a personalisation phase (1010), processing said non-assisted trajectories for training a classifier configured to estimate a target command ( F ) as a function of the data that identify a sequence of said given number of positions;
d) during an operating phase (1012), periodically repeating the following steps:
- receiving from said one or more input devices (I) data that identify a requested position ( X,Y) of said cursor (C);
- estimating (1014), by means of said classifier, a target command (F);
- estimating (1020) a sequence of a given number of future values (v, ω) for the movement of said cursor (C), wherein the estimation procedure (1020) comprises calculating a cost function determined as a function of said given number of future values (v, ω) for the movement of said cursor (C) and said estimated target command (F), and selecting the sequence of said given number of future values (v, ω) that minimises said cost function; and
- moving (1022) said cursor (C) sequentially as a function of said sequence of a given number of future values (v, ω).
2. The method according to Claim 1, wherein said classifier is an artificial neural network, preferably a network of a feed-forward type.
3. The method according to Claim 1 or Claim 2, wherein said processing said non-assisted trajectories comprises extracting from each non-assisted trajectory the following features:
- initial position of the cursor (C);
- final position of the cursor (C);
- average speed towards each command (F); and
- standard deviation of the speed towards each command (F).
4. The method according to Claim 2 and Claim 3, wherein said features are supplied at input to said artificial neural network.
5. The method according to one of the preceding claims, wherein said cursor (C) is identified with:
- absolute-position data (x, y) on the display (D); and
- orientation data (θ), which identify, for example, the rotation angle of the cursor (C) with respect to the horizontal axis on the display (D).
6. The method according to Claim 5, wherein said procedure of estimation (1020) of a sequence of a given number of future values (v, ω) of the movement of said cursor (C) comprises calculating for each future value (v, ω) of the movement of said cursor (C) a respective future value of the position (x, y) of the cursor (C), wherein said cost function comprises:
- a first term that takes into account the distance between each future value of the position (x,y) of the cursor (C) and the position of the target command ( F ).
7. The method according to Claim 6, wherein said procedure of estimation (1020) of a sequence of a given number of future values (v, ω) of the movement of said cursor (C) comprises calculating for each future value (v, ω) of the movement of said cursor (C) a respective future value of the orientation (θ) of the cursor (C), wherein said cost function comprises:
- a second term that takes into account the future values of the orientation (θ) of the cursor (C) with respect to the target command (F).
8. The method according to Claim 6 or Claim 7, wherein said procedure of estimation (1020) of a sequence of a given number of future values (v, ω) of the movement of said cursor (C) comprises estimating, for example by means of extrapolation, for each future value (v, ω) of the movement of said cursor (C) a respective future value of said requested position (X, Y) as a function of said given number of past values of said requested position (X, Y), wherein said cost function comprises:
- a third term that takes into account the distance between each future value of the position (x, y) of the cursor (C) and a respective future value of said requested position (X, Y).
9. The method according to one of the preceding Claims 5 to 8, wherein said sequence of a given number of future values (v, ω) of the movement of said cursor (C) is estimated (1020) using Model Predictive Control, MPC, for solving the problem of optimisation of said cost function.
10. The method according to Claim 9, wherein the Model Predictive Control uses one or more of the following constraints for each future value of the movement of said cursor (C):
- constant and/or dynamically determined maximum limits for the variation of the displacement velocity (v) and/or the rotation velocity (ω) of said cursor (C),
- constant and/or dynamically determined maximum limits for the absolute value of the displacement velocity (v) and/or the rotation velocity (ω) of said cursor (C),
- limits for the position data (x, y) of the cursor (C).
11. A system for selecting a command (F) by means of a graphic interface, comprising:
- a display (D);
- one or more input devices (I); and
- a processing unit (PU) configured for implementing the method according to any one of the preceding claims.
12. A computer program product that can be loaded into a memory of at least one computer and comprises portions of software code for implementing the method according to any one of Claims 1 to 10.
PCT/IB2019/050631 2018-01-29 2019-01-25 Method aimed at patients with motor disabilities for selecting a command by means of a graphic interface, and corresponding system and computer program product WO2019145907A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102018000002114 2018-01-29
IT201800002114A IT201800002114A1 (en) 2018-01-29 2018-01-29 PROCEDURE ADDRESSED TO PATIENTS WITH MOTOR DISABILITIES TO CHOOSE A COMMAND USING A GRAPHIC INTERFACE, RELATED SYSTEM AND IT PRODUCT

Publications (1)

Publication Number Publication Date
WO2019145907A1 true WO2019145907A1 (en) 2019-08-01

Family

ID=62167652

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/050631 WO2019145907A1 (en) 2018-01-29 2019-01-25 Method aimed at patients with motor disabilities for selecting a command by means of a graphic interface, and corresponding system and computer program product

Country Status (2)

Country Link
IT (1) IT201800002114A1 (en)
WO (1) WO2019145907A1 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE46310E1 (en) 1991-12-23 2017-02-14 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
WO1999026126A1 (en) * 1997-11-17 1999-05-27 British Telecommunications Public Limited Company User interface
US20050047629A1 (en) * 2003-08-25 2005-03-03 International Business Machines Corporation System and method for selectively expanding or contracting a portion of a display using eye-gaze tracking
US8702629B2 (en) 2005-03-17 2014-04-22 Great Lakes Neuro Technologies Inc. Movement disorder recovery system and method for continuous monitoring
US20070216641A1 (en) 2006-03-20 2007-09-20 Motorola, Inc. User interface stabilization method and system
US9325799B2 (en) 2006-11-03 2016-04-26 Joanne Walker Systems and methods for computer implemented treatment of behavioral disorders
US8441356B1 (en) 2009-02-16 2013-05-14 Handhold Adaptive, LLC Methods for remote assistance of disabled persons
US20110050563A1 (en) 2009-08-31 2011-03-03 Timothy Douglas Skutt Method and system for a motion compensated input device
US8566696B1 (en) 2011-07-14 2013-10-22 Google Inc. Predicting user navigation events
US9563740B2 (en) 2012-10-16 2017-02-07 The Florida International University Board Of Trustees Neural interface activity simulator

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
LOPEZ-VICENTE A. ET AL.: "Adaptive inputs in an interface for people with Dyskinetic Cerebral Palsy: Learning and usability", TECHNOLOGY AND DISABILITY, vol. 28, 2016, pages 79 - 89
RAYA R ET AL.: "Characterizing head motor disorders to create novel interfaces for people with cerebral palsy: creating an alternative communication channel by head motion", IEEE INT. CONF. REHABIL. ROBOT, vol. 2011, 2011, pages 5975409
RAYA R. ET AL.: "A Robust Kalman Algorithm to Facilitate Human-Computer Interaction for People with Cerebral Palsy, Using a New Interface Based on Inertial Sensors", SENSORS, vol. 12, 2012, pages 3049 - 3067
RAYA R. ET AL.: "Design of Input/Output Mapping for a Head-Mounted Interface According to Motor Signs Caused by Cerebral Palsy", ASSISTIVE TECHNOLOGY RESEARCH SERIES, vol. 33, 2013, pages 1039 - 1044
RAYA R. ET AL.: "Empowering the autonomy of children with cognitive and physical impairments by inertial head tracking", PROCEDIA CHEMISTRY, vol. 1, no. 1, September 2009 (2009-09-01), pages 726 - 729, XP026799648
RAYA R. ET AL.: "Wearable inertial mouse for children with physical and cognitive impairments", SENSORS AND ACTUATORS, A: PHYSICAL, vol. 162, no. 2, 2010, pages 248 - 259, XP027320448
SESIN A. ET AL.: "Electrical and Computer Engineering", vol. 28, 2008, FACULTY PUBLICATIONS, article "Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability"
VELASCO M.A. ET AL.: "Human-computer interaction for users with cerebral palsy based on head orientation. Can cursor's movement be modeled by Fitts's law?", INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, vol. 106, 2017, pages 1 - 9, XP085109959, DOI: doi:10.1016/j.ijhcs.2017.05.002
ZIEBART B. ET AL.: "Probabilistic Pointing Target Prediction via Inverse Optimal Control", PROCEEDINGS INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, 2012

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110518847A (en) * 2019-08-30 2019-11-29 Chang'an University (长安大学) Surface permanent magnet synchronous motor model prediction control method based on BP neural network
CN110518847B (en) * 2019-08-30 2021-03-30 Chang'an University (长安大学) Surface permanent magnet synchronous motor model prediction control method based on BP neural network
CN117339182A (en) * 2023-12-06 2024-01-05 Xi'an Jiaotong-Liverpool University (西交利物浦大学) Rehabilitation system and evaluation method based on rehabilitation of upper limb exercise capacity
CN117339182B (en) * 2023-12-06 2024-03-29 Xi'an Jiaotong-Liverpool University (西交利物浦大学) Rehabilitation system and evaluation method based on rehabilitation of upper limb exercise capacity

Also Published As

Publication number Publication date
IT201800002114A1 (en) 2019-07-29

Similar Documents

Publication Publication Date Title
EP3491493B1 (en) Gesture based control of autonomous vehicles
Mahmud et al. Interface for human machine interaction for assistant devices: A review
Eid et al. A novel eye-gaze-controlled wheelchair system for navigating unknown environments: case study with a person with ALS
US9694496B2 (en) Providing personalized patient care based on electronic health record associated with a user
Majaranta et al. Eye tracking and eye-based human–computer interaction
US11045366B2 (en) Personal vehicle, and control apparatus and control method therefore
US10157313B1 (en) 3D gaze control of robot for navigation and object manipulation
Gautam et al. Eye movement based electronic wheel chair for physically challenged persons
WO2017104207A1 (en) Information processing device, information processing method, and program
EP3166106A1 (en) Intent managing system
KR20190053097A (en) System and method for guiding social interactions
Shariff et al. Enhancing Text Input for Motor Disabilities through IoT and Machine Learning: A Focus on the Swipe-to-Type Algorithm
US10963063B2 (en) Information processing apparatus, information processing method, and program
JP2020504633A (en) Enhanced control of robotic prostheses with cognitive systems
US20210303258A1 (en) Information processing device, information processing method, and recording medium
WO2019145907A1 (en) Method aimed at patients with motor disabilities for selecting a command by means of a graphic interface, and corresponding system and computer program product
Chacón-Quesada et al. Augmented reality controlled smart wheelchair using dynamic signifiers for affordance representation
WO2023178984A1 (en) Methods and systems for multimodal hand state prediction
KR20210073429A (en) Integration Interface Method and System based on Eye tracking and Gesture recognition for Wearable Augmented Reality Device
Maciel et al. Shared control methodology based on head positioning and vector fields for people with quadriplegia
Roy et al. A robust webcam-based eye gaze estimation system for Human-Computer interaction
Mrabet et al. Development of a new intelligent joystick for people with reduced mobility
Aziz et al. Smart Wheelchairs: A Review on Control Methods
KR102301763B1 (en) System and method for controlling mobile robot
Jiang et al. Integrated gesture recognition based interface for people with upper extremity mobility impairments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19706757

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19706757

Country of ref document: EP

Kind code of ref document: A1