EP4258993A1 - Computer-implemented method and system for mapping spatial attention
- Publication number
- EP4258993A1 (application EP21823911A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- stimuli
- stimulus
- scenery
- generating
- reaction times
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
- A61B5/162—Testing reaction times
- A61B5/163—Testing reaction times by tracking eye movement, gaze, or pupil change
- A61B5/168—Evaluating attention deficit, hyperactivity
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements specially adapted to be attached to or worn on the body surface
- A61B5/6813—Specially adapted to be attached to a specific body part
- A61B5/6814—Head
Definitions
- the present invention generally relates to a computer implemented method and a system for mapping spatial attention.
- Spatial attention is a person's capacity to process stimuli in the surrounding space. It is important for perception, for example visual or auditory perception, and can even alter perception: by directing attention to a location in space and prioritizing an area within the visual or auditory field, someone can selectively process visual or auditory information.
- Unilateral spatial neglect (USN) is an attentional deficit, in particular an impaired awareness of stimuli which cannot be attributed to sensory or motor impairments. This neglect may be due to a medical condition and is, for example, relatively common after a stroke. Patients may fail to orient towards, report or respond to stimuli, especially stimuli located on the contra-lesional side of space.
- the impaired awareness of stimuli may negatively influence a patient's independence in activities of daily living: body orientation may become impaired, leading to a high risk of falling or colliding with objects while walking; patients may have difficulty finding products in a shop; they may fail to eat the food on the neglected side of their plate; they may dress only one side of their body; or they may be unaware of traffic lights when crossing the street.
- Well-known and widely spread methods used to assess USN comprise pen-and-paper tasks, such as the Star Cancellation Test, in which a patient is asked to search an array of figures containing 54 small stars, 52 large stars, 13 letters and 10 words to locate and cross out the small stars.
- In the Line Bisection Test, the patient is asked to use a pencil to mark the midpoint of three different lines on a page.
- other assessments may be used, such as behavioural inattention tests or naturalistic action tests, in which a person is asked to perform tasks of daily living, for example eating, telephone dialling, card sorting, dressing or cleaning one's mouth, while the performed actions are observed and assessed by a professional.
- this kind of assessment may be relatively time-consuming.
- the presence of an assessor or observer may influence the performance of the patient.
- the invention aims at providing a solution for mapping visuospatial attention which is able to provide a relatively complete assessment of spatial attention within a reasonable time span.
- a computer-implemented method for mapping visuospatial attention comprises the steps of providing a three-dimensional scenery on a display to a person, generating stimuli at respective locations in said scenery, measuring, when generating a stimulus of said stimuli, a reaction time between the generating of said stimulus and a spotting of said stimulus by the person, and representing said reaction times as a function of the locations, thereby visually representing the spatial attention of the person.
- the present computer-implemented method provides a three-dimensional scenery on a screen. Even if the screen itself is two-dimensional, the represented scenery is three-dimensional, i.e. it provides some effect of depth of sight, for example a view of a forest, a river, a road, a playground, a kitchen or any other three-dimensional scenery.
- a map may be obtained representing spatial attention three-dimensionally.
- since the reaction time is measured between the generating and the spotting of the stimuli, no additional potential difficulties, such as naming objects or pointing to stimuli, are added to the assessment which might influence the result.
- a person may for example spot a stimulus without being able to quickly name it because of word finding problems, which may then negatively influence the assessment.
- assessment of spatial attention is mostly done in a binary manner: either a stimulus is spotted or it is not. Contrary to these binary prior-art methods, the present method can provide more nuanced information in that it maps the reaction time between the generating and the spotting of a stimulus by the user.
- the method further includes the steps of feeding said measured reaction times into a statistical model modelling the probability that the stimuli at respective locations are spotted after a predetermined time interval, and obtaining from the statistical model estimated reaction times for further locations in said scenery.
- Estimated reaction times for further locations may include a mean estimated reaction time as well as a standard deviation or a variance of said mean estimated reaction time for said further locations.
- the feeding of measured reaction times into a statistical model and obtaining estimated reaction times for further locations can decrease the time needed to perform the mapping of spatial attention while keeping an equal or even higher level of accuracy in that fewer stimuli at fewer locations can be generated. Generating stimuli at as many locations as possible may provide a lot of information on the person’s spatial attention but may be relatively time consuming. Decrease in length of such a mapping of spatial attention has turned out to be an important factor since attention of a person may decrease during performance of the mapping.
- the stimuli are preferably visual stimuli or auditory stimuli.
- Visual stimuli may for example include birds appearing in a tree of a forest, fish in a river, cars on a street, a ball or a balloon on a playground, objects in a cupboard, or many others.
- Auditory stimuli may include various sounds, such as shouting, honking, crying, beeping and many others on various volume levels coming from varying angles.
- the method may be performed with visual stimuli only, providing a mapping of visuospatial attention, or with auditory stimuli only, providing a mapping of auditory attention.
- the method may also combine visual and auditory stimuli by providing both stimuli simultaneously at a same location or by providing visual and auditory stimuli at different locations, in which cases the method can provide a mapping of spatial attention including both visual and auditory attention.
- the measuring of the reaction time can advantageously comprise tracking a position of the person’s pupils.
- Such tracking can be done with known eye-tracking cameras, preferably one camera per eye.
- the at least one eye tracking camera is configured to follow a direction of sight of a person’s pupil.
- when the person's direction of sight stays within a predetermined range around the location of a generated stimulus, the method can consider the stimulus as having been spotted by the person. As an example, visually fixating a stimulus for one or two seconds may be considered as spotting the stimulus.
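This fixation-based spotting criterion can be sketched in code. The following Python fragment is purely illustrative (the function name, the angular range and the dwell time are assumptions, not taken from the patent): a stimulus counts as spotted once the gaze direction stays within a given angular range of the stimulus direction for a dwell time.

```python
import math

def is_spotted(gaze_samples, stimulus_dir, max_angle_deg=5.0, dwell_s=1.0):
    """Return the reaction time (s) at which the stimulus counts as spotted,
    or None if it was never spotted. `gaze_samples` is a list of
    (timestamp, unit_gaze_vector) pairs; the stimulus counts as spotted once
    gaze stays within `max_angle_deg` of the (unit) stimulus direction for
    `dwell_s` seconds. All names and thresholds are illustrative."""
    cos_thresh = math.cos(math.radians(max_angle_deg))
    fixation_start = None
    for t, g in gaze_samples:
        dot = sum(a * b for a, b in zip(g, stimulus_dir))
        if dot >= cos_thresh:
            if fixation_start is None:
                fixation_start = t
            if t - fixation_start >= dwell_s:
                # reaction time = onset of the stable fixation
                return fixation_start
        else:
            fixation_start = None  # gaze drifted away: restart the dwell clock
    return None
```

A longer dwell time (e.g. two seconds, as mentioned above) simply means passing `dwell_s=2.0`.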
- for auditory stimuli, the predetermined range where to look may for example be larger than for visual stimuli, for example up to twice as large, since the direction a sound is coming from is harder to fixate by eye.
- Measuring reaction time in this way has the advantage of being relatively easy for the patient, without requiring an active action from the user that might add difficulty, such as naming, pointing, pushing a button, or any other indication of having spotted the stimulus.
- the display may be a head mounted display.
- the head mounted display can be configured to provide two different representations of the scenery onto the respective eyes of a person to provide a perception of depth within the displayed scenery.
- the head mounted display can for example include two displays, in particular one display per eye.
- a head mounted display can be configured to provide an impression of full immersion of the user in the three-dimensional scenery, which is generally referred to as virtual reality (VR).
- a head mounted display can provide a compact solution which is able to create a relatively realistic three-dimensional scenery.
- a stereoscopic view on the three-dimensional scenery may be obtained by displaying anaglyph scenery images and by using anaglyph glasses. It is also possible to obtain an impression of immersion through the use of large screens or displays around the user.
- Other displays known to the person skilled in the art are possible, as long as they can provide a wide field of view providing an impression of immersion of the user in the three-dimensional scenery.
- the representing can for example include a heatmap of the reaction times, including both measured reaction times as well as estimated reaction times.
- a heatmap is a data visualization technique configured to represent a magnitude or a measure of a phenomenon by a colour.
- a given colour, for example red, may then be attributed to reaction times above a predetermined threshold, indicating that a stimulus has not been spotted within a predetermined time range, while other colours, for example green, yellow or orange, may be attributed to reaction times smaller than said predetermined threshold.
- Various colour schemes or colour scales may be used.
- the heatmap can preferably include an associated heatmap configured to visualize a standard deviation or variance associated with the measured reaction times as well as with the estimated reaction times.
- Such a heatmap, or set of heatmaps, can allow two- and/or three-dimensional displaying of measured reaction times to stimuli as a function of the respective locations of said stimuli.
- the heatmap may for example be a three-dimensional cloud of coloured points or a two-dimensional slice of a three-dimensional map.
- Other visualization techniques may be used as well.
- the method can further comprise the step of providing at least one cue towards the stimulus.
- the cues may be visual cues, or auditory cues, or any other sensory cues configured to attract a user’s attention towards the stimulus.
- This cue can for example be a sign, for example a visual sign, such as an arrow or any other indicator, stationary or moving, leading the attention of the user towards the stimulus.
- a cue may for example be a cue growing in size over time or growing in clarity over time such that a stimulus is first generated without cue, and that a cue is then provided while becoming a ‘stronger’ cue over time attracting more and more attention of the user towards the stimulus.
- Such a cue may be especially advantageous when a first mapping of spatial attention has already been carried out.
- the same computer-implemented method may be used in a training program, the providing of the cue being used in areas where suboptimal spatial attention has been measured.
- a cue may also be provided in a very first mapping of spatial attention without any training purpose.
- the method may further comprise the step of providing distracting elements within the scenery.
- the distracting elements can be configured to distract attention of a user away from the stimuli which should be spotted.
- Said distracting elements may for example include elements which are related to the stimulus while being different, such as other animals popping up when the stimuli to be spotted are only birds.
- Said distracting elements may also be a familiar element in the scenery, for example people passing by in a street when cars need to be spotted.
- a difficulty level is preferably adjustable and may be associated with the relative complexity of the scenery with or without moving elements, with the distribution of locations for generating stimuli, with the generation of cues and/or distracting elements, and/or with a speed of successive generation of stimuli.
- the statistical model is preferably a Gaussian process.
- a Gaussian process is not to be confused with a Gaussian distribution.
- a Gaussian process can model a random process of sampling measured reaction times, i.e. a function over a continuous domain, here time.
- a Gaussian process can also advantageously be used as a prior probability distribution over functions in Bayesian inference in which it can benefit from properties inherited from the normal distribution. Since the Gaussian process can not only provide an estimate but also an uncertainty of that estimate, for example as a standard deviation or a variance, a relatively reliable prediction of reaction times can be provided with relatively few measured data. If a neural network were used to estimate reaction times, a lot more measured reaction times would be necessary to train said network than when using a Gaussian process.
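As a generic sketch of how such a Gaussian process could be used (this is standard Gaussian-process regression with a squared-exponential kernel, not the patented implementation; all names and hyperparameters are illustrative): measured reaction times at a few stimulus locations yield, for any further location, a posterior mean reaction time together with a variance quantifying the uncertainty of that estimate.

```python
import numpy as np

def rbf(A, B, length=0.5, sigma_f=1.0):
    # Squared-exponential kernel between location sets A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X_meas, y_meas, X_query, noise=0.05):
    """Posterior mean and variance of estimated reaction times at the query
    locations X_query, given measured reaction times y_meas at X_meas."""
    K = rbf(X_meas, X_meas) + noise**2 * np.eye(len(X_meas))
    Ks = rbf(X_meas, X_query)
    Kss = rbf(X_query, X_query)
    alpha = np.linalg.solve(K, y_meas)
    mean = Ks.T @ alpha                      # estimated reaction times
    v = np.linalg.solve(K, Ks)
    var = np.diag(Kss - Ks.T @ v)            # uncertainty of the estimates
    return mean, var
```

Near a measured location the variance is small; far from all measurements it approaches the prior variance, which is exactly the behaviour exploited when choosing where to generate the next stimulus.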
- the obtaining estimated reaction times can comprise performing Gaussian process regression.
- This method of interpolation can generate an unbiased prediction of intermediate values between measured reaction times at respective locations, when data are spatially related, which is the case for mapping spatial attention.
- the predicted values preferably include a mean and an associated standard deviation or variance.
- Other methods of interpolation, such as methods based on smoothness, for example splines, may also be used, as well as for example non-linear interpolation or methods based on biased estimators.
- the generating stimuli can advantageously comprise generating a next stimulus where the statistical model includes a high uncertainty, in particular where an uncertainty of an estimated reaction time provided by the statistical model is higher than at other locations or where the uncertainty is higher than a predefined threshold.
- Estimated reaction times for further locations obtained from the statistical model can include a mean estimated reaction time as well as a standard deviation or variance associated to said mean estimated reaction time. The uncertainty can thus be based on said standard deviation or variance associated to said mean estimated reaction time.
- the method may advantageously choose a location for generating a next stimulus based on the uncertainty associated with the reaction time estimated by the statistical model, for example by choosing the location where the uncertainty, in particular the standard deviation or variance, associated with an estimated reaction time is the highest of all further locations, or alternatively, higher than a predefined threshold, for example higher than one standard deviation.
- the choice of a location for generating a next stimulus is not a random process: given the same data, the statistical model would choose the same location for generating a next stimulus based on the uncertainty associated with the estimated reaction times.
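A minimal sketch of this deterministic, uncertainty-driven choice (the function name and threshold semantics are illustrative assumptions): among candidate locations, pick the one whose estimated reaction time carries the highest posterior variance.

```python
import numpy as np

def next_stimulus_location(candidates, variances, threshold=None):
    """Pick the candidate location whose estimated reaction time has the
    highest uncertainty (posterior variance). If `threshold` is given and no
    candidate exceeds it, return None, signalling that no location is
    uncertain enough to justify another stimulus. Deterministic: the same
    model state always yields the same choice."""
    variances = np.asarray(variances)
    if threshold is not None and not (variances > threshold).any():
        return None
    return candidates[int(np.argmax(variances))]
```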
- the method can provide a mapping of spatial attention based on a relatively low number of measurements, which can significantly decrease the time of participation of a person, while providing a relatively high accuracy in the mapping of spatial attention.
- Generating a next stimulus may be repeated a predefined number of times, for example around 20 times, to avoid bias in the results due to fatigue of the person spotting the stimuli. Additionally and/or alternatively, generating a next stimulus may be repeated until the estimated reaction times provided by the statistical model have converged, i.e. until the estimated reaction times no longer change significantly between consecutive iterations. In other words, generating a next stimulus no longer results in a significant change of the estimated reaction times provided by the statistical model, or, visually speaking, in a significant change in the heatmap.
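Such a convergence criterion could be sketched as follows (the tolerance value and the use of the maximum absolute change are illustrative assumptions, not from the patent): iteration stops once two consecutive estimated reaction-time maps differ by less than a tolerance.

```python
import numpy as np

def mapping_converged(prev_map, new_map, tol=0.05):
    """Illustrative stopping criterion: the estimated reaction-time map is
    considered converged when no estimate changed by more than `tol` seconds
    between two consecutive iterations."""
    prev_map = np.asarray(prev_map, dtype=float)
    new_map = np.asarray(new_map, dtype=float)
    return float(np.max(np.abs(new_map - prev_map))) < tol
```

In practice this check would be combined with the predefined maximum number of stimuli mentioned above, whichever is reached first.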
- the generating stimuli can further comprise generating a next stimulus where a reaction time is measured above a predetermined threshold.
- the method of mapping spatial attention can be further applied for training purposes. A successive mapping of spatial attention can then show an evolution of spatial attention over time. It may then be advantageous when the generation rate of stimuli is higher in locations where measured or estimated reaction times to stimuli are relatively high to obtain a training of reaction to stimuli where needed.
- the method can further comprise measuring a user’s head movement.
- a head movement may for example be tracked by a camera or by a gyroscope or by any other detector or sensor of movement or acceleration. The tracking of the head movement may provide additional information on the process of spotting a stimulus.
- a user may for example first turn the head in a given direction before being able to exactly locate and spot the stimulus and to fixate the eyes on it.
- a system for mapping visuospatial attention comprising a head mounted device including a head mounted display configured to provide a three-dimensional scenery and a controller.
- the controller comprises at least one processor and at least one memory including computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the controller to perform the above-described computer implemented method for mapping visuospatial attention.
- Said method comprises the steps of providing a three-dimensional scenery on a display to a person, generating stimuli at respective locations in said scenery, measuring, when generating a stimulus of said stimuli, a reaction time between the generating of said stimulus and a spotting of said stimulus by the person, and representing said reaction times as a function of the locations, thereby visually representing the spatial attention of the person.
- the controller is preferably integrated into the head mounted device.
- Figure 1 shows a side view of a preferred embodiment of a system for mapping spatial attention according to an aspect of the invention;
- Figures 2a - 2d show an embodiment of steps of a computer-implemented method for mapping spatial attention according to another aspect of the invention;
- Figures 3a - 3b show an embodiment of the step of representing reaction times as a function of the locations of the method of Figures 2a - 2d;
- Figure 4 represents a flowchart of a preferred embodiment of the method of Figures 2a - 2d;
- Figure 5 shows a computing system suitable for performing various steps of the method of Figures 2a - 2d.
- Figure 1 shows a side view of a preferred embodiment of a system 1 for mapping spatial attention according to an aspect of the invention.
- the system 1 comprises a head mounted device 2 including a head mounted display 3 which is configured to be worn over the eyes and which is fastened to the head, for example via a headband 2a, the display 3 and the headband 2a together forming a single head mounted device 2.
- the display 3 is configured to provide a three-dimensional scenery 4, as illustrated in Figures 2a - 2d.
- An impression of three-dimensionality of the scenery 4 can be obtained in various ways, known to the person skilled in the art, for example by displaying stereoscopic images.
- the head mounted display 3 can be a single display or a plurality of displays, for example two displays, which can provide two different representations of the scenery 4 onto the respective eyes of a person to provide a perception of depth within the displayed scenery 4.
- the system 1 can further comprise a circuitry 5 arranged to track an eye of a viewer within the displayed scenery 4.
- Circuitry 5 may for example include an eye tracking camera 5, for example one eye tracking camera per eye.
- the eye tracking camera 5 may be included into the head mounted device 2 and is configured to follow a direction of sight or point of gaze of a person’s pupil, which can for example be done by optical eye tracking using a low-resolution camera which is configured to sense a reflection of light by the eye, in particular by the cornea, for example of infrared or near-infrared light. Eye rotation and a direction of gaze can then be deduced from changes in the reflection of the light. Other eye tracking methods can be used as well, for example using dedicated contact lenses or using an electrooculogram.
- the system 1 can further comprise a sensor 6 configured to detect a head movement of the user, for example a gyroscopic sensor or any other known motion detecting sensor.
- the sensor 6 is preferably also included in the head mounted device 2. Tracking of head movement, eye movement and/or gaze ray can also be used to determine a total area or space in which a person searches for stimuli to be spotted, as will be explained further.
- the system 1 further comprises at least one controller comprising at least one processor 7 and at least one memory 8 including computer program code.
- the at least one memory 8 and computer program code are configured to, with the at least one processor 7, cause the controller to perform the method for mapping spatial attention.
- the controller is preferably at least partly integrated into the head mounted device 2.
- the head mounted device 2 may also be wirelessly or wiredly connectable to an external and/or remote controller, for example including additional memory to store results of the method for mapping spatial attention.
- the system 1 may also include headphones, which are not illustrated here, and which may optionally be integrated into the head mounted device 2.
- Fig. 2a - 2d show an embodiment of some steps of the computer-implemented method for mapping spatial attention.
- the method comprises the steps of providing a three-dimensional scenery 4 on a display 3 to a person, for example on a head mounted display 3 as shown in Figure 1.
- the scenery 4 is a hilly landscape, partly forest, partly fields.
- the scenery can also be any other kind of scenery, such as streets with buildings along them, a road or a river in the countryside, a town centre, a crossroads, a picnic table in the countryside, a library or any other scenery, as long as the scenery, in combination with the way of displaying, can provide an impression of depth.
- Said scenery 4 may be stored locally in memory included in the head mounted device 2 or remotely on a computing system, from which the scenery can be transmitted to the head mounted device 2 via a wireless or wired connection.
- said scenery 4 may also be displayed on relatively large screens around a person such that the person can also get an impression of being immersed in the scenery.
- the scenery 4 can be static or moving.
- the method further comprises the step of generating stimuli 9 at respective locations in said scenery 4. Said respective locations may be logged in any known three-dimensional coordinate system.
- the stimuli 9 are visual stimuli, but also auditory stimuli are possible.
- birds can appear in the scenery 4, or the singing of birds may be generated at respective locations in the scenery 4, which is an example of an auditory stimulus.
- Many other visual or auditory stimuli can be imagined, such as for example fish appearing in an underwater scene, people popping up and/or screaming in a town scenery, balloons flying by, food appearing on a picnic table, and many more.
- Stimuli are preferably generated successively rather than simultaneously, so that the user has time to perceive and spot said stimuli 9 one by one.
- a first bird may pop up on the left side of the scenery, as shown in Figure 2a, then another bird may appear around the lower middle of the scenery 4 (Figure 2b).
- a flying bird may appear in the sky, as shown in Figure 2c, and then another bird may appear towards the right side of the scenery 4, as seen in Figure 2d.
- Stimuli may be generated at various depths, depending on the scenery, for example in a range around one meter, or around approximately ten meters, or in between, or closer or further, in a mixed way or in separate test sessions.
- the order of appearance and the location of appearance may be predetermined, or may be adjustable, for example by a person surveying the mapping of spatial attention.
- the direction of looking or the point of gaze of the user may for example be followed by the eye tracking camera 5.
- the method further comprises the step of measuring, when generating a stimulus of said stimuli 9, a reaction time between the generating of said stimulus and a spotting by the person of said stimulus.
- the user is said to have spotted a generated stimulus when the user looks at the stimulus 9, as determined for example by the eye tracking camera 5, for a pre-determined amount of time, for example for at least 1 second or 2 seconds or more or less.
- Said amount of time to define the spotting may be adjustable.
- the spotting of the stimulus by the person may include a predetermined spatial range or solid angle around the location where the stimulus has been generated to account for a potential error margin of the eye tracking.
- the eye tracking may take into account a gaze ray, head movement and/or eye movement.
- if an accuracy of the direction of gaze of the person is expressed as the scalar product of the direction of gaze and the direction towards the position or location of the generated stimulus, then said scalar product is 1 when the person looks exactly at the location of the stimulus and 0 when the person looks at an angle of 90° from the direction of the location where the stimulus has been generated.
- the accuracy of the spotting may for example be chosen to be above 0.9 to be qualified as a spotting of the stimulus by the person.
- the accuracy may also be chosen higher, for example to be above 0.95 or above 0.99 to qualify as a spotting of the stimulus. Said accuracy can be adjustable.
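The scalar-product accuracy described above can be computed directly from the gaze and stimulus directions; the sketch below (function names are illustrative) normalizes both vectors so that the product is 1.0 for a perfect hit and 0.0 at 90°, and compares it against the adjustable accuracy threshold:

```python
import math

def gaze_accuracy(gaze_dir, stimulus_dir):
    """Scalar product of the normalized gaze direction and the normalized
    direction towards the stimulus: 1.0 when the person looks exactly at the
    stimulus, 0.0 at an angle of 90 degrees."""
    def norm(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    g, s = norm(gaze_dir), norm(stimulus_dir)
    return sum(a * b for a, b in zip(g, s))

def qualifies_as_spotting(gaze_dir, stimulus_dir, min_accuracy=0.9):
    # The 0.9 default mirrors the example threshold in the description;
    # it can be raised to 0.95 or 0.99 for a stricter criterion.
    return gaze_accuracy(gaze_dir, stimulus_dir) >= min_accuracy
```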
- a user may not spot a generated stimulus at all, for example because it is generated at a location relatively far away from the previous stimulus, because of inattention, or because the user suffers from a form of spatial neglect at that location.
- the method may therefore include an upper time limit, for example 10 or 15 seconds, above which the stimulus disappears again and is marked as not spotted; this upper time limit may also be adjustable.
- the method can also include at least one cue 10 towards the stimulus 9.
- a cue 10 may for example be a growing arrow, or a circle filling up around the stimulus, or any other cue which may attract attention of the user and direct said attention towards the stimulus 9.
- Figures 3a - 3b show an embodiment of the step of representing reaction times as a function of the locations of the generated stimuli 9 to visually represent the spatial attention of the person according to the method for mapping spatial attention.
- This representation of reaction times may for example be done by a heatmap 11, 11', in which a colour code can be indicative of a predetermined range of reaction times, as for example in heatmap 11, or of a predetermined range of standard deviations associated with said reaction times, as for example in heatmap 11'.
- a first colour 11a may indicate a relatively swift spotting of the stimulus 9 at the respective location in the scenery 4, for example within 3 seconds of the generation of the stimulus 9.
- a second colour 11b may indicate a spotting within a range of for example 4 to 10 seconds.
- a third colour 11c may indicate a reaction time over 10 seconds. Since these colours 11a, 11b and 11c represent a reaction time, both measured as well as estimated, as a function of the location of the respective stimulus, a map is obtained which visually represents the spatial attention of the user: areas coloured in the third colour 11c refer to locations where a stimulus has been spotted relatively late or not at all, indicating areas of what can be called spatial neglect. Other ranges of reaction times may be used, smaller and/or larger, in combination with a plurality of colours or a continuous scale of colours.
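The three-colour scheme described above amounts to binning reaction times against two thresholds. A minimal sketch (the colour names and thresholds follow the example values in the description; the function name is illustrative):

```python
def reaction_time_colour(rt_s, swift=3.0, late=10.0):
    """Map a reaction time in seconds to one of three illustrative heatmap
    colours: 'green' for swift spotting (<= `swift` s), 'orange' for
    intermediate reaction times, 'red' beyond `late` seconds or when the
    stimulus was never spotted (rt_s is None)."""
    if rt_s is None or rt_s > late:
        return "red"     # spotted late or not at all: possible neglect area
    if rt_s <= swift:
        return "green"   # relatively swift spotting
    return "orange"      # intermediate reaction time
```

A finer-grained or continuous colour scale, as mentioned above, would replace the two thresholds by a colour-map lookup over the reaction-time range.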
- a first colour 11'a may indicate a relatively high certainty, or low standard deviation, of the reaction time for the spotting of the stimulus at the respective location in the scenery, for example because the reaction time has been measured rather than estimated at the respective location.
- a second colour 11'b may indicate a reasonable uncertainty, or a standard deviation below a predefined threshold, of the reaction time, for example because the reaction time at the respective location results from an estimation by the statistical model, for example at a location in between or close to locations with measured reaction times.
- a third colour 11'c may be indicative of a relatively high uncertainty, or a relatively high standard deviation, of the estimated reaction times, for example because no stimulus has been generated at or near the respective location.
- the visual representation 11, 11' is a two-dimensional representation, whereas the scenery 4 in which the stimuli 9 are generated is three-dimensional.
- the two-dimensional representations of Figures 3a and 3b may in fact be slices of a three-dimensional heatmap corresponding to the three-dimensional scenery 4, or they may be projections of part of a sphere on a two-dimensional map when the field of view of the user is represented as a sphere.
- Three-dimensional heatmaps may also be used directly for the representation of the reaction times as a function of the locations of the generated stimuli.
- the representation does not only include measured reaction times, but also estimated reaction times, as will be explained further.
- FIG. 4 represents a flowchart of a preferred embodiment of the method for mapping spatial attention.
- a three-dimensional scenery 4 is provided on a display to a person, as explained with reference to Figures 2a - 2d.
- stimuli 9 are generated at respective locations in said scenery 4.
- a reaction time 12, between the generation of said stimulus in step 110 and the spotting of said stimulus by the person in step 120, is measured.
- a next stimulus is generated, preferably after the spotting 120 of said stimulus by the person.
- said reaction times 12 are represented in function of the locations, thereby visually representing spatial attention of the person, as indicated by step 130.
- This representation may be performed on a remote computing system after wired or wireless transmission of the data to said remote computing system. Said transmission may be performed during or after the preceding steps of generating stimuli and measuring reaction times.
- said measured reaction times 12 can be fed into a statistical model 140 modelling the probability that the stimuli at respective locations are spotted after a predetermined time interval.
- such a statistical model can, for example, be a Gaussian process.
- estimated reaction times can be obtained from the statistical model for further locations of the scenery 4, which may for example be done by performing Gaussian process regression. This step may be performed locally on a processor included in the head mounted device 2 or, after wired or wireless transmission of the measured reaction times, on a remote computing system.
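The estimation step can be sketched with a minimal, numpy-only Gaussian process regression. This is a hedged illustration under assumptions not stated in the patent: a squared-exponential (RBF) kernel, a length scale of 15, and example locations given as two angular coordinates.

```python
import numpy as np

def rbf(a, b, length_scale=15.0):
    """Squared-exponential kernel between two sets of 2-D stimulus locations."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_query, noise=1e-6):
    """Posterior mean and standard deviation of the reaction time at the
    query locations, given measured reaction times at the training locations."""
    k = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_s = rbf(x_query, x_train)
    # Mean: regression towards the average measured reaction time.
    alpha = np.linalg.solve(k, y_train - y_train.mean())
    mean = y_train.mean() + k_s @ alpha
    # Variance: prior variance (1.0) reduced by nearby measurements.
    v = np.linalg.solve(k, k_s.T)
    var = 1.0 - np.einsum("ij,ji->i", k_s, v)
    return mean, np.sqrt(np.clip(var, 0.0, None))
```

Locations close to measured stimuli receive estimates with low standard deviation; locations far from any stimulus revert to the prior and carry a high standard deviation, which is exactly the uncertainty information the feedback step 160 exploits.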
- the method can further include a feedback step 160: a next stimulus may be generated where the statistical model exhibits a high uncertainty, in particular where it exhibits the highest uncertainty. Generating a next stimulus may be stopped either after a predetermined number of generated stimuli or after convergence of the estimated reaction times. In other words, when generating a next stimulus no longer significantly modifies the estimated reaction times obtained from the statistical model, the iterations 160 may be stopped. In this way, the method can provide a relatively reliable mapping without testing every single location in the scenery.
- the method can also include an additional feedback step 170: a next stimulus may be generated where a reaction time is measured above a predetermined threshold. In this way, the method can be used for training spatial attention and the mapping can visualize progress in training.
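The feedback steps 160 and 170 and the stopping criteria can be sketched as follows. The helper names, the stimulus budget and the tolerance values are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def next_stimulus_location(candidates, posterior_std):
    """Feedback step 160: pick the candidate location where the statistical
    model is most uncertain (highest posterior standard deviation)."""
    return candidates[int(np.argmax(posterior_std))]

def should_stop(previous_estimates, new_estimates, n_stimuli,
                max_stimuli=50, tol=0.05):
    """Stop after a fixed stimulus budget, or once the estimated reaction
    times have converged (largest change below tol seconds)."""
    if n_stimuli >= max_stimuli:
        return True
    return float(np.max(np.abs(new_estimates - previous_estimates))) < tol

def slow_locations(locations, measured_times, threshold=2.0):
    """Feedback step 170: locations whose measured reaction time exceeds a
    threshold become candidates for re-testing, e.g. for attention training."""
    return locations[measured_times > threshold]
```

Combining `next_stimulus_location` with the uncertainty estimates of the statistical model closes the loop: each new stimulus is placed where it is most informative, so the map converges with far fewer stimuli than exhaustive testing.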
- FIG. 5 shows a suitable computing system 500 comprising circuitry enabling the performance of steps of embodiments of the method for mapping spatial attention according to an aspect of the invention.
- the computing system 500 may at least partly be integrated in the head mounted device 2, as previously described.
- Computing system 500 may in general be formed as a suitable general-purpose computer and comprise a bus 510, a processor 502, a local memory 504, one or more optional input interfaces 514, one or more optional output interfaces 516, a communication interface 512, a storage element interface 506, and one or more storage elements 508.
- Bus 510 may comprise one or more conductors that permit communication among the components of the computing system 500.
- Processor 502 may include any type of conventional processor or microprocessor that interprets and executes programming instructions.
- Local memory 504 may include a random-access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 502 and/or a read only memory (ROM) or another type of static storage device that stores static information and instructions for use by processor 502.
- Input interface 514 may comprise one or more conventional mechanisms that permit an operator or user to input information to the computing device 500, such as a keyboard 520, a mouse 530, a pen, voice recognition and/or biometric mechanisms, a camera, etc.
- Output interface 516 may comprise one or more conventional mechanisms that output information to the operator or user, such as a display 540, etc.
- Communication interface 512 may comprise any transceiver-like mechanism, such as for example one or more Ethernet interfaces, that enables computing system 500 to communicate with other devices and/or systems, for example with other computing devices 581, 582, 583.
- the communication interface 512 of computing system 500 may be connected to such another computing system by means of a local area network (LAN) or a wide area network (WAN) such as for example the internet.
- Storage element interface 506 may comprise a storage interface such as for example a Serial Advanced Technology Attachment (SATA) interface or a Small Computer System Interface (SCSI) for connecting bus 510 to one or more storage elements 508, such as one or more local disks, for example SATA disk drives, and control the reading and writing of data to and/or from these storage elements 508.
- SATA Serial Advanced Technology Attachment
- SCSI Small Computer System Interface
- although the storage element(s) 508 above is/are described as a local disk, in general any other suitable computer-readable medium, such as a removable magnetic disk, optical storage media such as a CD-ROM or DVD-ROM disk, solid-state drives, flash memory cards, etc., could be used.
- circuitry may refer to one or more or all of the following:
- circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
- circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
- the terms top, bottom, over, under and the like are introduced for descriptive purposes and not necessarily to denote relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that embodiments of the invention are capable of operating in other sequences, or in orientations different from the one(s) described or illustrated above.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP20212871 | 2020-12-09 | ||
| PCT/EP2021/084817 WO2022122834A1 (en) | 2020-12-09 | 2021-12-08 | Computer implemented method and system for mapping spatial attention |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| EP4258993A1 (de) | 2023-10-18 |
| EP4258993C0 (de) | 2025-04-23 |
| EP4258993B1 (de) | 2025-04-23 |
Family
ID=74141254
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP21823911.9A Active EP4258993B1 (de) | Computer implemented method and system for mapping spatial attention |
Country Status (2)
| Country | Link |
|---|---|
| EP (1) | EP4258993B1 (de) |
| WO (1) | WO2022122834A1 (de) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120643230B (zh) * | 2025-08-20 | 2025-11-07 | Goertek Inc. | Reaction capability evaluation method, head-mounted display device and computer-readable storage medium |
2021
- 2021-12-08 WO PCT/EP2021/084817 patent/WO2022122834A1/en not_active Ceased
- 2021-12-08 EP EP21823911.9A patent/EP4258993B1/de active Active
Also Published As
| Publication number | Publication date |
|---|---|
| EP4258993C0 (de) | 2025-04-23 |
| WO2022122834A1 (en) | 2022-06-16 |
| EP4258993B1 (de) | 2025-04-23 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20230710 |
| | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |
| | GRAP | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: GRANT OF PATENT IS INTENDED |
| | INTG | Intention to grant announced | Effective date: 20241114 |
| | GRAS | Grant fee paid | Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
| | GRAA | (expected) grant | Free format text: ORIGINAL CODE: 0009210 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
| | AK | Designated contracting states | Kind code of ref document: B1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | REG | Reference to a national code | Ref country code: GB. Ref legal event code: FG4D |
| | REG | Reference to a national code | Ref country code: CH. Ref legal event code: EP |
| | REG | Reference to a national code | Ref country code: DE. Ref legal event code: R096. Ref document number: 602021029734. Country of ref document: DE |
| | REG | Reference to a national code | Ref country code: IE. Ref legal event code: FG4D |
| | U01 | Request for unitary effect filed | Effective date: 20250430 |
| | U07 | Unitary effect registered | Designated state(s): AT BE BG DE DK EE FI FR IT LT LU LV MT NL PT RO SE SI. Effective date: 20250508 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: ES. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250423 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: NO. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250723. Ref country code: GR. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250724 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: PL. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250423 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: HR. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250423 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: RS. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250723 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: IS. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250823 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: SM. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250423 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: CZ. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250423 |
| | PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: SK. Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT. Effective date: 20250423 |