WO2021191126A1 - System - Google Patents

System

Info

Publication number
WO2021191126A1
WO2021191126A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
level
score
audio
signal
Prior art date
Application number
PCT/EP2021/057214
Other languages
English (en)
Inventor
Ali GHADIRZADEH
Mårten BJÖRKMAN
Danica Kragic JENSFELT
Original Assignee
Croseir Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Croseir Ab filed Critical Croseir Ab
Publication of WO2021191126A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/377 Electroencephalography [EEG] using evoked responses
    • A61B5/378 Visual stimuli
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/377 Electroencephalography [EEG] using evoked responses
    • A61B5/38 Acoustic or auditory stimuli
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/117 Identification of persons
    • A61B5/1171 Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A61B5/1176 Recognition of faces
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/163 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Definitions

  • A system comprising: an output device configured to provide an audio and/or visual stimulation to a user; one or more biometric sensors that are configured to provide biometric-signalling, which is representative of body measurements of the user while they are exposed to the audio and/or visual stimulation; and a processor configured to: process the biometric-signalling in order to determine an interest-level-score; and provide a control-signal to the output device based on the interest-level-score, wherein the control-signal is for adjusting the audio and/or visual stimulation that is provided by the output device.
  • Such a system can advantageously adjust audio-visual content in response to a determined interest-level-score to iteratively optimize the audio-visual content.
  • The system can also enable new problems to be solved, such as searching for visual content in a user’s brain, which was not previously possible.
  • The processor may be configured to: iteratively process the biometric-signalling to determine a plurality of interest-level-scores, wherein each interest-level-score is associated with an instance of the audio and/or visual stimulation; iteratively provide a plurality of control-signals to the output device based on associated ones of the plurality of interest-level-scores; determine one of the interest-level-scores as a selected-interest-level-score by applying a function to the plurality of interest-level-scores; and provide an output-signal that is representative of the instance of the audio and/or visual stimulation that is associated with the selected-interest-level-score.
  • The loop controller 220 determines one of the interest-level-scores 222 for a plurality of iterations as a selected-interest-level-score by applying a function to the plurality of interest-level-scores 222. Applying such a function may involve selecting the highest interest-level-score as the selected-interest-level-score. Alternatively, it may involve selecting the lowest interest-level-score, or the interest-level-score that is closest to a target-interest-score. It will be appreciated that the nature of the function will depend on the particular application with which the processor 210 is being used.
  • Each training pair in the training dataset is constructed as follows: (i) the input data is a sequence of biometric measures, such as a few hundred milliseconds of EEG-signalling 308, recorded while the participant is exposed to the stimuli; and (ii) the output data is a similarity measure between the generated stimuli and the target stimuli, e.g. quantified by a distance metric, such as the Euclidean distance, between the latent variable values corresponding to the generated and the target stimuli.
  • The output data for the training can be provided by an operator based on their subjective opinion of the similarity between the stimuli (such as the similarity between: (i) the target face; and (ii) the synthetic human face that was displayed to the participant when the input data was recorded).
  • The trainable parameters of the network are updated such that, for every training pair, when the training input is provided as the input of the network, the produced output of the network is as close as possible to the corresponding training output.
  • This paradigm is known as supervised learning.
  • The optimizer 326 can optimize the generated stimuli to get closer to the target stimuli based on a gradient-based approach (such as stochastic gradient descent), a gradient-free approach (such as the Nelder-Mead optimization algorithm), or Reinforcement Learning (RL), as non-limiting examples. It will be appreciated that any algorithm that optimizes the measured interest level 322 can be used for any specific application.
  • A text description 534 of a face is provided to the optimization algorithm 526.
  • The text description 534 can be used as part of a start-up routine so that the initially displayed image on the display device 502 represents a good starting point for the subsequent iterations.
  • For example, a text description 534 of a “40-year-old man” can be provided, in which case a stock image of a 40-year-old man’s face can be provided as an initial image on the display device 502.
  • The optimization algorithm 526 can use the text description 534 such that the determined latent variable
  • Figure 6 shows an example embodiment of a system that can function as a human-machine interface in order to control a robot 638.
  • The control is based on matching images that are displayed to a user 604 with the user’s thoughts about what they would like the robot 638 to do.
  • Features of Figure 6 that are also shown in Figure 3 have been given corresponding reference numbers in the 600 series, and will not necessarily be described in detail here.
  • The one or more biometric sensors may include a pupil size sensor that provides pupil-size-signalling.
  • The pupil size sensor may include a camera that obtains images of a user’s eye.
  • The pupil-size-signalling can be representative of the pupil size/dilation of the user’s eye while they are being exposed to the audio and/or visual stimulation.
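The iterative loop described above (score each stimulus instance from the biometric response, then apply a selection function to the plurality of interest-level-scores) can be sketched in Python. This is purely illustrative and not part of the patent disclosure: the names `run_loop` and `score_interest`, the toy sensor, and the mean-of-samples interest model are all assumptions standing in for the trained model.

```python
def score_interest(biometric_window):
    """Stand-in interest-level model: simply the mean of the biometric samples."""
    return sum(biometric_window) / len(biometric_window)

def run_loop(stimuli, read_biometrics, select=max):
    """Score each stimulus instance, then apply the selection function
    (highest interest-level-score by default) to pick the selected one."""
    scored = []
    for stimulus in stimuli:
        window = read_biometrics(stimulus)   # the biometric-signalling
        scored.append((score_interest(window), stimulus))
    return select(scored, key=lambda pair: pair[0])

def toy_sensor(stimulus):
    """Deterministic fake sensor: a fixed biometric window per stimulus."""
    return {"face_a": [0.5, 0.7], "face_b": [0.2, 0.4], "face_c": [0.9, 0.1]}[stimulus]

best = run_loop(["face_a", "face_b", "face_c"], toy_sensor)
# best == (0.6, "face_a"): "face_a" has the highest mean response
```

Passing `select=min`, or a function that minimises distance to a target score, reproduces the alternative selection functions mentioned above.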
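The training-pair construction for supervised learning (EEG window as input, latent-space Euclidean distance between generated and target stimuli as output) can likewise be sketched. The helper names `euclidean` and `make_training_pair` are hypothetical; only the distance definition itself comes from the text.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two latent-variable vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def make_training_pair(eeg_window, generated_latent, target_latent):
    """(input, label) pair: the biometric window recorded during stimulation,
    and the latent-space distance between generated and target stimuli."""
    return (eeg_window, euclidean(generated_latent, target_latent))

pair = make_training_pair([0.1, -0.2, 0.05], [1.0, 0.0], [1.0, 1.0])
# pair[1] == 1.0: the two latent vectors differ by one unit in one dimension
```

A network trained on many such pairs can then predict, from the EEG window alone, how close the displayed stimulus is to what the user has in mind.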
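The optimizer step can be illustrated with a minimal gradient-free hill-climber over the latent variable; the text names Nelder-Mead, stochastic gradient descent, and RL as examples, and this simpler random-perturbation scheme is only a stand-in in the same gradient-free family. The simulated interest function, the hidden target, and all names here are assumptions; in the real system the interest level would come from the biometric model.

```python
import random

def optimize_latent(interest_fn, latent, steps=200, sigma=0.1, seed=0):
    """Gradient-free hill-climbing: propose a Gaussian perturbation of the
    latent variable and keep it only if the interest level improves."""
    rng = random.Random(seed)
    best, best_score = list(latent), interest_fn(latent)
    for _ in range(steps):
        candidate = [v + rng.gauss(0.0, sigma) for v in best]
        score = interest_fn(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Simulated interest level: highest (zero) when the latent variable
# matches a hidden target.
target = [0.5, -1.0]
interest = lambda latent: -sum((a - b) ** 2 for a, b in zip(latent, target))

optimized, final_score = optimize_latent(interest, [0.0, 0.0])
```

Any optimizer with the same interface (take an interest function, return an improved latent variable) could be substituted, which matches the text's point that the choice of algorithm is application-specific.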

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Dermatology (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

System (300) comprising: an output device (302) configured to provide an audio and/or visual stimulation (114) to a user (304); and one or more biometric sensors (306) configured to provide biometric-signalling (308) that is representative of body measurements of the user (304) while they are exposed to the audio and/or visual stimulation (114). The system (300) further comprises a processor (310) configured to: process the biometric-signalling (308) in order to determine an interest-level-score (222); and provide a control-signal (312) to the output device (302) based on the interest-level-score (222), wherein the control-signal (312) is for adjusting the audio and/or visual stimulation (114) that is provided by the output device (302).
PCT/EP2021/057214 2020-03-23 2021-03-22 System WO2021191126A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE2050318-1 2020-03-23
SE2050318A SE2050318A1 (en) 2020-03-23 2020-03-23 A system

Publications (1)

Publication Number Publication Date
WO2021191126A1 (fr) 2021-09-30

Family

ID=75339669

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/057214 WO2021191126A1 (fr) 2020-03-23 2021-03-22 System

Country Status (2)

Country Link
SE (1) SE2050318A1 (fr)
WO (1) WO2021191126A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110159467A1 (en) * 2009-12-31 2011-06-30 Mark Peot Eeg-based acceleration of second language learning
US20140223462A1 (en) * 2012-12-04 2014-08-07 Christopher Allen Aimone System and method for enhancing content using brain-state data
US20140347265A1 (en) * 2013-03-15 2014-11-27 Interaxon Inc. Wearable computing apparatus and method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL165586A0 (en) * 2004-12-06 2006-01-15 Daphna Palti Wasserman Multivariate dynamic biometrics system
US20120296476A1 (en) * 2009-10-30 2012-11-22 Richard John Cale Environmental control method and system
US20160235323A1 (en) * 2013-09-25 2016-08-18 Mindmaze Sa Physiological parameter measurement and feedback system
US11266342B2 (en) * 2014-05-30 2022-03-08 The Regents Of The University Of Michigan Brain-computer interface for facilitating direct selection of multiple-choice answers and the identification of state changes
US9778628B2 (en) * 2014-08-07 2017-10-03 Goodrich Corporation Optimization of human supervisors and cyber-physical systems
EP3481294B1 (fr) * 2016-07-11 2021-03-10 Arctop Ltd Method and system for providing a brain-computer interface
EP3576626A4 (fr) * 2017-02-01 2020-12-09 Cerebian Inc. System and method for measuring perceptual experiences
CN111629653A (zh) * 2017-08-23 2020-09-04 神经股份有限公司 Brain-computer interface with high-speed eye-tracking features
CN111542800A (zh) * 2017-11-13 2020-08-14 神经股份有限公司 Brain-computer interface with adaptations for high-speed, accurate and intuitive user interaction


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Eye Tracking in User Experience Design"
Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P.: "InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets", Advances in Neural Information Processing Systems, 2016, pages 2172-2180
YANG EUIJUNG ET AL: "The Emotional, Cognitive, Physiological, and Performance Effects of Variable Time Delay in Robotic Teleoperation", INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, SPRINGER NETHERLANDS, DORDRECHT, vol. 9, no. 4, 8 May 2017 (2017-05-08), pages 491 - 508, XP036306876, ISSN: 1875-4791, [retrieved on 20170508], DOI: 10.1007/S12369-017-0407-X *

Also Published As

Publication number Publication date
SE2050318A1 (en) 2021-09-24

Similar Documents

Publication Publication Date Title
Özdenizci et al. Adversarial deep learning in EEG biometrics
CN115004308A (zh) Method and system for providing an interface for activity recommendations
Mousavi et al. Improving motor imagery BCI with user response to feedback
Kosmyna et al. Adding Human Learning in Brain--Computer Interfaces (BCIs) Towards a Practical Control Modality
JP2021506052A5 (fr)
Rey et al. Testing computational models of letter perception with item-level event-related potentials
JP2022535799A (ja) 認知トレーニング及び監視のためのシステム及び方法
Carabez et al. Convolutional neural networks with 3D input for P300 identification in auditory brain-computer interfaces
Atique et al. Mirror neurons are modulated by grip force and reward expectation in the sensorimotor cortices (S1, M1, PMd, PMv)
US20210183509A1 (en) Interactive user system and method
US11779512B2 (en) Control of sexual stimulation devices using electroencephalography
EP3035317A1 (fr) Système d'équilibrage de charge cognitive et procédé
Zheng et al. Multiclass emotion classification using pupil size in vr: Tuning support vector machines to improve performance
WO2021191126A1 (fr) System
Forney et al. Echo state networks for modeling and classification of EEG signals in mental-task brain-computer interfaces
Morales et al. Saccade landing point prediction: A novel approach based on recurrent neural networks
WO2023104519A1 (fr) Classification de traits de personnalité d'utilisateur pour environnements virtuels adaptatifs dans des parcours d'histoire non linéaires
CA3233781A1 (fr) Intervention sur la sante mentale a l'aide d'un environnement virtuel
Ekiz et al. Long short-term memory network based unobtrusive workload monitoring with consumer grade smartwatches
Hmamouche et al. Exploring the dependencies between behavioral and neuro-physiological time-series extracted from conversations between humans and artificial agents
Fu et al. Impending success or failure? An investigation of gaze-based user predictions during interaction with ontology visualizations
US20210063972A1 (en) Collaborative human edge node devices and related systems and methods
CN111967333A (zh) Signal generation method, system, storage medium and brain-computer interface speller
Rejer et al. Classifier selection for motor imagery brain computer interface
Khemakhem et al. A Novel Deep Multi-Task Learning to Sensing Student Engagement in E-Learning Environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21715782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 24.01.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21715782

Country of ref document: EP

Kind code of ref document: A1