WO2024114480A1 - A visual stimulation method, a brain-computer training method and a brain-computer training system - Google Patents

A visual stimulation method, a brain-computer training method and a brain-computer training system

Info

Publication number
WO2024114480A1
WO2024114480A1 · PCT/CN2023/133431 · CN2023133431W
Authority
WO
WIPO (PCT)
Prior art keywords
visual stimulation
stimulation
scene
brain
visual
Prior art date
Application number
PCT/CN2023/133431
Other languages
English (en)
French (fr)
Inventor
马征
詹阳
Original Assignee
中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Publication of WO2024114480A1 publication Critical patent/WO2024114480A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A61B 5/372 Analysis of electroencephalograms
    • A61B 5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A61B 5/377 Electroencephalography [EEG] using evoked responses
    • A61B 5/378 Visual stimuli
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to the technical field of bioelectric signal processing, and in particular to a visual stimulation method, a brain-computer training method and a brain-computer training system.
  • Attention is a basic cognitive function of the human brain. It is manifested as the brain's ability to select and filter perceptual information from sensory pathways such as vision, hearing, and touch, and to retain the sensory information being attended to; it directly affects higher-level cognitive processing such as executive control, learning, and memory. Attention training is a gradual process. In traditional cognitive function training, a series of behavioral tasks targeting attention, executive control, learning, memory, etc. monitor and promote the visual, auditory, and bodily interactions between a person and the environment, gradually improving attention and other cognitive functions, especially for people with attention disorders. Cognitive function training based on EEG signals and brain-computer interaction provides a new means of interaction for this training process, shows clear advantages in concentration, immersion, and biofeedback regulation, and is well suited to integration in virtual/augmented/mixed reality (VR/AR/MR) environments.
  • VR/AR/MR virtual/augmented/mixed reality
  • the main disadvantages of existing brain-computer training technologies are as follows. First, although brain-computer-interface-based methods can be used in complex control scenarios, existing technology requires users to maintain a high level of attention at all times, which easily frustrates them, so it is difficult for people with attention disorders to achieve good training results. Second, biofeedback methods that control solely according to changes in EEG rhythm offer only a single control mode and low brain-information output during interactive training, so complex control cannot be completed, reducing the fun of the interaction.
  • the main purpose of the present invention is to provide a visual stimulation method, describing an ERP brain-computer interaction technology with low demand on the brain's attention resources. It can be used for interaction scenarios that are more complex and interesting than traditional concentration-based control, while allowing users to control the brain-computer interface at a lower attention level; as training progresses, the required attention level can be adjusted, thereby achieving progressive cognitive training with results superior to other existing cognitive training methods.
  • the present invention provides a visual stimulation method, comprising:
  • When the visual stimulation scene is presented, the stimulation elements change their original forms, but the guiding symbols are not affected. The user only needs to focus on the guiding symbols, without identifying or judging the stimulation elements.
  • during the presentation of the visual stimulation scene, the guide symbol may change at any time, and the user determines whether the guide symbol has changed.
  • the stimulation elements are encoded according to predetermined rules, and the corresponding stimulation elements are presented in sequence according to those rules.
  • the visual stimulation scene is presented in 3D or 2D form, and the visual stimulation scene at least includes a matching scene and an elimination scene, wherein
  • the user searches for a specified target stimulus element from among several stimulus elements;
  • the user looks at several stimulus elements and eliminates the target stimulus elements one by one.
  • the present invention also provides a brain-computer training method, comprising
  • attention assessment specifically includes real-time estimation of the EEG signal power spectrum within a time window of a specific length from the current moment to the past through fast Fourier transform of the collected EEG signal; calculating the energy in the ⁇ band, ⁇ band, and ⁇ band, and taking ( ⁇ + ⁇ )/ ⁇ as the concentration value, the smaller the ratio, the higher the attention level; and evaluating the attention level according to time windows of different lengths, wherein the attention level includes an instantaneous attention level and a long-term attention level.
  • visual stimulation scenes with multiple difficulty levels are set, and according to the assessed user attention level, visual stimulation scenes with a level corresponding to the current attention level are adjusted and presented in real time.
  • the step of receiving the EEG signal for analysis includes:
  • the filtered EEG signal is segmented into data segments of preset length.
  • the step of receiving the EEG signal and decoding it includes:
  • Decoding is performed through a deep neural network decoder according to the data segment, and the structure of the deep neural network decoder is, layer by layer, input layer, time dimension convolution layer, space dimension convolution layer, average pooling layer, time dimension convolution layer, fully connected layer, average pooling layer, and output layer.
  • the present invention also provides a brain-computer training system, comprising
  • the EEG signal collector is used to collect the user's EEG signals in real time and mark events based on the feedback trig signals;
  • a display device used to present visual stimulation as described above to a user and to feed back a trig signal
  • a data storage device for storing visual stimulation data and EEG signal data
  • the control terminal is used to obtain stimulus materials from the data storage device, generate visual stimulus scenes, and send them to the display screen to present visual stimulation; at the same time, it receives EEG signals from the EEG signal collector in real time for analysis and decoding, updates the visual scene content according to the decoding results, and evaluates the attention level.
  • the visual stimulation interaction scene of the present invention is more complex and interesting than the traditional concentration-based control interaction scene, and can obtain training effects that are better than existing methods.
  • FIG1 is a logic block diagram of a brain-computer training method according to an embodiment of the present invention.
  • FIG2 is a schematic diagram of a brain-computer training system according to an embodiment of the present invention.
  • Brain-computer interaction technology can rely on the analysis of brain waves generated by the human brain, bypass the peripheral nerve pathways, and achieve direct interaction between the human brain and the surrounding environment. Since no physical movements are required, brain-computer interaction can replace physical movements in traditional attention, executive control, learning, memory and other cognitive function training, which helps to improve concentration and improve the effectiveness of cognitive training through brain wave biofeedback mechanism.
  • Non-invasive visual brain-computer interface is a major branch of brain-computer interaction technology. It records scalp brain waves through sensors such as Ag/AgCl electrodes and gold-plated electrodes placed above the scalp. It is a method of non-invasively recording brain electrical activity with a low signal-to-noise ratio. Other methods include invasive brain-computer interfaces based on implanted electrodes such as Utah electrodes, but there are certain surgical risks. Visual brain-computer interfaces are mainly divided into event-related potential (ERP) and steady-state visual evoked potential (SSVEP) brain-computer interfaces. They rely on the analysis of brain waves induced by target flicker to achieve the purpose of interaction.
  • ERP event-related potential
  • SSVEP steady-state visual evoked potential
  • ERP brain-computer interface mainly detects the ERP response signal generated by a single flicker of visual stimulation, while SSVEP requires visual stimulation to flicker repeatedly at a certain frequency to detect the induced steady-state periodic wave with the same frequency/phase as the flicker frequency.
  • the signal generation methods and signal characteristics of the two are different.
  • the detection difficulty of ERP signals is higher than that of SSVEP, and the communication rate is also lower than the latter.
  • ERP BCI has significant advantages over SSVEP BCI in the following aspects: first, the flicker frequency is much lower than in SSVEP, so it is less likely to cause visual fatigue; second, unlike SSVEP, which mainly relies on the activity of the occipital visual cortex, ERP also reflects the activity of cortical regions beyond the occipital cortex.
  • ERP signals that brain-computer interfaces rely on have been proven to be closely related to the cognitive processing functions of the human brain in multiple brain regions such as the frontal, parietal, and temporal lobes. Their signal strength and detection performance directly reflect the ability of related cognitive processing.
  • the existing ERP brain-computer interface requires users to maintain a high level of attention during the interaction process, so it is difficult to apply to people with attention disorders such as autism and children with ADHD.
  • For example, in an ERP brain-computer interface speller based on flash stimulation or facial-image stimulation, the user needs to keep attending to the flashing of the spelled characters and silently count the number of flashes.
  • Subjects reported that characters could be output correctly only at a high level of attention; a slight lapse of attention may lead to spelling errors and frustration.
  • the present invention describes an ERP brain-computer interaction technology with low requirements for human brain attention resources, which can be used for more complex and interesting interaction scenarios than traditional concentration-based control, while allowing users to complete the control of the brain-computer interface at a lower attention level, and as the training progresses, the requirements for attention levels can be adjusted, thereby achieving progressive cognitive training and obtaining results that are superior to other existing cognitive training methods.
  • the present invention provides a visual stimulation method. Different from the traditional ERP brain-computer interface which requires the identification of the target of the visual stimulation, the present invention is based on a directional presentation of visual stimulation, does not require the user to identify the visual stimulation, and has a lower attention requirement.
  • a guiding symbol is superimposed at the center of each stimulus element of the generated visual stimulation scene, directing the user's attention resources to the guiding symbol. Since no identification is required during stimulus presentation, a separate test can be conducted to select the guiding symbol.
  • the guiding symbol can be a yellow "cross" or another symbol that can attract attention.
  • the present invention does not limit the form of the guiding symbol; it is not limited to the "cross", and other symbols can also be selected, so as not to affect the user's perception of the background stimulus picture.
  • When the visual stimulation scene is presented, the stimulation elements change their original form and are presented in a variable mode (e.g. replaced with other stimulation images), while the guiding symbols are not affected. The user only needs to pay attention to the guiding symbols, without identifying or judging the stimulation elements.
  • During the presentation of the visual stimulation scene, the guide symbol may change at any time, and the user determines whether the guide symbol has changed.
  • For example, the guide symbol changes from a yellow "cross" to a red "cross", or remains unchanged, e.g. always remaining a yellow "cross".
  • the present invention does not limit the way the guide symbol changes.
  • Each visual stimulus is presented at a certain interval; it is recommended that the time interval between successive stimulus presentations be between 100 and 500 ms.
  • As an example, if the user needs to select one of several separate discs displayed on the screen, he first finds and looks at that disc. A yellow "cross" guiding symbol is displayed at the center of each disc. When stimulation begins, each disc is replaced by a different graphic/image stimulus at a certain time interval and then restored to its original shape. The user's task is only to determine whether the yellow "cross" turns red (or another specified color), without identifying the disc itself or the replacement stimulus, until the system makes a selection based on the EEG signal. Because the guiding symbol changes very slowly or not at all, this task requires far fewer attention resources than traditional methods.
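The presentation scheme above can be sketched as a randomized schedule. The function name, parameters, and seed below are illustrative assumptions; only the 100-500 ms inter-stimulus interval comes from the recommendation above.

```python
import random

def presentation_schedule(n_elements, n_repeats, isi_ms=(100, 500), seed=0):
    """Randomized stimulus order with inter-stimulus intervals drawn from
    the recommended 100-500 ms range; returns (element_index, onset_ms) pairs."""
    rng = random.Random(seed)
    order = [i for i in range(n_elements) for _ in range(n_repeats)]
    rng.shuffle(order)                 # randomize the presentation order
    t, schedule = 0, []
    for idx in order:
        schedule.append((idx, t))      # each onset also serves as the trig mark
        t += rng.randint(*isi_ms)      # next onset after a random interval
    return schedule
```

For instance, `presentation_schedule(4, 3)` presents each of four discs three times, with every successive onset 100-500 ms after the previous one.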
  • the stimulus elements are encoded according to predetermined rules, and the corresponding stimulus elements are presented in sequence according to those rules.
  • When the computer makes a judgment, it first determines the attended row and column based on the ERP signal; the position at their intersection is the target option.
  • the present invention can also use other more efficient encoding methods, such as binomial encoding.
  • the present invention does not specifically limit the encoding method.
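As an illustration of the row/column coding just described, the sketch below addresses a hypothetical 3x3 grid with row and column flashes and takes the intersection of the strongest ERP responses as the target. The grid size, element names, and score inputs are invented for the example.

```python
# Hypothetical 3x3 grid flashed row by row and column by column
# (the classic row/column ERP code; names and scores are illustrative).
ROWS, COLS = 3, 3
elements = [[f"disc{r}{c}" for c in range(COLS)] for r in range(ROWS)]

# One presentation round: 3 row codes + 3 column codes instead of 9 single flashes
sequence = [("row", r) for r in range(ROWS)] + [("col", c) for c in range(COLS)]

def decode(row_scores, col_scores):
    """Select the row and column with the strongest ERP response;
    the element at their intersection is the decoded target."""
    r = max(range(ROWS), key=lambda i: row_scores[i])
    c = max(range(COLS), key=lambda j: col_scores[j])
    return elements[r][c]
```

With illustrative ERP scores, `decode([0.1, 0.9, 0.2], [0.8, 0.1, 0.3])` returns `"disc10"` (row 1, column 0); nine elements are thus addressed with only six stimulus codes, which is why row/column coding is more efficient than flashing each element separately.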
  • the present invention can design an attractive interactive scene according to the traditional cognitive training process, that is, the visual stimulation scene can be presented in 3D or 2D form, and the visual stimulation scene can at least include a matching scene and an elimination scene, wherein
  • the user searches for a specified target stimulus element from among several stimulus elements; as an example, several targets are placed in a comfortable and natural background (such as placing multiple different fruits in different positions in the grass), and the user needs to search for a specified target (such as an apple) from them.
  • a specified target such as an apple
  • the user looks at and eliminates the target stimulus elements one by one among several stimulus elements.
  • several targets are placed in a comfortable and natural background, and the user needs to look at each target and eliminate them one by one to get a reward.
  • the present invention further provides a brain-computer training method, comprising:
  • the receiving of EEG signals for analysis includes: for the real-time acquired EEG waves, the control terminal marks the EEG data according to the trig signals when each stimulus is presented on the display screen, and then sends the signals to the computing unit for signal processing and decoding.
  • In the figure, a unidirectional solid line from the display device to the EEG signal collector represents the trig connection; a bidirectional line between the display device and the control terminal represents data interaction; and a unidirectional dotted line represents an alternative trig method via the vertical blanking (VBL) signal.
  • After receiving the data, the computing unit performs bandpass filtering on the real-time acquired EEG signal. As an embodiment of the present invention, the acquired real-time signal is bandpass filtered from 0.5 Hz to 40 Hz; this can be achieved by connecting a 3rd-order high-pass Butterworth digital filter with a cutoff frequency of 0.5 Hz and a 5th-order low-pass Butterworth digital filter with a cutoff frequency of 40 Hz in series.
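The series connection described above can be sketched with SciPy's Butterworth design. The 1000 Hz sampling rate matches the system requirement stated later; the test-tone frequencies are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 1000  # sampling rate (Hz); the system elsewhere requires >= 1000 Hz

# 3rd-order high-pass Butterworth at 0.5 Hz, in series with a
# 5th-order low-pass Butterworth at 40 Hz (0.5-40 Hz bandpass overall)
hp = butter(3, 0.5, btype="highpass", fs=fs, output="sos")
lp = butter(5, 40.0, btype="lowpass", fs=fs, output="sos")

def bandpass_0p5_40(x):
    """Apply the high-pass and low-pass stages in series."""
    return sosfilt(lp, sosfilt(hp, x))

# Illustration: a 10 Hz in-band tone passes, a 60 Hz tone is attenuated
t = np.arange(0, 2.0, 1 / fs)
noisy = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)
filtered = bandpass_0p5_40(noisy)
```

Second-order-section (`sos`) form is used because cascaded high-order IIR filters are numerically fragile in transfer-function form.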
  • the training method of the correlation space projection filter coefficient u is:
  • k1 target stimulation segments and k0 non-target stimulation segments are collected, denoted S ∈ R^(n×m×k1) and N ∈ R^(n×m×k0), where n is the number of electrodes and m is the length of a single data segment.
  • The values of k1 and k0 affect the performance of the trained filter. It is recommended that k1 be greater than 300, and that k0 be determined by k1 to cover all corresponding non-target stimulation segments.
  • u is the coefficient of the correlation space projection filter.
  • the filtered EEG signal is segmented into data segments with a length of 600 to 1000 ms.
  • the acquired data segments are sent to the deep neural network decoder for decoding.
  • the step of receiving the EEG signal for decoding includes: decoding by a deep neural network decoder according to the data segment.
  • As an embodiment of the present invention, the structure of the deep neural network decoder is described layer by layer as follows (the output of each layer is the input of the next layer):
  • Input layer: input signal X ∈ R^(p×m), where p is the number of latent variables and m is the length of the time-dimension data segment;
  • Layer 1: time-dimension convolution layer, consisting of 16 1D convolution kernels that operate point by point in the time dimension;
  • the kernel length is fs/2, where fs is the sampling rate.
  • Layer 2: spatial convolution layer, consisting of a 1D convolution kernel weighted along the spatial dimension (i.e., the latent-variable dimension), with a kernel length of p (the number of latent variables) and no sliding step, so the kernel is applied once across the spatial dimension;
  • Layer 3: average pooling layer, which takes the average of every 4 input samples in the time dimension as the output;
  • Layer 4: time-dimension convolution layer, consisting of a 1D convolution kernel that operates point by point in the time dimension, with a kernel length of fs/8;
  • Layer 5: fully connected layer, consisting of 1 fully connected network;
  • Layer 6: average pooling layer, which takes the average of every 8 input samples in the time dimension as the output;
  • Layer 7: output layer, which flattens the input data into a one-dimensional vector and outputs a one-hot binary vector with two elements, representing the target and non-target stimulation types respectively; input and output are connected through a fully connected mapping.
  • the decoder training parameters are set as follows: the cross-entropy loss function is used; to avoid the impact of the imbalance between target and non-target stimulation samples, the respective losses are weighted according to the ratio of target to non-target sample counts; the dropout rate is 0.5, i.e. 50% of the weights are discarded; and training runs for 300 to 500 iterations.
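The seven-layer stack above can be traced with a minimal NumPy forward pass using random, untrained weights. The "same" padding mode, fs = 1000 Hz, p = 3 latent variables, and an m = 1000-sample (1000 ms) segment are illustrative assumptions; the description does not fix these details.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, p, m = 1000, 3, 1000            # assumed sampling rate, latent vars, segment length
X = rng.standard_normal((p, m))     # input layer: X in R^(p x m)

# Layer 1: temporal convolution, 16 1-D kernels of length fs/2 ("same" padding assumed)
k1 = rng.standard_normal((16, fs // 2)) * 0.01
h1 = np.stack([[np.convolve(X[c], k, mode="same") for c in range(p)] for k in k1])

# Layer 2: spatial convolution, one kernel of length p collapsing the latent dimension
w2 = rng.standard_normal(p)
h2 = np.tensordot(h1, w2, axes=([1], [0]))            # -> (16, m)

# Layer 3: average pooling over every 4 time samples
h3 = h2[:, : h2.shape[1] - h2.shape[1] % 4].reshape(16, -1, 4).mean(axis=2)

# Layer 4: temporal convolution, kernel length fs/8
k4 = rng.standard_normal(fs // 8) * 0.01
h4 = np.stack([np.convolve(row, k4, mode="same") for row in h3])

# Layer 5: one fully connected map applied along the time dimension
W5 = rng.standard_normal((h4.shape[1], h4.shape[1])) * 0.01
h5 = h4 @ W5

# Layer 6: average pooling over every 8 time samples
h6 = h5[:, : h5.shape[1] - h5.shape[1] % 8].reshape(16, -1, 8).mean(axis=2)

# Layer 7: flatten and fully connect to a 2-way output (target / non-target)
v = h6.ravel()
W7 = rng.standard_normal((2, v.size)) * 0.01
logits = W7 @ v
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                   # softmax over the two classes
```

This sketch only verifies the shape flow of the layer stack; a trained decoder would learn the kernels and weights with the cross-entropy loss and dropout settings described above.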
  • the brain-computer training method also includes attention assessment, which specifically includes real-time estimation of the EEG signal power spectrum within a time window of a specific length from the current moment to the past through fast Fourier transform of the collected EEG signal; calculating the energy in the ⁇ band, ⁇ band, and ⁇ band, and taking ( ⁇ + ⁇ )/ ⁇ as the concentration value, the smaller the ratio, the higher the attention level; and evaluating the attention level according to time windows of different lengths, wherein the attention level includes instantaneous attention level and long-term attention level.
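The ratio-based assessment above can be sketched as follows. The exact band edges (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz) are common conventions assumed here, not values given in the description, and the EEG segment is synthetic.

```python
import numpy as np

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)  # 2 s analysis window ending at the current moment

# Synthetic single-channel EEG segment: theta (6 Hz), alpha (10 Hz), beta (20 Hz)
x = (2.0 * np.sin(2 * np.pi * 6 * t)
     + 1.5 * np.sin(2 * np.pi * 10 * t)
     + 1.0 * np.sin(2 * np.pi * 20 * t))

def band_power(x, fs, lo, hi):
    """Sum of FFT power-spectrum bins in the [lo, hi) Hz band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return float(spec[(freqs >= lo) & (freqs < hi)].sum())

theta = band_power(x, fs, 4, 8)    # assumed theta band
alpha = band_power(x, fs, 8, 13)   # assumed alpha band
beta = band_power(x, fs, 13, 30)   # assumed beta band

concentration = (theta + alpha) / beta   # smaller ratio = higher attention level
```

Running the same computation over a short window gives the instantaneous attention level, and over a longer window the long-term attention level.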
  • the brain-computer training method is provided with visual stimulation scenes of multiple difficulty levels, and the visual stimulation scenes of the level corresponding to the current attention level are adjusted and presented in real time according to the assessed user attention level.
  • the progressive training of the present invention can be understood as different difficulty levels. Ten difficulty levels, 1 to 10, are set according to the number of repetitions of the stimulation, where level 1 means that each stimulation is presented only once, representing the highest difficulty; level 10 means that each stimulation is presented 10 times, representing the lowest difficulty.
  • the present invention performs event marking according to the feedback trig signal, and updates the visual scene content according to the decoding result.
  • the present invention calculates the concentration value reflecting the user's attention level based on the recorded EEG signal, and optionally feeds back to the user on the display.
  • the feedback can be in the form of a bar graph indicating the intensity of attention, or in the form of graphic color changes.
  • the user can select the corresponding level for training according to the level of attention, or the system can adjust the training difficulty and update the scene according to the calculated concentration value, thereby achieving progressive cognitive training that adjusts the required attention level according to the feedback concentration.
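A minimal sketch of this closed loop maps the concentration value onto the ten difficulty levels described above. The threshold values c_min and c_max and the linear mapping are illustrative assumptions; the description does not specify how the ratio translates into a level.

```python
def difficulty_level(concentration, c_min=1.0, c_max=4.0):
    """Map the (theta+alpha)/beta concentration value to the ten difficulty
    levels: level 1 (one presentation per stimulus) is hardest, level 10
    (ten presentations) is easiest. c_min/c_max are illustrative thresholds."""
    c = min(max(concentration, c_min), c_max)   # clamp to the assumed range
    frac = (c - c_min) / (c_max - c_min)        # 0 = best attention, 1 = worst
    return 1 + round(frac * 9)

def repetitions(level):
    """Level n presents each stimulus n times."""
    return level
```

A well-focused user (low ratio) is thus pushed toward level 1, single-presentation trials, while a distracted user gets more repetitions per stimulus, realizing the progressive training loop.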
  • the present invention further provides a brain-computer training system, comprising
  • the EEG signal collector is used to collect the user's EEG signals in real time and mark events based on the feedback trig signals;
  • the wearable EEG signal collector of the present invention is composed of a main frame, electrodes, a control circuit board, an antenna, and a battery.
  • the main frame can be made of hard and lightweight acrylonitrile-butadiene-styrene copolymer (ABS) material combined with 3D printing technology.
  • ABS acrylonitrile-butadiene-styrene copolymer
  • the electrode uses a flexible dry electrode made of Ag/AgCl to ensure effective contact between the electrode and the scalp.
  • the control circuit board can use an open source wireless EEG acquisition board well known in the industry, such as the OpenBCI wireless acquisition board, or a commercial embedded acquisition board, such as the 8-lead BCIduino Bluetooth acquisition module, and can also be implemented by building separate components, which may specifically include operational amplifier circuits, A/D converters, Bluetooth transceivers, etc.
  • the present invention requires a sampling rate of not less than 1000 Hz and a sampling depth of not less than 24 bits.
  • the wearable EEG signal collector of the present invention adopts a few-electrode design, including but not limited to 4 basic signal electrodes and 1 ground electrode.
  • the 4 basic signal electrodes are placed on the forehead, the top of the head, and the left and right occipito-temporal regions, corresponding to the four positions FPz, Cz, PO7 and PO8 in the international 10-20 EEG system.
  • 1 ground electrode is placed between FPz and Cz.
  • the electrode at FPz is mainly used to record theta and beta waves, from which the present invention evaluates the attention level according to the EEG rhythm; the three electrodes at Cz, PO7 and PO8 are used for brain-computer interface decoding.
  • recording electrodes can also be added within a range of about 2 cm around the above-mentioned electrodes, and electrodes at midline positions such as Pz and Oz can be supplemented.
  • the number of electrodes required by the present invention is lower than the number of electrodes required by other ERP brain-computer interfaces, which reduces the cost and complexity of the equipment; and the EEG signal acquisition of the present invention does not involve peripheral nerves, thereby effectively avoiding the distraction caused by limb movements in other interaction methods.
  • a display device used to present visual stimulation as described above to a user and to feed back a trig signal
  • the display device described in the present invention can be a display screen or other display modes such as virtual/augmented/mixed reality (VR/AR/MR).
  • the display device is used to present visual stimulation to the user, and at the same time, a trig signal for stimulation presentation should be provided.
  • the trig signal should indicate the exact moment when the visual stimulation is actually presented on the display device, and is used for marking the stimulation event. Since the ERP signal on which the ERP brain-computer interface relies has the characteristic of time locking, the time accuracy of stimulus presentation is crucial for EEG decoding.
  • the present invention requires the time accuracy of trig to be less than 1ms.
  • the vertical synchronization signal provided by the display device can be used as a trig signal, and other methods such as photocells can also be used to obtain the trig signal, but the above-mentioned time accuracy requirements should be met.
  • As an embodiment, low-latency virtual reality (VR) glasses are used to avoid interference from the surrounding environment and ensure the user's concentration.
  • a data storage device for storing visual stimulation data and EEG signal data
  • the data storage device of the present invention can be implemented with a high-speed solid-state storage module, such as a commercially available PCIe 4.0 NVMe solid-state drive module.
  • the storage capacity is based on the amount of data in actual applications.
  • a solid-state storage module with a capacity of 128GB or more can be selected and implemented in conjunction with a traditional disk storage device to reduce costs.
  • Visual stimulation materials include but are not limited to the following three categories: a) object pictures, i.e. photographs of objects existing in daily life and nature; b) contour pictures, i.e. pictures obtained by processing the photographed object pictures with computer graphics methods to extract their key contours and textures; c) synthetic pictures, i.e. pictures drawn manually or generated by computer graphics.
  • the control terminal is used to obtain stimulus materials from a data storage device, generate a visual stimulus scene, and send it to a display screen to present visual stimulus; receive EEG signals from an EEG signal collector in real time for analysis and decoding, and update the visual scene content according to the decoding results, and evaluate the attention level.
  • the control terminal of the present invention includes a computing unit, which is composed of parallel computing modules such as a high-performance digital signal processor (DSP) and a graphics processing unit (GPU), and is used for analysis, processing and decoding of EEG signals.
  • DSP digital signal processor
  • GPU graphics processing unit
  • the visual stimulation interaction scene of the present invention is more complex and interesting than the traditional interaction scene based on concentration control, and can obtain training effects that are better than existing methods.
  • the system and method of the present invention are not limited to cognitive training, but can also be used for text output, communication, environmental control, etc. in specific environments.
  • the system and method of the present invention are not limited to non-invasive EEG signals, but are also applicable to neural activity signals recorded in an invasive manner, as well as magnetoencephalography (MEG), near-infrared spectroscopy (NIRS), functional magnetic resonance imaging (fMRI) and other signals induced using the method of the present invention.


Abstract

The present invention discloses a visual stimulation method, a brain-computer training method, and a brain-computer training system, comprising generating a visual stimulation scene and superimposing a guide symbol at the center of each stimulation element of the visual stimulation scene to direct the user's attentional resources to the guide symbol; the user only needs to attend to the guide symbol, without recognizing or judging the stimulation elements. By superimposing a guide symbol at the center of the stimulation elements of the visual stimulation scene and directing the user's attentional resources to the guide symbol, the present invention allows the user to successfully control the brain-computer interface at a relatively low attention level; and, as training proceeds, the required attention level can be adjusted, thereby achieving progressive attention training. The visual stimulation interaction scenes of the present invention are more complex and engaging than traditional interaction scenes controlled on the basis of concentration, and can achieve training effects superior to those of existing methods.

Description

Visual stimulation method, brain-computer training method and brain-computer training system

Technical Field
The present invention relates to the technical field of bioelectrical signal processing, and in particular to a visual stimulation method, a brain-computer training method, and a brain-computer training system.
Background Art
Attention, a fundamental cognitive function of the human brain, manifests as the brain's ability to select and filter perceptual information from the visual, auditory, tactile, and other sensory pathways, and to maintain the attended sensory information; it directly affects higher-level cognitive processing such as executive control, learning, and memory. Attention training is a gradual process. In traditional cognitive function training, a series of behavioral tasks targeting attention, executive control, learning, memory, and so on is used to monitor and promote audiovisual, motor, and other interactions between the person and the environment, with the goal of progressively improving the attention level and other cognitive functions of users, especially patients with attention disorders. Cognitive function training based on EEG signals and brain-computer interaction provides a new interaction modality for this training process, shows greater advantages in concentration, immersion, and biofeedback regulation, and is suitable for integration in virtual/augmented/mixed reality (VR/AR/MR) environments.
The main shortcomings of existing brain-computer training techniques are as follows. First, although methods based on brain-computer interfaces can be used in complex control scenarios, existing techniques require the user to maintain a high attention level throughout, which easily causes frustration; it is therefore difficult for people with attention disorders to achieve good training results. Second, biofeedback methods that perform control based solely on changes in EEG rhythms offer only a single control modality with low brain information output during interactive training, so they cannot accomplish complex control, which reduces the interest of the interaction.
Summary of the Invention
In view of these shortcomings, the main purpose of the present invention is to provide a visual stimulation method, describing an ERP brain-computer interaction technique that places low demands on the brain's attentional resources. It can be used in interaction scenarios more complex and engaging than traditional concentration-based control, while allowing the user to control the brain-computer interface at a relatively low attention level; and, as training proceeds, the required attention level can be adjusted, thereby achieving progressive cognitive training with effects superior to those of other existing cognitive training methods.
To achieve the above purpose, the present invention provides a visual stimulation method, comprising:
generating a visual stimulation scene, and superimposing a guide symbol at the center of each stimulation element of the visual stimulation scene to direct the user's attentional resources to the guide symbol;
when the visual stimulation scene is presented, the stimulation elements change their original form while the guide symbol is unaffected; the user only needs to attend to the guide symbol, without recognizing or judging the stimulation elements.
Further, during presentation of the visual stimulation scene, the guide symbol may change at any moment, and the user judges whether the guide symbol has changed.
Further, when multiple stimulation elements exist in the visual stimulation scene, the stimulation elements are encoded according to a predetermined rule, and the corresponding stimulation elements are presented sequentially according to the predetermined rule.
Further, the visual stimulation scene is presented in 3D or 2D form, and includes at least a matching scene and an elimination scene, wherein:
in the matching scene, the user searches for a specified target stimulation element among several stimulation elements;
in the elimination scene, the user gazes at and eliminates target stimulation elements one by one from among several stimulation elements.
The present invention also provides a brain-computer training method, comprising:
performing visual stimulation using any one of the visual stimulation methods described above;
acquiring the user's EEG signals in real time, and marking events according to the fed-back trigger (trig) signal;
receiving the EEG signals for analysis and decoding;
updating the visual scene content according to the decoding results.
Further, the method also includes attention assessment, which specifically includes: applying a fast Fourier transform to the acquired EEG signals to estimate, in real time, the EEG power spectrum within a time window of a specific length extending backward from the current moment; computing the in-band energies of the β, α, and θ bands and taking (α+θ)/β as the concentration value, where a smaller ratio indicates a higher attention level; and assessing the attention level using time windows of different lengths, the attention level including an instantaneous attention level and a long-term attention level.
Further, visual stimulation scenes with multiple difficulty levels are provided, and, according to the assessed attention level of the user, the visual stimulation scene of the level corresponding to the current attention level is adjusted and presented in real time.
Further, the step of receiving the EEG signals for analysis includes:
band-pass filtering the EEG signals acquired in real time;
filtering in real time using a correlated spatial projection filtering algorithm;
segmenting the filtered EEG signals into data segments of preset length according to the event markers.
Further, the step of receiving the EEG signals for decoding includes:
decoding the data segments with a deep neural network decoder, the structure of which is, layer by layer: an input layer, a temporal convolution layer, a spatial convolution layer, an average pooling layer, a temporal convolution layer, a fully connected layer, an average pooling layer, and an output layer.
The present invention also provides a brain-computer training system, comprising:
an EEG signal collector, for acquiring the user's EEG signals in real time and marking events according to the fed-back trigger signal;
a display device, for presenting visual stimulation according to the above method to the user and feeding back the trigger signal;
a data storage device, for storing visual stimulation data and EEG signal data;
a control terminal, for obtaining stimulation materials from the data storage device, generating a visual stimulation scene, and sending it to the display screen to present the visual stimulation; and, at the same time, receiving EEG signals from the EEG signal collector in real time for analysis and decoding, updating the visual scene content according to the decoding results, and assessing the attention level.
In the above technical solution of the present invention, by superimposing a guide symbol at the center of the stimulation elements of the visual stimulation scene and directing the user's attentional resources to the guide symbol, the user is allowed to successfully control the brain-computer interface at a relatively low attention level; and, as training proceeds, the required attention level can be adjusted, thereby achieving progressive attention training. The visual stimulation interaction scenes of the present invention are more complex and engaging than traditional interaction scenes controlled on the basis of concentration, and can achieve training effects superior to those of existing methods.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; for a person of ordinary skill in the art, other drawings can be obtained from the structures shown in these drawings without creative effort.
Fig. 1 is a logic block diagram of a brain-computer training method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a brain-computer training system according to an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
It should be noted that all directional indications (such as up and down) in the embodiments of the present invention are only used to explain the relative positional relationships and movements between components in a particular posture (as shown in the drawings); if that particular posture changes, the directional indication changes accordingly.
In addition, the technical solutions of the various embodiments of the present invention may be combined with each other, but only on the basis that a person of ordinary skill in the art can implement them; when a combination of technical solutions is contradictory or infeasible, such a combination should be considered non-existent and outside the protection scope claimed by the present invention.
Brain-computer interaction technology can rely on the analysis of the EEG waves produced by the human brain to bypass the peripheral neural pathways and realize direct interaction between the brain and the surrounding environment. Since no limb movement is involved, brain-computer interaction can replace limb responses in traditional cognitive training of attention, executive control, learning, and memory, helping to improve concentration, while the EEG biofeedback mechanism improves the efficacy of cognitive training.
Non-invasive visual brain-computer interfaces are a major branch of brain-computer interaction technology: scalp EEG is recorded by sensors such as Ag/AgCl electrodes or gold-plated electrodes placed on the scalp, a low signal-to-noise, non-invasive method of recording brain electrical activity. Other approaches include invasive brain-computer interfaces based on implanted electrodes such as Utah arrays, which however carry certain surgical risks. Visual brain-computer interfaces fall mainly into two types, event-related potential (ERP) interfaces and steady-state visual evoked potential (SSVEP) interfaces, both of which achieve interaction by analyzing the EEG evoked by target flicker. An ERP brain-computer interface mainly detects the ERP response evoked by a single flash of the visual stimulus, whereas SSVEP requires the visual stimulus to flicker repeatedly at a certain frequency and detects the evoked steady-state periodic wave at the same frequency/phase as the flicker; the two differ in how the signals are generated and in their signal characteristics. In general, ERP signals are harder to detect than SSVEP, and the communication rate is also lower. However, ERP brain-computer interfaces have significant advantages over SSVEP interfaces in the following respects: first, the flicker frequency is far lower than that of SSVEP, making visual fatigue less likely; second, unlike SSVEP, which mainly relies on occipital visual cortex activity, the ERP signals on which ERP interfaces depend have been shown to be closely related to cognitive processing in multiple brain regions beyond the occipital lobe, such as the frontoparietal and temporal lobes, and their signal strength and detection performance directly reflect the capability of the associated cognitive processing.
However, existing ERP brain-computer interfaces require the user to maintain a high attention level during interaction, and are therefore difficult to apply in populations with attention disorders such as autism or childhood ADHD. For example, when using an ERP speller based on flash or face-image stimulation, the user must continuously attend to the flashing of the character to be spelled and silently count its flashes. In experiments with healthy subjects, participants reported that correct character output was possible only at a high attention level; slightly relaxing attention could cause spelling errors, easily producing frustration.
Therefore, referring to Fig. 1, the present invention describes an ERP brain-computer interaction technique with low demands on the brain's attentional resources, usable in interaction scenarios more complex and engaging than traditional concentration-based control, while allowing the user to control the brain-computer interface at a relatively low attention level; and, as training proceeds, the required attention level can be adjusted, thereby achieving progressive cognitive training with effects superior to those of other existing cognitive training methods.
The present invention provides a visual stimulation method. Unlike traditional ERP brain-computer interfaces, which require identification of the visual stimulation target, the present invention presents visual stimulation in a guided manner that does not require the user to identify the stimuli and therefore demands less attention.
A guide symbol is superimposed at the center of each stimulation element of the generated visual stimulation scene to direct the user's attentional resources to the guide symbol. Since no identification is needed during stimulus presentation, a separate test may be carried out to select the guide symbol. As an example, the guide symbol may be a yellow cross ("+") or another attention-attracting symbol. The present invention does not limit the form of the guide symbol; it is not restricted to a cross, and other symbols may be chosen, provided they do not interfere with the user's accurate perception of the background stimulus pictures.
When the visual stimulation scene is presented, the stimulation elements change their original form and are presented in a variable pattern (e.g., replaced by other stimulus images), while the guide symbol is unaffected; the user only needs to attend to the guide symbol, without recognizing or judging the stimulation elements.
In one embodiment of the present invention, during presentation of the visual stimulation scene, the guide symbol may change at any moment, and the user judges whether the guide symbol has changed. As an example, the guide symbol may change from a yellow cross to a red cross, or remain unchanged, e.g., stay a yellow cross throughout; the present invention does not limit the manner in which the guide symbol changes.
If the user needs to choose among multiple visual stimuli, the stimuli are presented one after another at certain intervals. To obtain good signal quality, a stimulus presentation interval of 100-500 ms is recommended. As an example, suppose the user needs to select one disc from several mutually separated discs displayed on the screen; the user first finds and gazes at that disc. A yellow cross guide symbol is displayed at the center of each disc. When stimulus presentation begins, each disc is momentarily replaced, at certain time intervals, by a different graphic/image stimulus and then restored to its original shape. The user's task is to judge whether the yellow cross turns red (or another specified color), without identifying the disc itself or the replacing graphic/image stimulus, until the system makes a selection decision from the EEG signals. Since the guide symbol changes very slowly, or not at all, this task requires far fewer attentional resources than traditional methods.
In one embodiment of the present invention, when multiple stimulation elements exist in the visual stimulation scene, the stimulation elements are encoded according to a predetermined rule and the corresponding stimulation elements are presented sequentially according to that rule. As an example, consider row/column encoding: suppose there are 20 options (discs) on the screen. Even though these options may be randomly distributed across different positions on the screen, we can map them into a virtual matrix of 4 rows and 5 columns, one option per cell. Each time, one row or one column of options is randomly selected and stimulated synchronously. In this way only 4+5=9 stimulations are needed to traverse all 20 options, improving presentation efficiency. When the computer makes its decision, it first determines the attended row and column from the ERP signals; their intersection is the target option. Other, more efficient encodings, such as binomial encoding, may also be used; the present invention does not specifically limit the encoding scheme.
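The row/column encoding described above can be sketched as follows. This is a minimal pure-Python illustration; the grid mapping and the simulated ERP scores are assumptions for demonstration, not fixed by the invention:

```python
def row_column_groups(n_rows, n_cols):
    """Map option indices 0..n_rows*n_cols-1 onto a virtual grid and return
    the row and column flash groups (n_rows + n_cols groups in total)."""
    grid = [[r * n_cols + c for c in range(n_cols)] for r in range(n_rows)]
    cols = [[grid[r][c] for r in range(n_rows)] for c in range(n_cols)]
    return grid + cols

def decode_target(groups, scores, n_rows):
    """Intersect the best-scoring row group and column group; with row/column
    encoding the intersection is the single attended option."""
    row = max(range(n_rows), key=lambda i: scores[i])
    col = max(range(n_rows, len(groups)), key=lambda i: scores[i])
    (target,) = set(groups[row]) & set(groups[col])
    return target

groups = row_column_groups(4, 5)                    # 20 options, 4 + 5 = 9 flashes
scores = [1.0 if 7 in g else 0.0 for g in groups]   # pretend the ERP decoder scored option 7's row and column highest
print(len(groups), decode_target(groups, scores, 4))  # 9 7
```

Nine flash groups cover all 20 options, and the intersection of the decoded row and column recovers the gazed-at option, as the text describes.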
In one embodiment of the present invention, attractive interaction scenes may be designed following a traditional cognitive training procedure; that is, the visual stimulation scene may be presented in 3D or 2D form and may include at least a matching scene and an elimination scene, wherein:
in the matching scene, the user searches for a specified target stimulation element among several stimulation elements; as an example, several targets are placed against a comfortable, natural background (e.g., different fruits placed at different positions in the grass), and the user must search for a specified target (e.g., an apple);
in the elimination scene, the user gazes at and eliminates target stimulation elements one by one; as an example, several targets are placed against a comfortable, natural background, and the user must gaze at each target and eliminate them one by one to receive a reward.
In one embodiment of the present invention, as shown in Fig. 2, the present invention also provides a brain-computer training method, comprising:
performing visual stimulation using any one of the visual stimulation methods described above;
acquiring the user's EEG signals in real time and marking events according to the fed-back trigger signal;
receiving the EEG signals for analysis and decoding;
wherein the receiving and analyzing of the EEG signals includes: for the EEG waves acquired in real time, the control terminal marks the EEG data according to the trigger signals generated when each stimulus is presented on the display, and then sends the data to the computing unit for signal processing and decoding.
Specifically, in Fig. 2, the trig connection from the display device to the EEG signal collector is shown by a one-way solid line; the display device and the control terminal are connected bidirectionally for data exchange; and the one-way dashed line indicates triggering via the VBL (vertical blanking) signal.
After receiving the data, the computing unit band-pass filters the EEG signals acquired in real time. As an embodiment of the present invention, the acquired real-time signal is band-pass filtered from 0.5 Hz to 40 Hz, which can be implemented by cascading a 3rd-order high-pass Butterworth digital filter with a 0.5 Hz cutoff and a 5th-order low-pass Butterworth digital filter with a 40 Hz cutoff.
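A sketch of this filter cascade using SciPy is shown below; the 1000 Hz sampling rate follows the acquisition requirement stated later in the text, and the test signals are illustrative:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1000  # Hz; the invention requires a sampling rate of at least 1000 Hz

# 3rd-order high-pass Butterworth, 0.5 Hz cutoff
b_hp, a_hp = butter(3, 0.5, btype="highpass", fs=fs)
# 5th-order low-pass Butterworth, 40 Hz cutoff
b_lp, a_lp = butter(5, 40.0, btype="lowpass", fs=fs)

def bandpass(eeg):
    """Apply the two Butterworth filters in cascade (causal, so usable for
    real-time streaming, e.g. via lfilter's zi state between chunks)."""
    return lfilter(b_lp, a_lp, lfilter(b_hp, a_hp, eeg))

# A 10 Hz alpha-band tone lies inside the passband; a 50 Hz mains tone
# lies above the 40 Hz low-pass cutoff and is attenuated.
t = np.arange(0, 2.0, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)
mains = np.sin(2 * np.pi * 50 * t)
filtered_alpha = bandpass(alpha)
filtered_mains = bandpass(mains)
```

A 5th-order filter rolls off gradually, so the 50 Hz tone is attenuated rather than removed outright; a higher order or a notch filter would sharpen the cutoff if needed.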
Real-time filtering is then performed with a correlated spatial projection filtering algorithm. As an embodiment of the present invention, let x(t) ∈ R^{n×1} be the EEG signal vector acquired at the current moment t, and u ∈ R^{n×p} be the filter coefficients, where n is the number of electrodes and p the number of latent variables; the filtered signal is y(t) = u^T·x(t), y ∈ R^{p×1}, where the superscript T denotes matrix transposition. u is obtained by training.
As an embodiment of the present invention, the correlated spatial projection filter coefficients u are trained as follows:
Record k1 target-stimulus segments and k0 non-target-stimulus segments, denoted S ∈ R^{n×m×k1} and N ∈ R^{n×m×k0} respectively, where n is the number of electrodes and m the length of a single segment. The values of k1 and k0 affect the performance of the trained filter; a k1 of at least 300 is recommended, with k0 taken as all non-target segments corresponding to the chosen k1.
Compute A ∈ R^{n×m} and B ∈ R^{n×m}, the averages of S and N over their k1 and k0 segments respectively, together with the covariance matrix CXX of the pooled data;
Compute Ccc = (A−B)·(A−B)^T;
Compute L = chol(CXX + ξ·I), where the function chol(A) denotes the Cholesky decomposition of matrix A and I is the identity matrix; if CXX is non-singular, then ξ = 0, otherwise ξ takes a non-zero value of small absolute value, e.g., ξ = 0.01;
Compute invL = inv(L), where the function inv(A) denotes the inverse of matrix A;
Compute [V, D] = eig(invL^T·Ccc·invL), where eig(A) denotes the eigendecomposition of matrix A, with the resulting eigenvalues and corresponding eigenvectors sorted in descending order of eigenvalue; each column of matrix V represents one eigenvector obtained from the decomposition, and each element of vector D represents the eigenvalue corresponding to that eigenvector;
Compute U = invL·V;
Take the first p columns of U to form the matrix u ∈ R^{n×p};
u is the coefficient matrix of the correlated spatial projection filter.
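The training recipe above can be sketched in NumPy. Two assumptions are made explicit here: A and B are taken as the per-class average templates and CXX as the covariance of the pooled samples (the averaging step's formula is not spelled out above), and NumPy's `cholesky` returns the lower factor, so the transposes differ slightly from MATLAB-style `chol` notation:

```python
import numpy as np

def train_spatial_filter(S, N, p, xi=0.01):
    """Sketch of correlated spatial projection filter training.
    S: (n, m, k1) target segments; N: (n, m, k0) non-target segments.
    Returns u with shape (n, p)."""
    n = S.shape[0]
    A, B = S.mean(axis=2), N.mean(axis=2)               # class templates (n, m)
    X = np.concatenate([S, N], axis=2).reshape(n, -1)   # pool every sample
    Cxx = np.cov(X)                                     # (n, n) data covariance
    Ccc = (A - B) @ (A - B).T                           # between-class matrix
    if np.linalg.matrix_rank(Cxx) < n:                  # regularize if singular
        Cxx = Cxx + xi * np.eye(n)
    L = np.linalg.cholesky(Cxx)                         # Cxx = L @ L.T (lower)
    invL = np.linalg.inv(L)
    w, V = np.linalg.eigh(invL @ Ccc @ invL.T)          # symmetric eigendecomposition
    order = np.argsort(w)[::-1]                         # descending eigenvalues
    U = invL.T @ V[:, order]
    return U[:, :p]

rng = np.random.default_rng(7)
S = rng.standard_normal((4, 50, 30))   # toy data: n=4 electrodes, m=50 samples
N = rng.standard_normal((4, 50, 60))
u = train_spatial_filter(S, N, p=2)
print(u.shape)  # (4, 2)
```

By construction the projected components satisfy u^T·CXX·u = I, i.e. the filter whitens the data while ordering components by their between-class (target vs. non-target) separation.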
According to the event markers, the filtered EEG signal is segmented into data segments of 600-1000 ms.
The obtained data segments are sent to the deep neural network decoder for decoding.
The step of receiving the EEG signals for decoding includes decoding the data segments with a deep neural network decoder. As an embodiment of the present invention, the structure of the deep neural network decoder is described layer by layer as follows (the output of each layer is the input of the next):
Input layer: input signal X ∈ R^{p×m}, where p is the number of latent variables and m the length of the segmented data along the time dimension;
Layer 1: a temporal convolution layer, consisting of 16 one-dimensional convolution kernels operating point by point along the time dimension, with kernel length fs/2, where fs is the sampling rate;
Layer 2: a spatial convolution layer, consisting of 1 one-dimensional convolution kernel weighting along the spatial (i.e., latent-variable) dimension, with kernel length equal to the number of latent variables p and a stride of 0;
Layer 3: an average pooling layer, which averages the input every 4 samples along the time dimension as its output;
Layer 4: a temporal convolution layer, consisting of 1 one-dimensional convolution kernel operating point by point along the time dimension, with kernel length fs/8;
Layer 5: a fully connected layer, consisting of 1 fully connected network;
Layer 6: an average pooling layer, which averages the input every 8 samples along the time dimension as its output;
Layer 7: an output layer, which flattens and concatenates the input data into a one-dimensional vector; the output is a two-element one-hot binary vector representing the target and non-target stimulus classes. Input and output are fully connected.
The training parameters of the decoder are set as follows: a cross-entropy loss function is used and, to avoid the impact of imbalanced target and non-target sample counts, the respective losses are weighted by the ratio of target to non-target sample numbers; the dropout rate is 0.5, i.e., 50% of the weights are dropped; training is iterated 300-500 times.
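To make the layer bookkeeping concrete, the following NumPy sketch traces one forward pass with random, untrained weights. The text leaves several details unspecified, so the 'same' convolution padding, the tanh activation, and the fully connected layer acting along the time axis are illustrative assumptions, as are the toy dimensions (p = 3 latent variables, a 700 ms segment at fs = 1000 Hz):

```python
import numpy as np

def avg_pool(x, k):
    """Average every k samples along the time (last) axis; remainder dropped."""
    t = x.shape[-1] // k
    return x[..., : t * k].reshape(*x.shape[:-1], t, k).mean(axis=-1)

def decoder_forward(X, fs=1000, n_kernels=16, seed=0):
    """One forward pass through the 7-layer structure with random weights."""
    rng = np.random.default_rng(seed)
    p, m = X.shape
    # Layer 1: 16 temporal kernels of length fs/2, each applied to every channel.
    h1 = []
    for _ in range(n_kernels):
        k = rng.standard_normal(fs // 2)
        h1.append([np.convolve(X[c], k, mode="same") for c in range(p)])
    h1 = np.asarray(h1)                                  # (16, p, m)
    # Layer 2: one spatial kernel of length p (weighted sum over channels).
    h2 = np.tensordot(h1, rng.standard_normal(p), axes=([1], [0]))  # (16, m)
    # Layer 3: average pooling over every 4 time samples.
    h3 = avg_pool(h2, 4)                                 # (16, m//4)
    # Layer 4: one temporal kernel of length fs/8.
    k4 = rng.standard_normal(fs // 8)
    h4 = np.stack([np.convolve(row, k4, mode="same") for row in h3])
    # Layer 5: fully connected layer (here: a dense map along the time axis).
    t = h4.shape[-1]
    h5 = np.tanh(h4 @ (rng.standard_normal((t, t)) / np.sqrt(t)))
    # Layer 6: average pooling over every 8 time samples.
    h6 = avg_pool(h5, 8)                                 # (16, t//8)
    # Layer 7: flatten and fully connect to a 2-class (target/non-target) softmax.
    v = h6.reshape(-1)
    z = v @ (rng.standard_normal((v.size, 2)) / np.sqrt(v.size))
    e = np.exp(z - z.max())
    return e / e.sum()

probs = decoder_forward(np.random.default_rng(1).standard_normal((3, 700)))
print(probs.shape)  # (2,)
```

A trained implementation would use a deep learning framework with the cross-entropy loss, class weighting, and 0.5 dropout described above; this sketch only verifies that the stated layer sequence produces a two-element class distribution.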
Further, the brain-computer training method also includes attention assessment, which specifically includes: applying a fast Fourier transform to the acquired EEG signals to estimate, in real time, the EEG power spectrum within a time window of a specific length extending backward from the current moment; computing the in-band energies of the β, α, and θ bands and taking (α+θ)/β as the concentration value, where a smaller ratio indicates a higher attention level; and assessing the attention level using time windows of different lengths, the attention level including an instantaneous attention level and a long-term attention level.
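The band-power computation can be sketched as follows; the band edges are common textbook values, which the text does not fix, and the two synthetic windows are illustrative:

```python
import numpy as np

BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def concentration_index(window, fs):
    """(alpha + theta) / beta from the FFT power spectrum of one time window;
    a smaller value indicates a higher attention level."""
    spec = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    power = {b: spec[(freqs >= lo) & (freqs < hi)].sum()
             for b, (lo, hi) in BANDS.items()}
    return (power["alpha"] + power["theta"]) / power["beta"]

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
attentive = np.sin(2 * np.pi * 20 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)  # beta-dominant
relaxed = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)    # alpha-dominant
ci_focus = concentration_index(attentive, fs)
ci_relax = concentration_index(relaxed, fs)  # beta-dominant window yields the smaller ratio
```

Applying the same function to a short and a long window gives the instantaneous and long-term attention levels described above.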
In one embodiment of the present invention, the brain-computer training method provides visual stimulation scenes with multiple difficulty levels, and, according to the assessed attention level of the user, adjusts and presents in real time the visual stimulation scene of the level corresponding to the current attention level. The progressive training of the present invention can be understood as different difficulty levels. Ten difficulty levels, 1 to 10, are defined according to the number of stimulus repetitions: level 1 means each stimulus is presented only once, representing the highest difficulty; level 10 means each stimulus is presented 10 times, representing the lowest difficulty.
Specifically, the present invention marks events according to the fed-back trigger signal and updates the visual scene content according to the decoding results. As an embodiment of the present invention, the concentration value reflecting the user's attention level, computed from the recorded EEG signals, may optionally be fed back to the user on the display. The feedback may take the form of a bar chart indicating attention strength, or changes in graphic color, among other forms. The user may choose a training level according to the attention level, or the system may adjust the training difficulty and update the scene based on the computed concentration value, thereby achieving progressive cognitive training in which the required attention level is adjusted according to the fed-back concentration.
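A sketch of such concentration-driven difficulty adaptation is given below; the calm/strained thresholds and the linear mapping are assumptions for illustration, since the invention only fixes the ten repetition-based levels:

```python
def difficulty_level(concentration, calm=1.0, strained=3.0, n_levels=10):
    """Map the (alpha+theta)/beta concentration value (lower = more attentive)
    to a stimulus-repetition difficulty level: level 1 (one presentation,
    hardest) for attentive users, level 10 (ten presentations, easiest)
    for strained ones. The calm/strained thresholds are illustrative."""
    if concentration <= calm:
        return 1
    if concentration >= strained:
        return n_levels
    frac = (concentration - calm) / (strained - calm)
    return 1 + round(frac * (n_levels - 1))
```

The returned level doubles as the per-stimulus repetition count, so the system can raise the difficulty (fewer repetitions) as the user's measured attention improves.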
In one embodiment of the present invention, as shown in Fig. 2, the present invention also provides a brain-computer training system, comprising:
an EEG signal collector, for acquiring the user's EEG signals in real time and marking events according to the fed-back trigger signal;
Specifically, the wearable EEG signal collector of the present invention consists of a main frame, electrodes, a control circuit board, an antenna, and a battery. The main frame may be made of rigid, lightweight acrylonitrile-butadiene-styrene (ABS) material, fabricated with 3D printing. The electrodes are flexible dry Ag/AgCl electrodes, ensuring effective contact with the scalp. The control circuit board may be an open-source wireless EEG acquisition board well known in the field, such as an OpenBCI wireless acquisition board, or a commercial embedded acquisition board such as an 8-channel BCIduino Bluetooth acquisition module; it may also be built from discrete components, which may include an operational amplifier circuit, an A/D converter, and a Bluetooth transceiver. The present invention requires a sampling rate of no less than 1000 Hz and a sampling depth of no less than 24 bits.
The wearable EEG signal collector of the present invention uses a few-electrode design, including but not limited to 4 basic signal electrodes and 1 ground electrode. The 4 basic signal electrodes are placed one each at the forehead, the vertex, and the left and right occipito-temporal regions, corresponding respectively to the FPz, Cz, PO7, and PO8 positions of the international 10-20 EEG system. The ground electrode is placed midway between FPz and Cz. The electrode at FPz is mainly used to record θ and β waves, for assessing the attention level from EEG rhythms in the present invention; the Cz, PO7, and PO8 electrodes are used for brain-computer interface decoding. When the user's condition is poor or environmental interference is strong, additional recording electrodes may be placed within about 2 cm of these electrodes, supplemented by electrodes at midline positions such as Pz and Oz. The number of electrodes required by the present invention is lower than that of other ERP brain-computer interfaces, reducing equipment cost and complexity; moreover, the EEG acquisition of the present invention does not involve the peripheral nerves, effectively avoiding the attentional distraction caused by limb movement in other interaction modalities.
a display device, for presenting visual stimulation according to the above method to the user and feeding back the trigger signal;
The display device of the present invention may be a display screen or another display modality such as virtual/augmented/mixed reality (VR/AR/MR). The display device presents the visual stimulation to the user and should also provide a trigger signal for stimulus presentation, indicating the exact moment at which the visual stimulus is actually rendered on the display device, for marking stimulus events. Since the ERP signals on which ERP brain-computer interfaces rely are time-locked, the timing precision of stimulus presentation is critical for EEG decoding. The present invention requires a trigger timing precision of less than 1 ms. The vertical synchronization signal provided by the display device may be used as the trigger signal, or other methods such as a photocell may be used to obtain it, provided the above precision requirement is met. As an embodiment, low-latency virtual reality (VR) glasses can be used to shield the user from environmental interference and maintain concentration.
a data storage device, for storing visual stimulation data and EEG signal data;
The data storage device of the present invention may be implemented with a high-speed solid-state storage module, such as a commercially available PCIe 4.0 NVMe solid-state drive module. The storage capacity depends on the data volume of the actual application; a solid-state storage module of 128 GB or more may be chosen and combined with a traditional disk storage device to reduce cost. Visual stimulation materials include but are not limited to the following three categories: a) object pictures, i.e., photographs of objects existing in daily life and nature; b) contour pictures, i.e., pictures obtained by processing photographed object pictures with computer graphics methods to extract their key contours and textures; and c) synthetic pictures, i.e., pictures drawn manually or generated by computer.
a control terminal, for obtaining stimulation materials from the data storage device, generating a visual stimulation scene, and sending it to the display screen to present the visual stimulation; receiving EEG signals from the EEG signal collector in real time for analysis and decoding, updating the visual scene content according to the decoding results, and assessing the attention level. The control terminal of the present invention includes a computing unit composed of parallel computing modules such as a high-performance digital signal processor (DSP) and a graphics processing unit (GPU), used for the analysis, processing, and decoding of the EEG signals.
In the above technical solution of the present invention, by superimposing a guide symbol at the center of the stimulation elements of the visual stimulation scene and directing the user's attentional resources to the guide symbol, the user is allowed to successfully control the brain-computer interface at a relatively low attention level; and, as training proceeds, the required attention level can be adjusted, thereby achieving progressive attention training. The visual stimulation interaction scenes of the present invention are more complex and engaging than traditional interaction scenes controlled on the basis of concentration, and can achieve training effects superior to those of existing methods.
The system and method of the present invention are not limited to cognitive training; they can also be used for text output, communication, environmental control, and other purposes in specific environments.
The system and method of the present invention are not limited to non-invasive EEG signals; they are equally applicable to neural activity signals recorded invasively, as well as magnetoencephalography (MEG), near-infrared spectroscopy (NIRS), functional magnetic resonance imaging (fMRI), and other signals evoked using the method of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural transformation made using the contents of the specification and drawings under the inventive concept of the present invention, or any direct/indirect application in other related technical fields, is included in the patent protection scope of the present invention.

Claims (10)

  1. A visual stimulation method, characterized in that a visual stimulation scene is generated, a guide symbol is superimposed at the center of each stimulation element of the visual stimulation scene, and the user's attentional resources are directed to the guide symbol;
    when the visual stimulation scene is presented, the stimulation elements change their original form while the guide symbol is unaffected; the user only needs to attend to the guide symbol, without recognizing or judging the stimulation elements.
  2. The visual stimulation method of claim 1, characterized in that, during presentation of the visual stimulation scene, the guide symbol changes at any moment, and the user judges whether the guide symbol has changed.
  3. The visual stimulation method of claim 2, characterized in that, when multiple stimulation elements exist in the visual stimulation scene, the stimulation elements are encoded according to a predetermined rule, and the corresponding stimulation elements are presented sequentially according to the predetermined rule.
  4. The visual stimulation method of claim 2, characterized in that the visual stimulation scene is presented in 3D or 2D form and includes at least a matching scene and an elimination scene, wherein:
    in the matching scene, the user searches for a specified target stimulation element among several stimulation elements;
    in the elimination scene, the user gazes at and eliminates target stimulation elements one by one from among several stimulation elements.
  5. A brain-computer training method, characterized by comprising:
    performing visual stimulation using the visual stimulation method of any one of claims 1-4;
    acquiring the user's EEG signals in real time and marking events according to the fed-back trigger signal;
    receiving the EEG signals for analysis and decoding;
    updating the visual scene content according to the decoding results.
  6. The brain-computer training method of claim 5, characterized by further comprising attention assessment, which specifically includes: applying a fast Fourier transform to the acquired EEG signals to estimate, in real time, the EEG power spectrum within a time window of a specific length extending backward from the current moment; computing the in-band energies of the β, α, and θ bands and taking (α+θ)/β as the concentration value, where a smaller ratio indicates a higher attention level; and assessing the attention level using time windows of different lengths, the attention level including an instantaneous attention level and a long-term attention level.
  7. The brain-computer training method of claim 6, characterized in that visual stimulation scenes with multiple difficulty levels are provided, and, according to the assessed attention level of the user, the visual stimulation scene of the level corresponding to the current attention level is adjusted and presented in real time.
  8. The brain-computer training method of claim 5, characterized in that the step of receiving the EEG signals for analysis includes:
    band-pass filtering the EEG signals acquired in real time;
    filtering in real time using a correlated spatial projection filtering algorithm;
    segmenting the filtered EEG signals into data segments of preset length according to the event markers.
  9. The brain-computer training method of claim 8, characterized in that the step of receiving the EEG signals for decoding includes:
    decoding the data segments with a deep neural network decoder, the structure of which is, layer by layer: an input layer, a temporal convolution layer, a spatial convolution layer, an average pooling layer, a temporal convolution layer, a fully connected layer, an average pooling layer, and an output layer.
  10. A brain-computer training system, characterized by comprising:
    an EEG signal collector, for acquiring the user's EEG signals in real time and marking events according to the fed-back trigger signal;
    a display device, for presenting to the user the visual stimulation of the method of any one of claims 1-4;
    a data storage device, for storing visual stimulation data and EEG signal data;
    a control terminal, for obtaining stimulation materials from the data storage device, generating a visual stimulation scene, and sending it to the display screen to present the visual stimulation; and, at the same time, receiving EEG signals from the EEG signal collector in real time for analysis and decoding, updating the visual scene content according to the decoding results, and assessing the attention level.
PCT/CN2023/133431 2022-11-30 2023-11-22 Visual stimulation method, brain-computer training method and brain-computer training system WO2024114480A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211520659.0 2022-11-30
CN202211520659.0A CN115981458A (zh) 2022-11-30 2022-11-30 Visual stimulation method, brain-computer training method and brain-computer training system

Publications (1)

Publication Number Publication Date
WO2024114480A1 true WO2024114480A1 (zh) 2024-06-06

Family

ID=85965563

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/133431 WO2024114480A1 (zh) 2022-11-30 2023-11-22 Visual stimulation method, brain-computer training method and brain-computer training system

Country Status (2)

Country Link
CN (1) CN115981458A (zh)
WO (1) WO2024114480A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115981458A (zh) * 2022-11-30 2023-04-18 中国科学院深圳先进技术研究院 一种视觉刺激方法、脑机训练方法和脑机训练系统
CN117152012B (zh) * 2023-09-05 2024-05-03 南京林业大学 孤独症人群视觉降噪的智能视觉处理系统、方法及设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9629976B1 (en) * 2012-12-21 2017-04-25 George Acton Methods for independent entrainment of visual field zones
CN107929007A (zh) * 2017-11-23 2018-04-20 北京萤视科技有限公司 一种利用眼动追踪和智能评估技术的注意力和视觉能力训练系统及方法
CN109271020A (zh) * 2018-08-23 2019-01-25 西安交通大学 一种基于眼动追踪的稳态视觉诱发脑机接口性能评价方法
CN112545517A (zh) * 2020-12-10 2021-03-26 中国科学院深圳先进技术研究院 一种注意力训练方法和终端
CN114424945A (zh) * 2021-12-08 2022-05-03 中国科学院深圳先进技术研究院 一种基于随机图形图像闪现的脑波生物特征识别系统与方法
US20220318551A1 (en) * 2021-03-31 2022-10-06 Arm Limited Systems, devices, and/or processes for dynamic surface marking
CN115981458A (zh) * 2022-11-30 2023-04-18 中国科学院深圳先进技术研究院 一种视觉刺激方法、脑机训练方法和脑机训练系统


Also Published As

Publication number Publication date
CN115981458A (zh) 2023-04-18
