CN112545517A - Attention training method and terminal - Google Patents

Attention training method and terminal

Info

Publication number
CN112545517A
CN112545517A (application CN202011437300.8A)
Authority
CN
China
Prior art keywords
eeg
user
visual
data
computing terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011437300.8A
Other languages
Chinese (zh)
Inventor
马征
詹阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011437300.8A priority Critical patent/CN112545517A/en
Publication of CN112545517A publication Critical patent/CN112545517A/en
Pending legal-status Critical Current

Classifications

    • A61B5/168 Evaluating attention deficit, hyperactivity
    • A61B5/4836 Diagnosis combined with treatment in closed-loop systems or methods
    • A61B5/6803 Head-worn items, e.g. helmets, masks, headphones or goggles
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • A61M21/00 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M2021/005 Devices or methods to cause a change in the state of consciousness by the use of a particular sense or stimulus: the sight sense, images, e.g. video
    • A61M2230/10 Electroencephalographic signals (measured parameters of the user)

Abstract

An embodiment of the invention provides an attention training method and terminal, applied to a system comprising an EEG helmet, a visual stimulator and a computing terminal, the computing terminal being connected to the EEG helmet and to the visual stimulator. The method comprises: generating visual stimulation data with the computing terminal and, based on that data, driving the visual stimulator to present stimuli to a user in a variable mode; acquiring the user's EEG signal with the EEG helmet and sending it to the computing terminal; and evaluating, by the computing terminal, the user's attention level based on the EEG signal. Because the stimuli are presented in a variable mode, the user can complete the training successfully even at a lower attention level.

Description

Attention training method and terminal
Technical Field
The present invention relates to the field of attention training technology, and in particular to an attention training method and terminal.
Background
The brain-computer interface is a new technology in the field of human-computer interaction that lets a person control targets in the surrounding environment directly from brain-wave signals. Compared with traditional interaction modes used in attention training, such as biofeedback and limb control, a brain-computer interface offers a wider variety of control modes, is less disturbed by peripheral motor control, is more likely to arouse the user's interest, and raises the user's subjective degree of participation.
A brain-computer interface obtains information directly from recorded neural activity of the human brain and uses it to control virtual objects (e.g. in games) or real objects (e.g. wheelchairs) in the external environment. Brain-computer interfaces are classified as invasive or non-invasive. An invasive brain-computer interface records brain signals with intracranially implanted electrodes, such as Neuralink flexible electrodes or Utah electrodes, an invasive recording method; a non-invasive brain-computer interface records EEG signals with sensors placed on the scalp, such as Ag/AgCl or gold-plated electrodes, a non-invasive recording method. Invasive interfaces record high-quality signals but carry surgical risk and are mostly used for patients who require craniotomy, whereas non-invasive interfaces carry no surgical risk and are low-cost and are mostly used for healthy people or patients not undergoing craniotomy, although the signal-to-noise ratio of the recorded signal is very low. According to the underlying neural signal, conventional visual brain-computer interfaces fall into 2 main types: event-related potential (ERP) interfaces and steady-state visual evoked potential (SSVEP) interfaces. An ERP brain-computer interface relies on ERP components such as N1, P2 and P300 in the EEG signal, and existing paradigms place high demands on the user's attention level. For example, when using an ERP brain-computer-interface speller based on a flash stimulus or a face-image stimulus, the user must keep attention on the flickering of the target character to be output and silently count the number of times it flickers.
In actual measurement experiments, a healthy subject can output characters correctly only while maintaining a high attention level; relaxing attention even slightly causes misspellings, and the output rate is slow, so frustration arises easily. An SSVEP brain-computer interface detects the target the user is gazing at from the SSVEP signal evoked at the same frequency as a high-frequency flickering stimulus. However, relevant cognitive-neuroscience studies have not found a modulating effect of attention on the SSVEP signal, so an SSVEP brain-computer interface involves less attention intervention than an ERP interface and is less suitable for attention training. The present invention is therefore based on an ERP brain-computer interface.
Attention is an important higher function of the human brain: the ability to select and filter sensory information from pathways such as vision, hearing and touch, and to sustain attention on that information, directly affects higher-level conscious processing. Behavioral studies show that attention improves detection ability and shortens reaction time; neuro-electrophysiological studies show that attention increases the activity of neurons in occipital and parietal cortex. Attention training can improve concentration during problem solving, help solve problems, and reduce errors caused by lapses of attention. Good attention helps people learn and master new knowledge and skills quickly and is an indispensable capacity in special fields such as aviation, medicine and certain high-risk industries. Neurological conditions such as autism and hyperactivity are often accompanied by attention deficits, which hinder the healthy growth and development of children.
Attention training is mainly realized by strengthening visual, auditory, limb-movement and other interactions between a person and the environment. For example, "An incremental attention training method based on VR and an eye tracker" (patent publication No. CN202010085927.5) discloses a method that uses virtual-reality glasses to build a variety of attractive scenes and assesses attention with an eye tracker, judging from the eye-movement signal whether the viewpoint can be maintained effectively and grading the training level accordingly. "An EEG-based real-time human-brain attention testing and training system" (patent publication No. CN201710164162.2) discloses a method of assessing attention level from EEG signals, so that the user improves attention and performance in attention tests through real-time biofeedback (e.g. a flower blooming, a leaf growing). "An attention training method and system based on brain-computer interaction" (patent publication No. CN201611106017.0) discloses a brain-computer-interaction method that estimates concentration from changes in EEG rhythms and feeds the training result back through an intelligent terminal in a virtual level-clearing game.
The main defects of the prior art are that, during interactive training, the control mode is single, the information output from the brain is low, complex control cannot be achieved, and the interaction lacks interest. Whether concentration is evaluated from an eye-movement maintenance signal (the patent "An incremental attention training method based on VR and an eye tracker") or through biofeedback (the patents "An EEG-based real-time human-brain attention testing and training system" and "An attention training method and system based on brain-computer interaction"), the interactive object is controlled by a computed concentration value. Such a single control mode supports only simple one-dimensional control (e.g. a flower opening, a leaf growing), cannot support more complex and interesting control scenes, causes the user's interest to wane during long-term training, and impairs the training effect. By contrast, a brain-computer interface has a higher information output and can serve scenes requiring complex control.
However, existing brain-computer-interface technology requires the user to maintain a high attention level continuously and works poorly under weak attention. For healthy users this easily causes frustration and a poor interactive experience when control does not go as expected; for users with attention disorders, whose attention level is low, control is even harder, so the intended goal of attention training cannot be achieved.
Disclosure of Invention
In view of the defects of the prior art, the invention provides an attention training method and terminal. The scheme presents stimuli to the user in a variable mode, allowing the user to complete brain-computer-interface control successfully even at a lower attention level.
Specifically, the present invention proposes the following specific examples:
An embodiment of the invention provides an attention training method, applied to a system comprising an EEG helmet, a visual stimulator and a computing terminal, the computing terminal being connected to the EEG helmet and to the visual stimulator respectively; the attention training method comprises the following steps:
generating visual stimulation data through the computing terminal, and driving the visual stimulator to perform stimulation presentation on a user in a variable mode based on the visual stimulation data;
acquiring an EEG signal of the user through an EEG helmet and sending the EEG signal to the computing terminal;
evaluating, by the computing terminal, the user's attention level based on the EEG signal.
In a specific embodiment, the method further comprises the following steps:
generating feedback information based on the attention level data;
modifying the visual stimulus data based on the feedback information, so as to drive the visual stimulator to display content different from that originally specified by the visual stimulus data;
further comprising: presenting the attention level data to the user.
In a particular embodiment, the EEG helmet comprises 3 signal electrodes and 1 ground electrode; wherein, the positions of the 3 signal electrodes on the EEG helmet respectively correspond to the head top, the left occipital-temporal area and the right occipital-temporal area of the user; the location of the ground electrode on the EEG helmet corresponds to the area between the forehead and the top of the head of the user.
In a particular embodiment, the EEG helmet further comprises FPz signal electrodes, the position of the FPz signal electrodes on the EEG helmet corresponds to the forehead of the user.
In a specific embodiment, said "evaluating, by the computing terminal, the user's attention level based on the EEG signal" comprises:
acquiring, by the computing terminal, the EEG signals from the FPz signal electrodes;
determining in real time, using a fast Fourier transform, the power spectrum of the acquired EEG signal within a time window of a specific length extending back from the current instant;
calculating the ratio of the energy in the beta frequency band to the energy in the theta frequency band in the power spectrum;
the ratio is taken as the attention level, wherein the greater the ratio, the higher the attention level.
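The steps above can be sketched as follows. The window length and the band edges (theta 4-8 Hz, beta 13-30 Hz) are illustrative assumptions; the patent fixes only the FFT power spectrum and the beta/theta energy ratio:

```python
import numpy as np

def attention_level(eeg, fs, window_s=2.0):
    """Attention index from a single-channel EEG trace (e.g. FPz):
    the ratio of beta-band to theta-band energy in the FFT power
    spectrum of the most recent window."""
    n = int(window_s * fs)
    segment = np.asarray(eeg)[-n:]                 # window ending at the current instant
    power = np.abs(np.fft.rfft(segment)) ** 2      # power spectrum via fast Fourier transform
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return power[mask].sum()

    theta = band_power(4.0, 8.0)                   # theta band (assumed 4-8 Hz)
    beta = band_power(13.0, 30.0)                  # beta band (assumed 13-30 Hz)
    return beta / theta                            # larger ratio -> higher attention level
```

A signal dominated by beta-band activity thus yields a larger index than one dominated by theta-band activity, matching the monotone relation stated above.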
In a specific embodiment, the response time of the visual stimulator is less than 1 ms; the visual stimulator includes: VR glasses, AR glasses, MR glasses, displays, or projectors.
In a specific embodiment, the driving the visual stimulator to perform stimulus presentation to the user in a variable mode based on the visual stimulus data includes:
determining a plurality of targets included in the visual stimulus data;
aiming at the same target, selecting other targets different from the target, wherein the other targets comprise: a graphic or image;
and driving the visual stimulator to alternately display the target and the other targets so as to stimulate and present the user.
In a specific embodiment, the driving the visual stimulator to perform stimulus presentation to the user in a variable mode based on the visual stimulus data includes:
determining a plurality of targets included in the visual stimulus data;
arranging a plurality of selected targets in a plurality of targets in a preset mode, wherein the arranged targets are divided into a plurality of parts; each said portion comprising a plurality of said targets;
each section is displayed in turn while presenting each target in each section in a variable pattern for presentation of stimuli to the user.
In a specific embodiment, the visual stimulation data includes a plurality of test items, and different test items correspond to different difficulty levels;
the driving the visual stimulator to perform stimulus presentation to the user in a variable mode based on the visual stimulus data includes
Determining the test items included in the visual stimulus data and a difficulty rating for each of the test items;
and selecting the test items in sequence from the lowest to the highest difficulty grade, to drive the visual stimulator to present stimuli to the user in a variable mode.
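The low-to-high ordering of test items might be sketched as follows; the `(name, difficulty_grade)` pair layout is an illustrative assumption, not a structure defined by the patent:

```python
def order_test_items(test_items):
    """Return the test items sorted from the lowest to the highest
    difficulty grade, giving the presentation order described above.
    Each item is assumed (illustratively) to be a
    (name, difficulty_grade) pair."""
    return sorted(test_items, key=lambda item: item[1])
```

For example, `order_test_items([("maze", 3), ("match", 1), ("search", 2)])` yields the items graded 1, 2, 3 in that order.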
In a specific embodiment, the scene for performing the stimulus presentation is an interactive scene; the interaction scenario includes: searching and matching scenes and eliminating scenes.
In a specific embodiment, the computing terminal comprises a picture material library for generating visual stimulus data, wherein the picture material library comprises any combination of one or more of the following: object pictures, outline pictures, composite pictures.
In a particular embodiment, an EEG decoder is included in the computing terminal; the EEG decoder is trained based on the visual stimulus data;
said "assessing by said computing terminal said user's attention level based on said EEG signal" comprises:
decoding the EEG signal through the EEG decoder to obtain a decoding result;
evaluating the user's attention level based on the decoding result.
In a particular embodiment, the decoding result is a score output by the EEG decoder;
the "evaluating the attention level of the user based on the decoding result" includes:
determining a decoding probability based on the score, wherein the higher the score, the greater the corresponding decoding probability;
converting the decoding probabilities to concentration values of the visual stimulus data at a lowest level of difficulty, wherein the higher the decoding probability, the greater the corresponding concentration value;
determining a level of attention based on the concentration value.
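A minimal sketch of this score-to-concentration mapping follows. The patent fixes only the monotone relations (higher score, higher decoding probability, higher concentration value); the logistic squashing and the 0-100 concentration scale are illustrative assumptions:

```python
import math

def score_to_attention(score, max_concentration=100.0):
    """Map a raw EEG-decoder score to a decoding probability and a
    concentration value, both monotone increasing in the score."""
    probability = 1.0 / (1.0 + math.exp(-score))   # higher score -> higher probability
    concentration = probability * max_concentration  # higher probability -> higher concentration
    return probability, concentration
```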
In a specific embodiment, a trained classifier is arranged in the EEG decoder;
the "decoding the EEG signal by the EEG decoder to obtain a decoding result" includes:
decoding the EEG signal through the trained classifier to obtain the matching degree of the target concerned by the user and the target shown in the visual stimulation data at the same time point; the classifier is trained based on the visual stimulus data.
In a specific embodiment, said "decoding said EEG signal by said EEG decoder" comprises:
segmenting the EEG signal that is continuous in time into a plurality of data segments of a preset time length;
down-sampling each data segment to a sampling rate of 20 Hz, and splicing the down-sampled segments into feature vectors in the order of the electrode channels;
and leading the feature vector into the EEG decoder for decoding.
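This feature-extraction pipeline might be sketched as follows. The 1 s segment length and the plain stride-based decimation are illustrative assumptions; the patent fixes only the 20 Hz target rate and the channel-order splicing, and a practical decoder would low-pass filter before down-sampling:

```python
import numpy as np

def eeg_to_feature_vectors(eeg, fs, segment_s=1.0, target_fs=20):
    """Cut a continuous multi-channel EEG recording (channels x samples)
    into fixed-length segments, down-sample each to about 20 Hz, and
    splice each segment into one feature vector in electrode-channel
    order."""
    step = int(fs) // target_fs                    # decimation factor
    seg_len = int(segment_s * fs)
    n_segments = eeg.shape[1] // seg_len
    vectors = []
    for i in range(n_segments):
        seg = eeg[:, i * seg_len:(i + 1) * seg_len]
        seg = seg[:, ::step]                       # down-sample by striding
        vectors.append(seg.reshape(-1))            # channel 1, channel 2, ... in order
    return np.stack(vectors)
```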
In a specific embodiment, the classifier is obtained by performing supervised training through preset feature vectors;
the preset feature vectors are obtained by driving the visual stimulator with the visual stimulation data to present stimuli, recording a number of EEG signals while the user is instructed to attend to a designated target, and finally splicing the recorded EEG signals.
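The supervised calibration step might look like the following. Regularised least squares is used here as a minimal stand-in, since the patent does not fix the classifier family; the label convention (1 = attended stimulus, 0 = other stimuli) is an illustrative assumption:

```python
import numpy as np

def train_classifier(feature_vectors, labels, reg=1e-3):
    """Supervised training on calibration feature vectors recorded
    while the user is instructed to attend a designated target.
    Returns a linear weight vector (with bias)."""
    X = np.hstack([feature_vectors, np.ones((len(feature_vectors), 1))])  # add bias column
    y = 2.0 * np.asarray(labels, dtype=float) - 1.0                       # {0,1} -> {-1,+1}
    # Regularised normal equations: (X'X + reg*I) w = X'y
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)

def classifier_score(weights, feature_vector):
    """Signed score; larger means 'attended target' is more likely."""
    return float(np.append(feature_vector, 1.0) @ weights)
```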
The embodiment of the invention also provides a terminal which is applied to a system comprising an EEG helmet, a visual stimulator and a computing terminal, wherein the computing terminal is respectively connected with the EEG helmet and the visual stimulator and comprises a processor for executing the method.
In summary, the embodiments of the invention provide an attention training method and terminal, applied to a system comprising an EEG helmet, a visual stimulator and a computing terminal, the computing terminal being connected to the EEG helmet and to the visual stimulator. The method comprises: generating visual stimulation data with the computing terminal and, based on that data, driving the visual stimulator to present stimuli to a user in a variable mode; acquiring the user's EEG signal with the EEG helmet and sending it to the computing terminal; and evaluating, by the computing terminal, the user's attention level based on the EEG signal. Because stimuli are presented in a variable mode, the user can complete brain-computer-interface control successfully even at a lower attention level.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of an attention training method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a frame structure of a system in an attention training method according to an embodiment of the present invention;
fig. 3 is a schematic structural framework diagram of a terminal according to an embodiment of the present invention.
Detailed Description
Various embodiments of the present disclosure will be described more fully hereinafter. The present disclosure is capable of various embodiments and of modifications and variations therein. However, it should be understood that: there is no intention to limit the various embodiments of the disclosure to the specific embodiments disclosed herein, but rather, the disclosure is to cover all modifications, equivalents, and/or alternatives falling within the spirit and scope of the various embodiments of the disclosure.
The terminology used in the various embodiments of the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments of the present disclosure. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the present disclosure belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined in various embodiments of the present disclosure.
Example 1
Embodiment 1 of the invention discloses an attention training method applied to a system comprising an EEG helmet, a visual stimulator and a computing terminal, the computing terminal being connected to the EEG helmet and to the visual stimulator. As shown in Fig. 1, the method comprises the following steps:
Step 101, generating visual stimulation data through the computing terminal, and driving the visual stimulator, based on the visual stimulation data, to present stimuli to a user in a variable mode;
step 102, acquiring an EEG (Electroencephalogram) signal of the user through an EEG helmet, and sending the EEG signal to the computing terminal;
step 103, evaluating, by the computing terminal, the user's attention level based on the EEG signal.
Specifically, in step 103, the target the user attends to at each time point is determined from the EEG signal; the attention level is then evaluated by comparing, at each time point, the determined target with the target displayed according to the visual stimulation data: the higher the degree of matching, the higher the attention level.
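The comparison in step 103 can be sketched as follows; representing the two streams as equal-length per-time-point lists is an illustrative assumption:

```python
def matching_degree(decoded_targets, shown_targets):
    """Fraction of time points at which the target decoded from the
    EEG matches the target actually displayed by the stimulator; a
    higher fraction indicates a higher attention level."""
    assert len(decoded_targets) == len(shown_targets)
    hits = sum(d == s for d, s in zip(decoded_targets, shown_targets))
    return hits / len(shown_targets)
```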
Specifically, as shown in Fig. 2, the system of the present scheme consists of an EEG helmet, a computing terminal and a visual stimulator, where the computing terminal mainly comprises 3 components: a picture material library, a scene controller for generating visual stimulation data, and an EEG decoder. The EEG helmet is worn by the user and records the user's EEG signals in real time; the visual stimulator displays the visual stimuli; the computing terminal arranges and generates visual stimuli from the material library according to preset rules and drives the visual stimulator to present them, while receiving EEG signals from the EEG helmet in real time; finally, the EEG decoder reads out control signals from the received EEG signals combined with the timing information supplied by the scene controller, evaluates the attention level, and provides it to the scene controller for adjusting the displayed content and the feedback information.
The EEG helmet comprises 3 signal electrodes and 1 ground electrode; wherein, the positions of the 3 signal electrodes on the EEG helmet respectively correspond to the head top, the left occipital-temporal area and the right occipital-temporal area of the user; the location of the ground electrode on the EEG helmet corresponds to the area between the forehead and the top of the head of the user.
Further, the EEG helmet may consist of a main support, electrodes, a control circuit board, an antenna and a battery. The main support can be made of hard, light acrylonitrile-butadiene-styrene (ABS) by 3D printing. For the electrodes, flexible dry Ag/AgCl electrodes are chosen to ensure effective contact between electrode and scalp. The control circuit board can be an open-source wireless EEG acquisition board, such as an OpenBCI wireless acquisition board, or a commercial embedded acquisition board such as the 8-channel BCIduino Bluetooth acquisition module; it can also be built from discrete components, in which case the control circuit board may comprise an operational-amplifier circuit, an A/D converter, a Bluetooth transceiver and the like. The sampling rate of the control circuit board is not lower than 128 Hz and the sampling depth not lower than 12 bits.
The EEG helmet of this scheme uses a small number of electrodes: it may contain, but is not limited to, 3 basic signal electrodes and 1 ground electrode. The 3 basic signal electrodes are placed at the vertex, the left occipito-temporal area and the right occipito-temporal area, corresponding respectively to positions Cz, PO7 and PO8 of the international 10-20 EEG system. The ground electrode is placed midway between FPz and Cz. The Cz, PO7 and PO8 electrodes are used for brain-computer-interface decoding. When the signal quality is poor or environmental interference is large, recording electrodes may be added to the EEG helmet within about 2 cm of these electrodes, supplementing midline positions such as Pz and Oz.
The response time of the visual stimulator is less than 1 ms; the visual stimulator may be VR glasses, AR glasses, MR glasses, a display or a projector. The visual stimulator of the present scheme presents the visual stimuli to the user. The scheme adopts virtual-reality (VR) glasses with a response time below 1 ms, which shut out interference from the surroundings and help keep the user absorbed. Because EEG signals are time-locked, the temporal accuracy of stimulus presentation is crucial for EEG decoding; the scheme therefore requires a presentation timing accuracy below 1 ms. Besides VR displays, the invention can also use augmented/mixed-reality (AR/MR) glasses, displays, projectors and the like with low response delay.
As for the computing terminal, it includes a picture material library for generating visual stimulus data; the library may include any combination of one or more of the following: object pictures, outline pictures and composite pictures.
Specifically, since the stimuli are presented with pictures, a picture material library is constructed. The collected library includes 3 types: a) object pictures, photographs of objects taken in daily life and in nature; b) outline pictures, obtained by processing a photographed object picture with computer-graphics methods to extract its key contours and textures; c) composite pictures, drawn by hand or generated by computer. In specific experiments, composite pictures gave the best decoding effect; in practice, suitable pictures should therefore be selected from the material library for stimulus presentation according to the user's individual condition.
In a specific embodiment, the driving the visual stimulator to perform stimulus presentation to the user in a variable mode based on the visual stimulus data in step 101 includes:
determining a plurality of targets included in the visual stimulus data; for a given target, selecting other targets different from that target, wherein the other targets include: a graphic or an image; and driving the visual stimulator to alternately display the target and the other targets, thereby presenting stimuli to the user.
In particular, the stimuli are presented in a variable mode, i.e., the same target option is presented with different and varying visual stimuli. In a specific example, if the user needs to select one of several discs displayed separately on a screen, the user first gazes at that disc. When a stimulus is presented, the disc is momentarily replaced by a different graphic/image stimulus and then restored to its original shape. From the user's perspective, the disc appears to be rapidly replaced by a series of different graphics/images. The user's task is to determine whether a pre-designated one of the alternate graphics/images (e.g., a triangle) appears. Once a sequence of specified length has been presented, the computer determines the disc the user is gazing at and marks it with a checkmark or other indication. This task requires the user to attend to and recognize the graphics replacing the disc; at the same time, the variable presentation of the replacement graphics evokes a stronger visual response component, which enables correct brain-computer interface decoding even under a weak attention condition. The specific replacement graphics/images can be tested and optimized according to the personal condition of the user. Besides complete replacement with a different graphic/image, the present solution may also present stimuli using changes in graphic/image size, color, form, position, motion state, etc., and combinations thereof. In actual measurement experiments, the decoding performance of variable-mode stimulation was better than that of other brain-computer interfaces.
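As a minimal sketch of one variable-mode presentation round described above, the gazed disc is momentarily replaced by a sequence of graphics, and the user's task is to report whether the pre-designated graphic appeared. All function and graphic names here are illustrative assumptions, not the patent's implementation:

```python
import random

def build_flash_sequence(graphics, length, target, include_target, rng=None):
    """Build the sequence of replacement graphics for one presentation.

    `target` is the pre-designated graphic (e.g. a triangle); when
    `include_target` is True it is inserted exactly once at a random
    position, otherwise the sequence contains only other graphics.
    """
    rng = rng or random.Random()
    pool = [g for g in graphics if g != target]
    seq = [rng.choice(pool) for _ in range(length)]
    if include_target:
        seq[rng.randrange(length)] = target  # show the designated graphic once
    return seq

seq = build_flash_sequence(["circle", "square", "triangle", "star"], 8,
                           "triangle", include_target=True,
                           rng=random.Random(0))
print("triangle" in seq)   # True: the designated graphic is present
```

In a real system the user's ERP response to each flash, rather than a button press, would report whether the designated graphic was detected.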
Further, the driving the visual stimulator to perform stimulus presentation to the user in a variable mode based on the visual stimulus data in step 101 includes: determining a plurality of targets included in the visual stimulus data; arranging a plurality of selected targets in a plurality of targets in a preset mode, wherein the arranged targets are divided into a plurality of parts; each said portion comprising a plurality of said targets; each section is displayed in turn while presenting each target in each section in a variable pattern for presentation of stimuli to the user.
In the actual test procedure, the user first selects a target option from up to several tens of options displayed on the screen. These options are not presented simultaneously but sequentially according to certain rules. To improve presentation efficiency, different arrangements may be employed; a row/column layout is taken as an example below. Assume there are 20 options (discs) on the screen. Even if they are randomly distributed across the screen, they can be mapped onto a virtual 4-row, 5-column matrix grid, one option per cell. Each time, one row or one column of options is randomly selected to present stimuli synchronously, so only 4 + 5 = 9 stimuli are needed to traverse all 20 options, thereby increasing presentation efficiency. When the computer makes its judgment, the attended row and column are first determined from the ERP signals, and their intersection is the target option. Other more efficient arrangements, such as binomial coding, may also be used with the present invention.
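The virtual row/column mapping can be stated compactly in code. The helper names and 0-based indexing are illustrative; the grouping itself follows the 4-row, 5-column example above:

```python
def row_column_groups(n_rows, n_cols):
    """Stimulus groups for a virtual n_rows x n_cols grid of options.

    Option k (0-based) is mapped to cell (k // n_cols, k % n_cols).
    Each group lists the options flashed together, so n_rows + n_cols
    presentations cover all n_rows * n_cols options.
    """
    rows = [[r * n_cols + c for c in range(n_cols)] for r in range(n_rows)]
    cols = [[r * n_cols + c for r in range(n_rows)] for c in range(n_cols)]
    return rows + cols

def target_option(attended_row, attended_col, n_cols):
    """The attended row and column intersect at the target option."""
    return attended_row * n_cols + attended_col

groups = row_column_groups(4, 5)
print(len(groups))              # 9 presentations cover 20 options
print(target_option(2, 3, 5))   # 13: the option at row 2, column 3
```

Every option belongs to exactly one row group and one column group, which is what lets the intersection of the attended row and column identify the target.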
In a specific embodiment, the visual stimulus data includes a plurality of test items, and different test items correspond to different difficulty levels. Accordingly, driving the visual stimulator to perform stimulus presentation to the user in a variable mode based on the visual stimulus data in step 101 includes: determining the test items included in the visual stimulus data and the difficulty level of each test item; and selecting the test items in order of difficulty from low to high to drive the visual stimulator to present stimuli to the user in a variable mode.
Specifically, the required concentration level can be adjusted according to the fed-back degree of concentration, realizing progressive concentration training. Progressive training in the present solution can be understood as different difficulty levels. Ten difficulty grades (1-10) are set according to the number of stimulus repetitions: grade 1 means each stimulus is presented only once and represents the highest difficulty; grade 10 means ten presentations per stimulus and represents the lowest difficulty. The user can select the appropriate grade for training according to his or her attention level.
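The grade-to-repetition rule is simple enough to state directly, and averaging the repeated presentations is the usual reason more repetitions make the task easier: noise that is not time-locked to the stimulus is attenuated by the average. This is a sketch under those assumptions, not the patent's implementation:

```python
import numpy as np

def repetitions_for_level(level):
    """Grade N presents each stimulus N times: grade 1 (one
    presentation) is the hardest, grade 10 the easiest."""
    if not 1 <= level <= 10:
        raise ValueError("difficulty level must be in 1..10")
    return level

def average_epochs(epochs):
    """Average the EEG epochs of repeated presentations of one stimulus."""
    return np.mean(np.asarray(epochs, dtype=float), axis=0)

print(repetitions_for_level(1), repetitions_for_level(10))  # 1 10
print(average_epochs([[1.0, 2.0], [3.0, 4.0]]))             # [2. 3.]
```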
In addition, the scene in which stimuli are presented is an interactive scene; the interaction scenarios include search/match scenes and elimination scenes. In particular, since the present solution allows complex control outputs, attractive interaction scenarios can be designed. By way of example, the present solution may construct the following two interaction scenarios. First, a search/match scene: several objects are placed in a comfortable, natural setting (e.g., a number of different fruits at different locations in a bush), and the user must search for the intended object (e.g., an apple). Second, an elimination scene: several targets are placed in a comfortable, natural setting, and the user must gaze at each target and remove them one by one to obtain a reward. The scenes and stimuli may be presented in 3D form (e.g., developed on the Unity3D platform) or in 2D planar form; in early experiments, better interaction results were obtained in a 2D environment.
In a specific embodiment, the method further comprises the following steps: generating feedback information based on the attention level data; modifying the visual stimulus data based on the feedback information to drive the visual stimulator to display different content than the visual stimulus data originally;
further comprising: presenting the attention level data to the user.
Specifically, the solution further provides a concentration-feedback function. Based on the recorded EEG signals, the calculated concentration value reflecting the user's attention level is optionally fed back to the user on a display. The feedback may take the form of a histogram indicating the intensity of attention, a change in graphic color, or the like.
In this solution, the brain-computer interface technology under a weak attention condition can control concentration-based interaction scenarios that are more complex and interesting than those of traditional methods, while allowing the user to complete brain-computer interface control smoothly at a lower attention level. The required attention level can be adjusted as training progresses, realizing progressive attention training with an effect superior to other existing attention training methods. Fewer electrodes are needed (only 3 signal electrodes for decoding) than in other ERP brain-computer interfaces (which usually require more than 8 signal electrodes), reducing equipment cost and complexity. Moreover, the solution does not involve the peripheral nerves, effectively avoiding the distraction caused by limb movement in other interaction modes and yielding a better training effect.
Example 2
Embodiment 2 of the invention also discloses an attention training method. On the basis of embodiment 1, the EEG helmet further includes an FPz signal electrode whose position on the EEG helmet corresponds to the user's forehead. Thus, the electrodes in the EEG helmet may include, but are not limited to, 4 basic signal electrodes and 1 ground electrode. The 4 basic signal electrodes are placed on the forehead, the vertex, the left occipito-temporal area, and the right occipito-temporal area, corresponding to the positions FPz, Cz, PO7, and PO8 of the international 10-20 EEG system. The ground electrode is placed midway between FPz and Cz. In this solution, FPz is mainly used to record theta and beta waves for assessing the attention level from the EEG rhythm, while the 3 electrodes Cz, PO7, and PO8 are used for brain-computer interface decoding. When the signal quality is poor or environmental interference is large, recording electrodes may be added to the EEG helmet within about 2 cm of these electrodes, for example supplementary electrodes at the midline positions Pz and Oz. Since the present solution also provides a method for assessing the attention level from the EEG decoding results, and the FPz electrode only provides an additional EEG-rhythm-based reference for the attention level, the number of electrodes can be further reduced by removing the FPz electrode.
Based on this, the solution further includes: the step of "assessing, by the computing terminal, the user's attention level based on the EEG signal" includes: acquiring, by the computing terminal, the EEG signals from the FPz signal electrode; determining in real time, using a fast Fourier transform, the power spectrum of the acquired EEG signal within a time window of a specific length ending at the current time; calculating the ratio of the energy in the beta band to the energy in the theta band of the power spectrum; and taking the ratio as the attention level, where the greater the ratio, the higher the attention level.
Specifically, the attention level is calculated from the EEG rhythm recorded at the forehead FPz electrode. A fast Fourier transform is used to estimate, in real time, the power spectrum of the EEG signal in a time window of a specific length ending at the current time, and the ratio of the energy in the beta band (13-18 Hz) to the energy in the theta band (3-7 Hz) is calculated as the concentration value. The larger the ratio, the higher the attention level. By using time windows of different lengths, both an instantaneous attention level (e.g., a 1 s window) and a long-term attention level (e.g., a 30 s window) can be given.
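The beta/theta ratio above can be sketched with NumPy's real FFT. The band edges (13-18 Hz, 3-7 Hz) follow the text; the Hann window, the plain periodogram, and the small denominator floor are assumptions of this sketch:

```python
import numpy as np

def attention_ratio(eeg, fs, window_s=1.0, beta=(13.0, 18.0), theta=(3.0, 7.0)):
    """Beta/theta band-power ratio over the most recent `window_s`
    seconds of a single-channel (FPz) EEG trace."""
    n = int(window_s * fs)
    seg = np.asarray(eeg[-n:], dtype=float)          # most recent window
    spec = np.abs(np.fft.rfft(seg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def band_power(lo, hi):
        return spec[(freqs >= lo) & (freqs <= hi)].sum()

    # Small floor avoids division by zero on pathological inputs.
    return band_power(*beta) / max(band_power(*theta), 1e-12)

fs = 250
t = np.arange(5 * fs) / fs
focused = np.sin(2 * np.pi * 15 * t)   # beta-dominated test signal
drowsy = np.sin(2 * np.pi * 5 * t)     # theta-dominated test signal
print(attention_ratio(focused, fs) > attention_ratio(drowsy, fs))  # True
```

Using a 1 s window gives the instantaneous index and a 30 s window the long-term index, exactly as in the text; only the `window_s` argument changes.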
Example 3
Embodiment 3 of the invention also discloses an attention training method. On the basis of embodiments 1 and 2, the computing terminal further includes an EEG decoder; the EEG decoder is trained based on the visual stimulus data;
said "assessing by said computing terminal said user's attention level based on said EEG signal" in step 103 comprises: decoding the EEG signal through the EEG decoder to obtain a decoding result; evaluating the user's attention level based on the decoding result.
Further, the decoding result is a score output by the EEG decoder; the "evaluating the attention level of the user based on the decoding result" includes: determining a decoding probability based on the score, wherein the higher the score, the greater the corresponding decoding probability; converting the decoding probabilities to concentration values of the visual stimulus data at a lowest level of difficulty, wherein the higher the decoding probability, the greater the corresponding concentration value; determining a level of attention based on the concentration value.
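The score-to-concentration chain above only requires two monotonically increasing mappings. The logistic function and the 0-100 concentration scale in this sketch are illustrative assumptions; the patent does not fix either form:

```python
import math

def concentration_from_score(score, scale=1.0):
    """Map a classifier score to a decoding probability (logistic,
    assumed) and then to a concentration value (0-100 scale, assumed).
    Both steps are increasing, so a higher score always yields a
    higher concentration value, as the text requires."""
    probability = 1.0 / (1.0 + math.exp(-score / scale))
    return 100.0 * probability

print(concentration_from_score(2.0) > concentration_from_score(-2.0))  # True
```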
Specifically, decoding involves: first, analyzing the EEG signal generated by the user to determine the target the user is focusing on; second, determining the target presented based on the visual stimulus data; and finally, comparing the two targets at the same time point to determine the degree of match.
Furthermore, a trained classifier is arranged in the EEG decoder; the "decoding the EEG signal by the EEG decoder to obtain a decoding result" includes: decoding the EEG signal through the trained classifier to obtain the matching degree of the target concerned by the user and the target shown in the visual stimulation data at the same time point; the classifier is trained based on the visual stimulus data. The specific classifier may be an LDA or SVM classifier.
Said "decoding said EEG signal by said EEG decoder" comprises: segmenting the temporally continuous EEG signal into a plurality of data segments of a preset time length; down-sampling each data segment to a sampling rate of 20 Hz and splicing the segments into feature vectors in the order of the electrode channels; and feeding the feature vectors into the EEG decoder for decoding.
The classifier is obtained by supervised training with preset feature vectors. The preset feature vectors are obtained by driving the visual stimulator with the visual stimulus data to present stimuli, recording a plurality of EEG signals while the user is instructed to attend to a specified target, and finally splicing the recorded EEG signals.
Specifically, the classifier is pre-trained before decoding. A specific method may be as follows: at difficulty level 10, the user is instructed to attend to a specified target option; labeled data for the 20 options are recorded and spliced into feature vectors; the classifier is trained in a supervised manner; and the resulting classifier is used for EEG decoding. The resulting classifier may, for example, be an LDA or SVM classifier.
the EEG is decoded based on a pre-trained classifier. Firstly, dividing the EEG signal which is continuously recorded into data segments with the length of 600ms according to the stimulus generation time; then, the data are segmented and down-sampled to a sampling rate of 20Hz, and are sequentially spliced into a feature vector according to channels; and finally, sending the feature vectors into a classifier for decoding. The present invention evaluates from EEG decoding results. Specifically, the classifier output scores are mapped to decoding probabilities and quantized to concentration values at the standard difficulty level 1. The higher the decoding probability, the higher the attention level.
Example 4
Embodiment 4 of the present invention further discloses a terminal, which is applied to a system including an EEG helmet, a visual stimulator and a computing terminal, wherein the computing terminal is respectively connected to the EEG helmet and the visual stimulator, as shown in fig. 3, and includes a processor for executing the method described in embodiments 1 to 3.
Specifically, embodiment 4 of the present invention further discloses other related technical features, and for the specific related technical features, reference is made to the records in embodiments 1 to 3, which are not described herein again.
The embodiment of the invention provides an attention training method and a terminal, which are applied to a system comprising an EEG helmet, a visual stimulator and a computing terminal, wherein the computing terminal is respectively connected with the EEG helmet and the visual stimulator, and the method comprises the following steps: generating visual stimulation data through the computing terminal, and driving the visual stimulator to perform stimulation presentation on a user in a variable mode based on the visual stimulation data; acquiring an EEG signal of the user through an EEG helmet and sending the EEG signal to the computing terminal; evaluating, by the computing terminal, the user's attention level based on the EEG signal. The variable mode is adopted in the scheme to stimulate and present the user, so that the user can be ensured to successfully finish training under a lower attention level.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above embodiment numbers are merely for description and do not represent the merits of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present invention, however, the present invention is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present invention.

Claims (17)

1. An attention training method is applied to a system comprising an EEG helmet, a visual stimulator and a computing terminal, wherein the computing terminal is respectively connected with the EEG helmet and the visual stimulator, and the method comprises the following steps:
generating visual stimulation data through the computing terminal, and driving the visual stimulator to perform stimulation presentation on a user in a variable mode based on the visual stimulation data;
acquiring an EEG signal of the user through an EEG helmet and sending the EEG signal to the computing terminal;
evaluating, by the computing terminal, the user's attention level based on the EEG signal.
2. The method of claim 1, further comprising:
generating feedback information based on the attention level data;
modifying the visual stimulus data based on the feedback information to drive the visual stimulator to display different content than the visual stimulus data originally;
further comprising: presenting the attention level data to the user.
3. The method of claim 1, wherein the EEG helmet comprises 3 signal electrodes and 1 ground electrode; wherein, the positions of the 3 signal electrodes on the EEG helmet respectively correspond to the head top, the left occipital-temporal area and the right occipital-temporal area of the user; the location of the ground electrode on the EEG helmet corresponds to the area between the forehead and the top of the head of the user.
4. The method of claim 3 further comprising FPz signal electrodes in the EEG helmet, the position of the FPz signal electrodes on the EEG helmet corresponding to the forehead of the user.
5. The method of claim 4, further comprising: said "assessing by said computing terminal said user's attention level based on said EEG signal" comprises:
acquiring, by the computing terminal, the EEG signals from the FPz signal electrodes;
determining in real time, using a fast Fourier transform, a power spectrum of the acquired EEG signal within a time window of a particular length from a current time instant onwards;
calculating the ratio of the energy in the beta frequency band to the energy in the theta frequency band in the power spectrum;
the ratio is taken as the attention level, wherein the greater the ratio, the higher the attention level.
6. The method of claim 1, wherein the response time of the visual stimulator is less than 1 ms; the visual stimulator includes: VR glasses, AR glasses, MR glasses, displays, or projectors.
7. The method of claim 1, wherein the driving the visual stimulator to stimulate presentation to the user in a variable pattern based on the visual stimulus data comprises:
determining a plurality of targets included in the visual stimulus data;
for the same target, selecting other targets different from the target, wherein the other targets comprise: a graphic or image;
and driving the visual stimulator to alternately display the target and the other targets so as to stimulate and present the user.
8. The method of claim 1, wherein the driving the visual stimulator to stimulate presentation to the user in a variable pattern based on the visual stimulus data comprises:
determining a plurality of targets included in the visual stimulus data;
arranging a plurality of selected targets in a plurality of targets in a preset mode, wherein the arranged targets are divided into a plurality of parts; each said portion comprising a plurality of said targets;
each section is displayed in turn while presenting each target in each section in a variable pattern for presentation of stimuli to the user.
9. The method of claim 1, wherein the visual stimulus data includes a plurality of test items, different test items corresponding to different difficulty ratings;
the driving the visual stimulator to perform stimulus presentation to the user in a variable mode based on the visual stimulus data includes:
determining the test items included in the visual stimulus data and a difficulty rating for each of the test items;
and sequentially selecting the test items according to the difficulty grades from low to high to drive the visual stimulator to stimulate and present the user in a variable mode.
10. The method of claim 1, wherein the scene in which the stimulus presentation is performed is an interactive scene; the interaction scenario includes: searching and matching scenes and eliminating scenes.
11. The method of claim 1, wherein the computing terminal comprises a picture material library for generating visual stimulus data, wherein the picture material library comprises any combination of one or more of: object pictures, outline pictures, composite pictures.
12. The method of claim 1 or 11, wherein an EEG decoder is included in the computing terminal; the EEG decoder is trained based on the visual stimulus data;
said "assessing by said computing terminal said user's attention level based on said EEG signal" comprises:
decoding the EEG signal through the EEG decoder to obtain a decoding result;
evaluating the user's attention level based on the decoding result.
13. The method of claim 12, wherein the decoding result is a score output by the EEG decoder;
the "evaluating the attention level of the user based on the decoding result" includes:
determining a decoding probability based on the score, wherein the higher the score, the greater the corresponding decoding probability;
converting the decoding probabilities to concentration values of the visual stimulus data at a lowest level of difficulty, wherein the higher the decoding probability, the greater the corresponding concentration value;
determining a level of attention based on the concentration value.
14. The method of claim 12, wherein a trained classifier is provided in the EEG decoder;
the "decoding the EEG signal by the EEG decoder to obtain a decoding result" includes:
decoding the EEG signal through the trained classifier to obtain the matching degree of the target concerned by the user and the target shown in the visual stimulation data at the same time point; the classifier is trained based on the visual stimulus data.
15. The method as claimed in claim 12, wherein said "decoding the EEG signal by the EEG decoder" comprises:
segmenting the EEG signal that is continuous in time into a plurality of data segments of a preset time length;
down-sampling each data segment to a sampling rate of 20 Hz, and sequentially splicing the data segments into feature vectors according to the sequence of electrode channels;
and leading the feature vector into the EEG decoder for decoding.
16. The method of claim 14, wherein the classifier is supervised training with preset feature vectors;
the preset feature vector is obtained by driving the visual stimulator to stimulate and present through the visual stimulation data, recording a plurality of EEG signals when a user is instructed to pay attention to a specified target, and finally splicing the recorded EEG signals.
17. A terminal for use in a system comprising an EEG helmet, a visual stimulator and a computing terminal, wherein said computing terminal is connected to said EEG helmet and said visual stimulator, respectively, comprising a processor for performing the method of any one of claims 1-16.
CN202011437300.8A 2020-12-10 2020-12-10 Attention training method and terminal Pending CN112545517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011437300.8A CN112545517A (en) 2020-12-10 2020-12-10 Attention training method and terminal


Publications (1)

Publication Number Publication Date
CN112545517A true CN112545517A (en) 2021-03-26

Family

ID=75060420




Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103656833A (en) * 2013-12-24 2014-03-26 天津师范大学 Wearable brain wave attention training instrument
TWI604823B (en) * 2015-08-18 2017-11-11 國立交通大學 A brainwaves based attention feedback training method and its system thereof
CN106691441A (en) * 2016-12-22 2017-05-24 蓝色传感(北京)科技有限公司 Attention training system based on brain electricity and movement state feedback and method thereof
CN109645994A (en) * 2019-01-04 2019-04-19 华南理工大学 A method of based on brain-computer interface system aided assessment vision positioning
CN109947250A (en) * 2019-03-19 2019-06-28 中国科学院上海高等研究院 Brain-computer interface communication means and device, computer readable storage medium and terminal
CN110090018A (en) * 2019-05-06 2019-08-06 安徽建筑大学 A kind of focus analysis system based on brain dateline band
CN110222643A (en) * 2019-06-06 2019-09-10 西安交通大学 A kind of Steady State Visual Evoked Potential Modulation recognition method based on convolutional neural networks

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114209343A (en) * 2021-04-29 2022-03-22 上海大学 Portable attention training system and method based on AR and SSVEP
WO2022240355A1 (en) * 2021-05-12 2022-11-17 Nanyang Technological University Mental arousal level regulation system and method
CN113288147A (en) * 2021-05-31 2021-08-24 杭州电子科技大学 Mild cognitive impairment rehabilitation evaluation system based on EEG and neurofeedback technology
CN113440151A (en) * 2021-08-03 2021-09-28 合肥科飞康视科技有限公司 Concentration detection system, detection method and use method of system
CN113440151B (en) * 2021-08-03 2024-04-12 合肥科飞康视科技有限公司 Concentration force detection system, detection method and use method of system
CN113679386A (en) * 2021-08-13 2021-11-23 北京脑陆科技有限公司 Method, device, terminal and medium for recognizing attention
CN113576497A (en) * 2021-08-30 2021-11-02 清华大学深圳国际研究生院 Visual steady-state evoked potential detection system oriented to binocular competition
CN113576497B (en) * 2021-08-30 2023-09-08 清华大学深圳国际研究生院 Visual steady-state evoked potential detection system for binocular competition
CN113599773A (en) * 2021-09-22 2021-11-05 上海海压特智能科技有限公司 Gait rehabilitation training system and method based on rhythmic visual stimulation
CN115120240A (en) * 2022-08-30 2022-09-30 山东心法科技有限公司 Sensitivity evaluation method, equipment and medium for special industry target perception skills


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210326