CN110262657B - Asynchronous vision-induced brain-computer interface method based on 'switch to target' - Google Patents

Asynchronous vision-induced brain-computer interface method based on 'switch to target'

Info

Publication number
CN110262657B
Authority
CN
China
Prior art keywords
switch
target
stimulation
user
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910492853.4A
Other languages
Chinese (zh)
Other versions
CN110262657A (en)
Inventor
谢俊
张玉彬
杜光景
薛涛
曹国智
徐光华
李敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201910492853.4A priority Critical patent/CN110262657B/en
Publication of CN110262657A publication Critical patent/CN110262657A/en
Application granted granted Critical
Publication of CN110262657B publication Critical patent/CN110262657B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Dermatology (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

An asynchronous vision-induced brain-computer interface method based on 'switch to target': electrodes are first placed and an eye tracker is installed and calibrated; the user then selects a switch unit through the constructed eye-tracker switch interface, the visual stimulation interface is entered, the stimulation unit corresponding to that switch unit becomes the target stimulus, and target identification is performed on the acquired electroencephalogram signals to complete the identification task. The invention combines fast, sensitive eye-position tracking with an asynchronous vision-induced brain-computer interface, which reduces the false-triggering rate and lets the system respond quickly, while also improving comfort and reducing fatigue to a certain degree.

Description

Asynchronous vision-induced brain-computer interface method based on 'switch to target'
Technical Field
The invention relates to the technical field of neural engineering and brain-computer interfaces in biomedical engineering, and in particular to an asynchronous vision-induced brain-computer interface method based on 'switch to target'.
Background
The brain-computer interface is a novel form of human-computer interaction in which information is exchanged between the brain and the external environment directly, without relying on muscle tissue or peripheral nerve pathways; it is therefore widely applied in medical rehabilitation and industrial control. The steady-state visual-evoked brain-computer interface induces a brain response by having the user gaze at a visual stimulus flickering at a specific frequency; because it offers strong interference resistance and a high information transfer rate and requires no training for ordinary users, it is the most practical signal type among common brain-computer interfaces.
At present there are two common implementations of the steady-state evoked brain-computer interface. One is the synchronous mode, whose drawback is that the start and end times of a task are determined by the system, leaving the user no freedom of choice. The other is the asynchronous mode, which is more flexible: the user can autonomously decide when a task starts and stops, which represents a more natural form of interaction, and in recent years the asynchronous mode has gradually become a key research direction for brain-computer interfaces. However, because a conventional asynchronous brain-computer interface acquires electroencephalogram signals continuously, environmental interference and other factors can cause targets to be misjudged; adding a brain-computer switch effectively reduces this misjudgement rate. The 'switch' of an asynchronous brain-computer interface falls into two categories according to signal type. In the first, a homogeneous signal serves as the switch, i.e. an electroencephalogram signal itself; this causes different brain signals to overlap in the time, frequency and spatial domains and increases the complexity of signal identification. In the second, a heterogeneous signal serves as the switch, i.e. a signal of a different type from the electroencephalogram, such as an electro-oculogram signal; this approach effectively reduces false triggering.
Methods that use a heterogeneous signal as the switch currently adopt a one-to-many multi-way switch, in which one switch corresponds to several stimulation units: after the upstream brain-computer switch is turned on, the system proceeds to the downstream multi-target brain-computer interface. This arrangement not only introduces a positioning delay from the switch to the target, but the line of sight may also cross non-target stimulation units while moving from the brain-computer switch to the target stimulation unit, causing false triggering of non-target units; this lowers accuracy, lengthens the execution time of the brain-computer interface, and harms its practicality.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an asynchronous vision-induced brain-computer interface method based on 'switch to target', which reduces the false-triggering rate and enables the system to respond quickly, while also improving comfort and reducing fatigue to a certain degree.
In order to achieve the purpose, the invention adopts the technical scheme that:
an asynchronous vision-induced brain-computer interface method based on 'switch to target' comprises the following steps:
step 1, performing hardware connection:
1.1) place measuring electrodes A1, A2, …, An at n positions over the occipital (visual) region of the user's head, place a reference electrode R on one earlobe of the user, and place a ground electrode G at the forehead position Fpz;
1.2) install the eye tracker: the eye tracker is placed centrally below the computer screen; the angle between the computer screen and the horizontal plane is kept within 90-120 degrees; the distance m between the user and the computer screen is adjusted through the eye tracker's calibration program, with m in the range 40-90 cm;
step 2, entering the 'switch to target' target selection interface: the 'switch to target' target selection interface is divided into a switch interface and a stimulation interface;
the switch interface consists of N switch units S1, S2, …, SN on the screen, each a circle D pixels in diameter; the current gaze position, obtained by averaging the horizontal and vertical coordinates of the left-eye and right-eye fixation positions, is displayed on the computer screen synchronously in real time; when the user's gaze position falls inside a switch unit the switch is turned on, and when it does not the switch remains off;
the stimulation interface consists of N checkerboard motion stimulation units T1, T2, …, TN displayed on the screen at the positions of the N switch units of the switch interface; the stimulation units contract and expand under sinusoidal or cosinusoidal modulation to form the visual stimulation;
the switch units and the stimulation units occupy the same positions and are presented in a time-shared manner: when the user's gaze position falls inside any switch unit, that switch is turned on, the switch units disappear and the stimulation units appear, and the user gazes at the stimulation target at that position to perform the identification task;
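The gaze check that turns a switch unit on can be summarized as a point-in-circle test on the averaged binocular gaze coordinates. A minimal sketch in Python follows; the names SwitchUnit, averaged_gaze and gaze_in_switch and the example coordinates are illustrative assumptions rather than part of the patent.

from dataclasses import dataclass
import math

@dataclass
class SwitchUnit:
    cx: float        # centre x of the switch unit, in screen pixels
    cy: float        # centre y of the switch unit, in screen pixels
    diameter: float  # diameter D of the circular switch unit, in pixels

def averaged_gaze(left_xy, right_xy):
    # Average the horizontal and vertical coordinates of the left/right eye fixations.
    return ((left_xy[0] + right_xy[0]) / 2.0, (left_xy[1] + right_xy[1]) / 2.0)

def gaze_in_switch(gaze_xy, unit):
    # The switch is "on" while the averaged gaze point lies inside the circular unit.
    return math.hypot(gaze_xy[0] - unit.cx, gaze_xy[1] - unit.cy) <= unit.diameter / 2.0

# Example: switch S1 centred at (480, 270) with D = 100 pixels (illustrative values).
s1 = SwitchUnit(cx=480.0, cy=270.0, diameter=100.0)
gaze = averaged_gaze((478.0, 265.0), (484.0, 272.0))
print(gaze_in_switch(gaze, s1))  # True -> hide switch units, show stimulation units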
step 3, performing target identification on the acquired electroencephalogram signals: the computer synchronously records the stimulation start and end times, the measuring electrodes acquire the raw electroencephalogram signals, and the target gazed at by the user is identified with the canonical correlation analysis (CCA) algorithm and the linear discriminant analysis (LDA) algorithm;
step 4, judging whether the single identification task is finished: during a single identification task, electroencephalogram data segments of equal length are extracted with a time sliding window for target identification; when two adjacent identification results are the same, the result is judged to be the target, the computer indicates the target gazed at by the user on the screen as visual feedback, and the single identification task is judged complete;
and step 5, after the computer finishes the single identification task, returning to step 2: the switch interface is restored, the stimulation units disappear and the switch units reappear, and steps 2, 3 and 4 are repeated for the next target identification task.
In step 3, identifying the target gazed at by the user with the canonical correlation analysis (CCA) algorithm and the linear discriminant analysis (LDA) algorithm specifically comprises the following operations: the raw electroencephalogram signals are first band-pass filtered and notch filtered; the correlation coefficients between the electroencephalogram data and sinusoidal reference signals at the different reversal frequencies are then obtained with the CCA algorithm; finally the correlation coefficients are fed into the LDA algorithm for classification.
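As an illustration of this pipeline, the sketch below chains band-pass and notch filtering, CCA correlation features and LDA classification with SciPy and scikit-learn; the sampling rate, filter band, 50 Hz notch, harmonic count and the training-data variables are assumptions for the example and are not specified by the patent.

import numpy as np
from scipy.signal import butter, filtfilt, iirnotch
from sklearn.cross_decomposition import CCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 1000.0  # sampling rate in Hz (assumed)

def preprocess(eeg):
    # Band-pass 1-45 Hz, then 50 Hz notch; eeg has shape (samples, channels).
    b, a = butter(4, [1.0, 45.0], btype="band", fs=FS)
    eeg = filtfilt(b, a, eeg, axis=0)
    bn, an = iirnotch(50.0, Q=30.0, fs=FS)
    return filtfilt(bn, an, eeg, axis=0)

def reference(freq, n_samples, n_harmonics=2):
    # Sine/cosine reference signals at the reversal frequency and its harmonics.
    t = np.arange(n_samples) / FS
    comps = []
    for h in range(1, n_harmonics + 1):
        comps += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(comps)

def cca_coefficient(eeg, ref):
    # Largest canonical correlation between the EEG window and one reference set.
    x, y = CCA(n_components=1).fit_transform(eeg, ref)
    return np.corrcoef(x[:, 0], y[:, 0])[0, 1]

def feature_vector(eeg, stim_freqs):
    # One CCA coefficient per candidate stimulation frequency.
    return np.array([cca_coefficient(eeg, reference(f, eeg.shape[0])) for f in stim_freqs])

# Training and use (train_X, train_y, window and stim_freqs are assumed to exist):
# lda = LinearDiscriminantAnalysis().fit(train_X, train_y)
# target = lda.predict(feature_vector(preprocess(window), stim_freqs)[None, :])[0]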
The invention has the beneficial effects that: the method combines the eye movement tracking technology with the asynchronous steady state vision-induced brain-computer interface technology, and shows the following advantages:
(1) compared with the traditional synchronous brain-computer interaction mode, the method introduces eye-tracking technology into the application and implementation of the brain-computer interface, and the asynchronous eye-movement switch increases the user's autonomy and reduces user fatigue;
(2) compared with the traditional asynchronous brain-computer interaction mode, the method replaces the one-to-many switch with a one-to-one brain-computer switch, which reduces the false-triggering rate, lets the system respond quickly, improves the practicality of the brain-computer interface, and makes the brain-computer interaction process more friendly.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of a switch interface and a stimulation interface in accordance with the present invention.
FIG. 3 is a schematic diagram of intercepting electroencephalogram data with the same length through a time sliding window to perform target identification according to the embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Referring to fig. 1, an asynchronous vision-induced brain-computer interface method based on "switch to target" comprises the following steps:
step 1, performing hardware connection:
1.1) place the measuring electrodes at positions PO3, POz, PO4, O1, Oz and O2 over the occipital (visual) region of the user's head, place the reference electrode R on one earlobe of the user, and place the ground electrode G at the forehead position Fpz;
1.2) install the eye tracker: the eye tracker is placed centrally below the computer screen with its top edge flush with the lower edge of the screen; the angle between the computer screen and the horizontal plane is kept at 110 degrees; the distance m between the user and the computer screen is adjusted to 60 ± 2 cm through the eye tracker's calibration program. Calibration uses the five-point method: 5 white calibration points of equal diameter dr (dr in the range 0-10 mm) are presented to the user, one at the centre of the computer screen and four near its corner vertices, at a distance of b1 = 54 mm from the upper/lower screen edge and b2 = 77 mm from the left/right edge. The user observes the 5 calibration points presented on the computer screen in sequence, and the eye tracker collects the visual parameter information and presents the calibration result on the computer screen, completing calibration;
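For illustration, the helper below computes the five calibration-point centres used by the five-point method (the screen centre plus four near-corner points offset b1 = 54 mm from the upper/lower edge and b2 = 77 mm from the left/right edge); the function name and the example display size are assumptions.

def calibration_points(screen_w_mm, screen_h_mm, b1=54.0, b2=77.0):
    # Five calibration-point centres in millimetres, origin at the top-left screen corner.
    centre = (screen_w_mm / 2.0, screen_h_mm / 2.0)
    corners = [
        (b2, b1),                              # near top-left corner
        (screen_w_mm - b2, b1),                # near top-right corner
        (b2, screen_h_mm - b1),                # near bottom-left corner
        (screen_w_mm - b2, screen_h_mm - b1),  # near bottom-right corner
    ]
    return [centre] + corners

# Example for a 527 mm x 296 mm display area (an assumed 24-inch monitor).
for point in calibration_points(527.0, 296.0):
    print(point)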
step 2, referring to fig. 2, entering the 'switch to target' target selection interface: the 'switch to target' target selection interface is divided into a switch interface and a stimulation interface;
the switch interface consists of 4 switch units S1, S2, S3 and S4 on the screen, each a circle with a diameter D of 100 pixels; the current gaze position, obtained by averaging the horizontal and vertical coordinates of the left-eye and right-eye fixation positions, is displayed on the computer screen synchronously in real time; when the user's gaze position falls inside a switch unit the switch is turned on, and when it does not the switch remains off;
the stimulation interface consists of 4 checkerboard motion stimulation units T1, T2, T3 and T4 displayed on the screen at the positions of the 4 switch units of the switch interface; the stimulation units contract and expand under sinusoidal modulation to form the visual stimulation (a sketch of this modulation is given after the next paragraph);
the switch units and the stimulation units occupy the same positions and are presented in a time-shared manner: when the user's gaze position falls inside any switch unit, that switch is turned on, the switch units disappear and the stimulation units appear, and the user gazes at the stimulation target at that position to perform the identification task;
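As noted above, the stimulation units contract and expand under sinusoidal modulation; a minimal sketch of the frame-by-frame scale factor is given below, where the refresh rate, modulation depth and stimulation frequency are illustrative assumptions.

import math

REFRESH = 60.0  # screen refresh rate in Hz (assumed)

def scale_factor(frame_idx, freq_hz, depth=0.1):
    # Scale of the checkerboard stimulus at a given frame: it oscillates about 1
    # at the stimulation frequency, producing the contraction/expansion motion.
    t = frame_idx / REFRESH
    return 1.0 + depth * math.sin(2.0 * math.pi * freq_hz * t)

# One second of scale factors for a unit modulated at an assumed 8.57 Hz.
scales = [scale_factor(i, 8.57) for i in range(int(REFRESH))]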
step 3, performing target identification on the acquired electroencephalogram signals: the computer synchronously records the stimulation start and end times, the measuring electrodes acquire the raw electroencephalogram signals, and the target gazed at by the user is identified with the canonical correlation analysis (CCA) algorithm and the linear discriminant analysis (LDA) algorithm, which specifically comprises the following operations: the raw electroencephalogram signals are first band-pass filtered and notch filtered; the correlation coefficients between the electroencephalogram data and sinusoidal reference signals at the different reversal frequencies are then obtained with the CCA algorithm; finally the correlation coefficients are fed into the LDA algorithm for classification;
step 4, judging whether the single identification task is finished: during a single identification task, electroencephalogram data segments of equal length are extracted with a time sliding window for target identification; when two adjacent identification results are the same, the result is judged to be the target, the computer indicates the target gazed at by the user on the screen as visual feedback, and the single identification task is judged complete (a sketch of this sliding-window rule is given after step 5);
and step 5, after the computer finishes the single identification task, returning to step 2: the switch interface is restored, the stimulation units disappear and the switch units reappear, and steps 2, 3 and 4 are repeated for the next target identification task.
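The sliding-window decision rule of step 4 can be sketched as follows, using as defaults the 3-second window and 0.5-second step adopted in the experiment below; classify_window stands in for the CCA + LDA pipeline sketched earlier and is an assumed callable.

def asynchronous_decision(eeg_stream, classify_window, fs=1000, win_s=3.0, step_s=0.5):
    # Cut equal-length windows every step_s seconds and accept a target once two
    # adjacent windows give the same identification result; return None otherwise.
    win, step = int(win_s * fs), int(step_s * fs)
    previous = None
    for start in range(0, eeg_stream.shape[0] - win + 1, step):
        result = classify_window(eeg_stream[start:start + win])
        if previous is not None and result == previous:
            return result  # two adjacent windows agree -> target confirmed
        previous = result
    return None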
In this embodiment, four users (P1-P4) were tested, and electroencephalogram signals were recorded synchronously during the test so that the users' states could be checked, actions such as blinking and moving avoided, and the quality of the electroencephalogram data ensured. Referring to fig. 3, the time sliding window length was set to 3 seconds with a step of 0.5 seconds. For each user, the electrodes were placed according to step 1 and the eye tracker was installed and calibrated. The experiment was then completed according to steps 2, 3, 4 and 5, with the training data used in step 3 coming from P1-P4. When the identification results of two adjacent sliding windows were the same, the task was indicated as finished and the identification target was selected again according to step 2. Each user performed 15 trials for each stimulation unit, with an interval of 1.5 seconds between trials. The accuracy results are shown in Table 1 and show that, with a data length of 3 seconds, the average accuracy for the users in the experiment exceeded 90%.
TABLE 1 accuracy results

Claims (2)

1. An asynchronous vision-induced brain-computer interface method based on 'switch to target', characterized by comprising the following steps:
step 1, performing hardware connection:
1.1) place measuring electrodes A1, A2, …, An at n positions over the occipital (visual) region of the user's head, place a reference electrode R on one earlobe of the user, and place a ground electrode G at the forehead position Fpz;
1.2) install the eye tracker: the eye tracker is placed centrally below the computer screen; the angle between the computer screen and the horizontal plane is kept within 90-120 degrees; the distance m between the user and the computer screen is adjusted through the eye tracker's calibration program, with m in the range 40-90 cm;
step 2, entering the 'switch to target' target selection interface: the 'switch to target' target selection interface is divided into a switch interface and a stimulation interface;
the switch interface consists of N switch units S1, S2, …, SN on the screen, each a circle D pixels in diameter; the current gaze position, obtained by averaging the horizontal and vertical coordinates of the left-eye and right-eye fixation positions, is displayed on the computer screen synchronously in real time; when the user's gaze position falls inside a switch unit the switch is turned on, and when it does not the switch remains off;
the stimulation interface consists of N checkerboard motion stimulation units T1, T2, …, TN displayed on the screen at the positions of the N switch units of the switch interface; the stimulation units contract and expand under sinusoidal or cosinusoidal modulation to form the visual stimulation;
the switch units and the stimulation units occupy the same positions and are presented in a time-shared manner: when the user's gaze position falls inside any switch unit, that switch is turned on, the switch units disappear and the stimulation units appear, and the user gazes at the stimulation target at that position to perform the identification task;
step 3, performing target identification on the acquired electroencephalogram signals: the computer synchronously records the stimulation start and end times, the measuring electrodes acquire the raw electroencephalogram signals, and the target gazed at by the user is identified with the canonical correlation analysis (CCA) algorithm and the linear discriminant analysis (LDA) algorithm;
step 4, judging whether the single identification task is finished: during a single identification task, electroencephalogram data segments of equal length are extracted with a time sliding window for target identification; when two adjacent identification results are the same, the result is judged to be the target, the computer indicates the target gazed at by the user on the screen as visual feedback, and the single identification task is judged complete;
and step 5, after the computer finishes the single identification task, returning to step 2: the switch interface is restored, the stimulation units disappear and the switch units reappear, and steps 2, 3 and 4 are repeated for the next target identification task.
2. The asynchronous vision-induced brain-computer interface method based on 'switch to target' according to claim 1, characterized in that: in step 3, identifying the target gazed at by the user with the canonical correlation analysis (CCA) algorithm and the linear discriminant analysis (LDA) algorithm specifically comprises the following operations: the raw electroencephalogram signals are first band-pass filtered and notch filtered; the correlation coefficients between the electroencephalogram data and sinusoidal reference signals at the different reversal frequencies are then obtained with the CCA algorithm; finally the correlation coefficients are fed into the LDA algorithm for classification.
CN201910492853.4A 2019-06-06 2019-06-06 Asynchronous vision-induced brain-computer interface method based on 'switch to target' Active CN110262657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910492853.4A CN110262657B (en) 2019-06-06 2019-06-06 Asynchronous vision-induced brain-computer interface method based on 'switch to target'

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910492853.4A CN110262657B (en) 2019-06-06 2019-06-06 Asynchronous vision-induced brain-computer interface method based on 'switch to target'

Publications (2)

Publication Number Publication Date
CN110262657A CN110262657A (en) 2019-09-20
CN110262657B (en) 2020-05-15

Family

ID=67917192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910492853.4A Active CN110262657B (en) 2019-06-06 2019-06-06 Asynchronous vision-induced brain-computer interface method based on 'switch to target'

Country Status (1)

Country Link
CN (1) CN110262657B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113419628A (en) * 2021-06-24 2021-09-21 西安交通大学 Brain-computer interface method with dynamically-variable visual target based on eye movement tracking

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101571748A (en) * 2009-06-04 2009-11-04 浙江大学 Brain-computer interactive system based on reinforced realization
CN102778949A (en) * 2012-06-14 2012-11-14 天津大学 Brain-computer interface method based on SSVEP (Steady State Visual Evoked Potential) blocking and P300 bicharacteristics
CN102799274A (en) * 2012-07-17 2012-11-28 华南理工大学 Method of asynchronous brain switch based on steady state visual evoked potentials
KR20130141904A (en) * 2012-06-18 2013-12-27 서울대학교산학협력단 Half-field ssvep based bci system and motion method thereof
CN105511622A (en) * 2015-12-14 2016-04-20 华南理工大学 Thresholdless brain switch method based on P300 electroencephalogram mode
CN106681494A (en) * 2016-12-07 2017-05-17 华南理工大学 Environment control method based on brain computer interface
CN108958489A (en) * 2018-07-20 2018-12-07 东南大学 A kind of interesting image regions Rapid Detection method based on brain electricity and eye tracker
CN109508094A (en) * 2018-12-11 2019-03-22 西安交通大学 A kind of vision inducting brain-machine interface method of the asynchronous eye movement switch of combination
CN109828664A (en) * 2019-01-15 2019-05-31 西安交通大学 Steady State Visual Evoked Potential brain-machine interface method based on sense feedback dynamic adjustment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on brain-computer interface paradigms based on steady-state visual evoked potentials and their signal processing methods; Xu Guanghua et al.; Journal of Xi'an Jiaotong University; 2015-03-18; full text *

Also Published As

Publication number Publication date
CN110262657A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN109271020B (en) Eye tracking-based steady-state vision-evoked brain-computer interface performance evaluation method
Trejo et al. Brain-computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials
Zhou et al. A hybrid asynchronous brain-computer interface combining SSVEP and EOG signals
CN100366215C (en) Control method and system and sense organs test method and system based on electrical steady induced response
Kim et al. Quantitative evaluation of a low-cost noninvasive hybrid interface based on EEG and eye movement
Fukuma et al. Real-time control of a neuroprosthetic hand by magnetoencephalographic signals from paralysed patients
CN109508094B (en) Visual induction brain-computer interface method combined with asynchronous eye movement switch
US20190307356A1 (en) Brain-computer interface for user's visual focus detection
US20130130799A1 (en) Brain-computer interfaces and use thereof
Valeriani et al. Enhancement of group perception via a collaborative brain–computer interface
CN109828664B (en) Steady-state visual evoked potential brain-computer interface method based on dynamic regulation of sensory feedback
Wittevrongel et al. Faster p300 classifier training using spatiotemporal beamforming
Ge et al. Training-free steady-state visual evoked potential brain–computer interface based on filter bank canonical correlation analysis and spatiotemporal beamforming decoding
Falzon et al. Complex-valued spatial filters for SSVEP-based BCIs with phase coding
Gembler et al. Exploring the possibilities and limitations of multitarget SSVEP-based BCI applications
CN110262657B (en) 2019-06-06 2020-05-15 Asynchronous vision-induced brain-computer interface method based on 'switch to target'
Lv et al. Design and implementation of an eye gesture perception system based on electrooculography
Zhang et al. Design and implementation of an asynchronous BCI system with alpha rhythm and SSVEP
Chumerin et al. Processing and Decoding Steady-State Visual Evoked Potentials for Brain-Computer Interfaces
Lin et al. Eye gestures recognition technology in Human-computer Interaction
CN108491792A (en) Office scene human-computer interaction Activity recognition method based on electro-ocular signal
TWI699672B (en) Method and device for recognizing visual control commands by brain wave signal
CN111158471A (en) Human-computer interaction method based on eye movement and brain-computer interface technology
Wilmott et al. Transsaccadic integration of visual information is predictive, attention-based, and spatially precise
Wang et al. Research on a spatial–temporal characterisation of blink-triggered eye control interactions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant