CN110347242A - Audiovisual brain-computer interface spelling system and method based on spatial and semantic congruence - Google Patents
Audiovisual brain-computer interface spelling system and method based on spatial and semantic congruence
- Publication number
- CN110347242A CN110347242A CN201910455137.9A CN201910455137A CN110347242A CN 110347242 A CN110347242 A CN 110347242A CN 201910455137 A CN201910455137 A CN 201910455137A CN 110347242 A CN110347242 A CN 110347242A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
The present invention relates to an audiovisual brain-computer interface spelling system and method based on spatial and semantic congruence. In the spelling system, an audiovisual stimulation device uses an audiovisual experimental paradigm based on spatial and semantic congruence to present visual and auditory stimuli for a target character to a subject; an EEG signal acquisition device records the subject's EEG signals; a computer preprocesses the EEG data to obtain preprocessed EEG data, performs feature extraction and classification on the preprocessed data to obtain a classification result, and sends the result to an output display unit. By combining the spatial and semantic features of the visual and auditory stimuli, the invention achieves spatial and semantic consistency of the audiovisual bimodal stimulation, which can evoke larger-amplitude ERPs than a single visual or auditory stimulus and thereby improves the performance of the brain-computer interface.
Description
Technical field
The present invention relates to the field of brain-computer interface technology, and in particular to an audiovisual brain-computer interface spelling system and method based on spatial and semantic congruence.
Background art
A brain-computer interface (BCI) is a novel system for direct interaction between the brain and a computer. For patients with muscular atrophy, particularly patients with amyotrophic lateral sclerosis, it establishes a direct communication channel between the brain and an external device. Current BCI systems commonly record EEG signals containing event-related potentials (ERPs) at the scalp and, by analyzing these signals, recognize commands or output characters. The P300 speller is a character-spelling system based on the oddball event-related potential. Existing P300 spelling systems are mainly of three kinds: visual, auditory, and tactile. Although visual P300 spellers perform much better than auditory or tactile ones, their performance is still not satisfactory and needs improvement; moreover, visual P300 spellers are of limited use for patients whose control of the eye muscles is impaired or degenerates over time. It is therefore of great significance to propose an audiovisual bimodal P300 spelling system whose performance exceeds that of the visual P300 speller and which has broader applicability.
Summary of the invention
In view of the unsatisfactory performance of current visual brain-computer interfaces, and the limited applicability of visual BCI spelling systems to patients whose eye-muscle control is impaired or degenerating over time, the present invention provides an audiovisual brain-computer interface spelling system and method based on spatial and semantic congruence. The proposed system not only improves the performance of the visual BCI spelling system but also improves its general applicability: it is suitable both for patients with impaired hearing and for patients with impaired vision. Research has shown that the spatial and semantic consistency of visual and auditory stimuli affects the integration of audiovisual bimodal stimulation, i.e., it can evoke ERP waveforms of increased amplitude. The present invention therefore applies spatially and semantically congruent audiovisual stimulation to a P300 character speller to construct an audiovisual bimodal spelling system, improving both the performance and the general applicability of the visual P300 speller.
To solve the above problems, the invention adopts the following technical scheme:

An audiovisual brain-computer interface spelling system based on spatial and semantic congruence, comprising an audiovisual stimulation device, an EEG signal acquisition device, and a computer.

The audiovisual stimulation device uses an audiovisual experimental paradigm based on spatial and semantic congruence to present visual and auditory stimuli for a target character to a subject.

The EEG signal acquisition device records the subject's EEG signals and sends the collected EEG data to the computer.

The computer preprocesses the EEG data to obtain preprocessed EEG data, then performs feature extraction and classification on the preprocessed data to obtain a classification result, and sends the classification result to an output display unit, which outputs the character.
Correspondingly, the present invention also provides an audiovisual brain-computer interface spelling method based on spatial and semantic congruence, comprising the following steps:

presenting visual and auditory stimuli for a target character to a subject using an audiovisual experimental paradigm based on spatial and semantic congruence;

acquiring the subject's EEG signals to obtain EEG data;

preprocessing the EEG data to obtain preprocessed EEG data;

performing feature extraction and classification on the preprocessed EEG data to obtain a classification result, and sending the classification result to an output display unit, which outputs the character.
Compared with the prior art, the present invention has the following technical effects:

The proposed audiovisual spelling system and method use an audiovisual experimental paradigm based on spatial and semantic congruence to present visual and auditory stimuli for the target character to be spelled. By combining the spatial and semantic features of the visual and auditory stimuli, spatial and semantic consistency of the audiovisual bimodal stimulation is achieved. Compared with a single visual or auditory stimulus, the audiovisual bimodal stimulation of the invention can evoke larger-amplitude ERPs and thereby improve the performance of the brain-computer interface. Moreover, the proposed system is suitable both for patients with impaired hearing and for patients with impaired vision, and thus has stronger general applicability.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the audiovisual brain-computer interface spelling system based on spatial and semantic congruence in one embodiment;

Fig. 2 is a flow diagram of the audiovisual stimulation device presenting visual and auditory stimuli for a target character to a subject using the audiovisual experimental paradigm based on spatial and semantic congruence;

Fig. 3 is a schematic diagram of the initial character interface when the target character is highlighted;

Fig. 4 is a schematic diagram of the group-region interface during flashing display;

Fig. 5 is a schematic diagram of the sub-region interface during flashing display.
Specific embodiments
The technical solution of the present invention is described in detail below with reference to the drawings and preferred embodiments.
In one embodiment, as shown in Fig. 1, the present invention discloses an audiovisual brain-computer interface spelling system based on spatial and semantic congruence. The system comprises an audiovisual stimulation device, an EEG signal acquisition device, and a computer. The audiovisual stimulation device uses an audiovisual experimental paradigm based on spatial and semantic congruence to present visual and auditory stimuli for a target character to the subject; the EEG signal acquisition device records the subject's EEG signals and sends the collected EEG data to the computer; the computer preprocesses the EEG data, performs feature extraction and classification on the preprocessed data to obtain a classification result, and sends the result to an output display unit, which outputs the character.
Specifically, in this embodiment, the audiovisual stimulation device presents visual and auditory stimuli for the target character the subject wants to spell, so that the subject produces corresponding EEG signals. When generating these stimuli, the device uses an experimental paradigm based on spatial and semantic congruence: spatially, the sound-source position of the target character is kept consistent with the position at which the character is visually presented; semantically, the auditory pronunciation of the target character is kept consistent with its visual presentation. This achieves spatial-semantic consistency of the audiovisual bimodal stimulation and evokes larger-amplitude ERPs.
The EEG signal acquisition device records the subject's EEG signals and sends the collected data to the computer. In this embodiment, the acquisition device may use the Scan4.5 digital acquisition system developed by Neuroscan, with Ag/AgCl electrodes. Thirty-two electrodes are selected to cover the brain areas related to audiovisual integration, attention, and cognition. The reference electrode REF is placed at the right mastoid, AFz serves as the ground electrode, and VEOG and HEOG record vertical and horizontal eye movements, respectively; electrode impedance is kept below 5 kΩ. The EEG sampling rate is 250 Hz, with a 0.5–100 Hz bandpass.
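The acquisition settings above can be gathered into a single configuration structure; the sketch below is purely illustrative, and the field names are assumptions rather than part of any Neuroscan API:

```python
# Illustrative container for the acquisition parameters described above.
EEG_CONFIG = {
    "amplifier": "Neuroscan Scan4.5",
    "electrode_type": "Ag/AgCl",
    "n_channels": 32,
    "reference": "right mastoid (REF)",
    "ground": "AFz",
    "eog_channels": ["VEOG", "HEOG"],  # vertical / horizontal eye movement
    "max_impedance_kohm": 5,
    "sampling_rate_hz": 250,
    "hardware_bandpass_hz": (0.5, 100.0),
}
```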
The computer preprocesses the EEG data collected by the acquisition device. Preprocessing consists of, in order: ocular-artifact correction, segmentation, artifact rejection, and filtering. Ocular correction removes the interference of eye blinks from the EEG. The corrected data are then segmented: for each stimulus, a 900 ms epoch is extracted, from 100 ms before stimulus onset to 800 ms after it, and the segmented data are baseline-corrected using the 100 ms pre-stimulus interval. The segmented data then undergo artifact rejection, for example discarding epochs whose amplitude exceeds ±80 μV. Finally, the artifact-free data are bandpass filtered, for example with a 0.1–30 Hz or 0.1–24 Hz bandpass.
The computer performs feature extraction and classification on the preprocessed EEG data, obtains a classification result, and sends it to the output display unit (a display), which outputs the character so that the user can communicate directly with the outside world. EEG features mainly comprise temporal features and spatial features: temporal features refer to the waveform amplitude within a time window after the stimulus, corresponding to time sampling points; spatial features refer to the active brain regions at a given time after the stimulus, i.e., the corresponding electrode numbers. Feature extraction selects the sampling points and electrodes that are most useful for classification; the present invention uses principal component analysis for feature extraction. The classification algorithm used is Bayesian linear discriminant analysis. The goal of linear discriminant analysis is to separate data of different classes with a hyperplane; for two-class feature vectors, the class depends on which side of the hyperplane the vector falls. Owing to its low computational cost and good classification results, linear discriminant analysis has been widely used in BCI systems. Bayesian linear discriminant analysis is a regularized variant that prevents overfitting on high-dimensional data; the degree of regularization is estimated quickly and automatically from the training data, without validation, and its main idea is regression within a Bayesian framework.
The audiovisual spelling system proposed in this embodiment uses the audiovisual experimental paradigm based on spatial and semantic congruence to present visual and auditory stimuli for the target character to be spelled. By combining the spatial and semantic features of the visual and auditory stimuli, spatial and semantic consistency of the audiovisual bimodal stimulation is achieved. Compared with a single visual or auditory stimulus, the audiovisual bimodal stimulation of this embodiment can evoke larger-amplitude ERPs and thereby improve BCI performance; moreover, the proposed system is suitable both for patients with impaired hearing and for patients with impaired vision, and thus has stronger general applicability.
As a specific embodiment, the audiovisual stimulation device comprises a display module and a sound module. The display module, which shows the initial character interface, the group-region interface, and the sub-region interface, may be implemented with an LED display; the sound module may be implemented with headphones or earphones. The process by which the device presents visual and auditory stimuli for the target character using the audiovisual paradigm based on spatial and semantic congruence, shown in Fig. 2, comprises the following steps:
Step 1: The display module shows the initial character interface, which contains group-region blocks arranged in a matrix; that is, the interface contains several group-region blocks distributed in matrix form. Each group-region block displays several characters (e.g., English letters, digits, special characters, or others), and each block has its own region number (the blocks may be numbered with digits, Chinese characters, or in other ways).
Step 2: The target character the subject wants to spell is highlighted. For example, the display module may use a green filled box as the background of the target character; in Fig. 3 the target character is "A", highlighted by a green background.
Step 3: After the target character is highlighted, the initial character interface switches to the group-region interface and enters group-flashing mode. In the group-region interface, each group-region block flashes its region number at random with equal probability; for example, as shown in Fig. 4, the block with region number 1 flashes "1". Each block flashes its number at least once, with the number of repetitions ranging from 1 to 5. Whenever a block flashes its region number, the sound module utters the auditory pronunciation of that number; for example, when the block with region number 1 flashes "1", the sound module utters the pronunciation "yi" of region number 1, as shown in Fig. 4. The position at which a block flashes its number corresponds to the audio channel through which the sound module utters that number, so that spatially the sound-source position of the region number is consistent with its visual position, and semantically the auditory pronunciation is consistent with the visual presentation, achieving audiovisual spatial congruence and audiovisual semantic congruence.
Step 4: After the group-region blocks have flashed their numbers and the sound module has uttered them, the group-region interface switches to the sub-region interface and enters sub-flashing mode; the flashing scheme in sub-flashing mode is the same as in group-flashing mode. The sub-region interface contains sub-region blocks arranged in a matrix; that is, the interface contains several sub-region blocks distributed in matrix form. The group-region block containing the target character is called the target group-region block; as shown in Fig. 3, the target block containing "A" also contains the characters "B", "C", "D", "E", and "F". The sub-region interface expands the characters of the target block across the sub-region blocks, each sub-region block showing one character, as shown in Fig. 5. Preferably, to improve stimulation efficiency and make full use of the sub-region interface, the number of characters in the target block equals the number of sub-region blocks, and no two sub-region blocks show the same character.
Step 5: In the sub-region interface, each sub-region block flashes its character at random with equal probability; for example, as shown in Fig. 5, the block containing the target character "A" flashes "A". Each block flashes its character at least once, with the number of repetitions ranging from 1 to 5. Whenever a block flashes its character, the sound module utters the auditory pronunciation of that character; for example, when the block containing "A" flashes, the sound module utters the pronunciation "ei" of "A", as shown in Fig. 5. The position at which a block flashes its character corresponds to the audio channel through which the sound module utters it, so that spatially the sound-source position of the character is consistent with its visual position, and semantically the auditory pronunciation is consistent with the visual presentation, again achieving audiovisual spatial and semantic congruence.
At this point the output of one target character is complete; by repeating steps 1 to 5, the display module can output multiple target characters.
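The two-stage selection in steps 1 to 5 can be sketched as follows. The score dictionaries and `layout` are hypothetical stand-ins for the per-flash classifier outputs and for a Fig. 3-style character layout:

```python
# Each flashed item accumulates classifier scores over its 1-5 repetitions;
# the item whose flashes evoked the largest mean ERP score is selected.
def select(scores):
    """scores: dict mapping item -> list of classifier scores for its flashes."""
    return max(scores, key=lambda k: sum(scores[k]) / len(scores[k]))

def spell_one(group_scores, sub_scores, layout):
    """Stage 1 picks a 6-character group block; stage 2 picks the position
    of the character inside that block."""
    group = select(group_scores)
    sub = select(sub_scores)
    return layout[group][sub]
```

For instance, if the stage-1 scores favor group 1 and the stage-2 scores favor position 0, the spelled character is the first character of block 1.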
Further, as shown in Fig. 3, the matrix of group-region blocks comprises 6 blocks arranged in a 3 × 2 array. In the initial character interface, each group-region block shows 6 characters, and no two blocks display the same characters.
Further, the sound module comprises a left-ear pronunciation submodule and a right-ear pronunciation submodule:

when any group-region block in the first column of the 3 × 2 array flashes its region number on the group-region interface, the pronunciation of that number is emitted only by the left-ear submodule (e.g., the left earphone);

when any group-region block in the second column of the 3 × 2 array flashes its region number on the group-region interface, the pronunciation is emitted only by the right-ear submodule (e.g., the right earphone);

when any sub-region block in the first column of the 3 × 2 array flashes its character on the sub-region interface, the pronunciation of that character is emitted only by the left-ear submodule;

when any sub-region block in the second column flashes its character, the pronunciation is emitted only by the right-ear submodule.
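The spatial-congruence rule above can be sketched as a simple channel-gain mapping; the assumption that the left column carries region numbers 1–3 follows the later description of the group-flashing mode:

```python
# Left column of the 3x2 array (region numbers 1-3 in the described numbering)
# is voiced only through the left earphone; right column (4-6) only through
# the right earphone.
LEFT_COLUMN_NUMBERS = {1, 2, 3}

def audio_gains(region_number):
    """Return (left_gain, right_gain) for the spoken region number."""
    if region_number in LEFT_COLUMN_NUMBERS:
        return (1.0, 0.0)
    return (0.0, 1.0)
```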
Further, in the initial character interface, the 36 characters shown by the 6 group-region blocks consist of 26 English letters, 9 digits, and 1 special character.
Figs. 3–5 show one possible distribution of the 36 characters, one flashing scheme for the region numbers, and one way of expanding the characters of a group-region block across the sub-region blocks. Those skilled in the art may make other variations on the schemes of Figs. 3–5, and such variations fall within the protection scope of the invention.
When the target character is highlighted in the initial character interface as shown in Fig. 3, the 6 group-region blocks are arranged in a 3 × 2 array and numbered 1–6 from left to right and top to bottom (they may also be numbered from top to bottom and then left to right). The block with region number 1 holds 6 characters arranged clockwise: "A", "B", "C", "D", "E", and "F". Likewise, block 2 holds "G", "H", "I", "J", "K", and "L"; block 3 holds "S", "T", "U", "V", "W", and "X"; block 4 holds "5", "6", "7", "8", "9", and the special character "-"; block 5 holds "Y", "Z", "1", "2", "3", and "4"; and block 6 holds "M", "N", "O", "P", "Q", and "R". In Fig. 3 the target character is "A", and the region number of the target block is "1".
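The Fig. 3 assignment can be transcribed as a lookup table. The flattening of each block's clockwise arrangement into a plain list is an assumption about reading order made for illustration:

```python
# Characters of each numbered group block, as enumerated in the text.
GROUP_LAYOUT = {
    1: ["A", "B", "C", "D", "E", "F"],
    2: ["G", "H", "I", "J", "K", "L"],
    3: ["S", "T", "U", "V", "W", "X"],
    4: ["5", "6", "7", "8", "9", "-"],
    5: ["Y", "Z", "1", "2", "3", "4"],
    6: ["M", "N", "O", "P", "Q", "R"],
}

def find_character(ch):
    """Return (region_number, position) of a character, as needed by the
    two-stage group/sub-region selection."""
    for region, chars in GROUP_LAYOUT.items():
        if ch in chars:
            return region, chars.index(ch)
    raise KeyError(ch)
```

The table covers exactly the 36 characters of the interface: 26 letters, 9 digits, and "-".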
In the flashing display of the group-region interface shown in Fig. 4, the group-region block flashes the region number "1" while the sound module utters its pronunciation "yi"; while the block with region number 1 is flashing, the display of the other blocks remains unchanged.

In the flashing display of the sub-region interface shown in Fig. 5, the characters "A", "B", "C", "D", "E", and "F" shown in the target group-region block are expanded across the sub-region blocks from left to right and top to bottom; each sub-region block shows exactly one of these characters, and no two blocks show the same character. When the sub-region block containing the target character "A" flashes "A", the sound module utters the pronunciation "ei" of "A".
As a specific embodiment, after the display module highlights the target character for 1 second, the display returns to the initial character interface.

After the initial character interface has been displayed continuously for 1 second, it switches to the group-region interface.

After the group-region interface has been displayed continuously for 1 second, each group-region block begins to flash its region number at random with equal probability; after the flashing display, the interface returns to the group-region interface. Preferably, to evoke larger EEG responses while keeping the subject comfortable, the background color of the flashing group-region block is green and that of the non-flashing blocks is blue.

After the group-region interface has again been displayed continuously for 1 second, it switches to the sub-region interface.

After the sub-region interface has been displayed continuously for 1 second, each sub-region block begins to flash its character at random with equal probability; after the flashing display, the interface returns to the sub-region interface. Preferably, for the same reasons, the background color of the flashing sub-region block is green and that of the non-flashing blocks is blue.
Specifically, in this embodiment, the working process of the display module comprises the following steps:
(1) Target character prompt
The display module shows the character initial interface and highlights the target character with a green box for 1 second; the display then returns to the character initial interface for 1 second before entering group region flash mode.
(2) Group region flash mode
The 36 characters (26 English letter characters, 9 numeric characters, and 1 special character) are divided into 6 group regions and displayed separately, as shown in Fig. 3, which illustrates one possible division.
1) Achieving spatial-semantic consistency of the group region audio-visual stimulation
The 6 group regions are arranged in two columns, left and right, with 3 group regions per column, and are numbered 1-6 from left to right and top to bottom (they may also be numbered from top to bottom and left to right). Spatial-semantic consistency of the group region audio-visual stimulation is achieved as follows: when the region number of a group region block on the left side of the group region interface (any of region numbers 1-3) is presented, the left-ear pronunciation submodule emits the auditory pronunciation of that region number; when the region number of a group region block on the right side of the interface (any of region numbers 4-6) is presented, the right-ear pronunciation submodule emits it, thereby guaranteeing the spatial-semantic consistency of the group regions.
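The left/right routing rule described above can be sketched in a few lines. This is a hypothetical helper, not code from the patent; the function name and the "left"/"right" channel labels are illustrative assumptions:

```python
# Illustrative sketch of the spatial mapping described above: region
# numbers 1-3 occupy the left column of the 3 x 2 layout, 4-6 the right
# column, and the auditory pronunciation is routed to the matching ear.

def ear_for_region(region_number: int) -> str:
    """Return which ear should play the pronunciation of a region number."""
    if region_number in (1, 2, 3):   # left column -> left-ear submodule
        return "left"
    if region_number in (4, 5, 6):   # right column -> right-ear submodule
        return "right"
    raise ValueError(f"unknown region number: {region_number}")
```

The same mapping applies in subregion flash mode, with characters "A"-"C" on the left and "D"-"F" on the right.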
2) Group region flashing mode
After the group region interface has been continuously displayed for 1 second, each group region block begins to flash its corresponding region number at random with equal probability. The presentation time of a region number ranges from 180 ms to 250 ms, preferably 200 ms; the inter-stimulus interval (the time between two consecutive flashes of any two group region blocks) ranges from 50 ms to 100 ms, preferably 50 ms. To evoke larger EEG signals while keeping the subject comfortable, green is chosen as the background color of a group region block while it flashes, and blue as its background color while it does not.
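The flashing scheme above, an equiprobable random order with a 200 ms presentation and a 50 ms interval, can be sketched as follows. The function name and the schedule representation are illustrative assumptions, not part of the patent:

```python
import random

def make_flash_schedule(num_blocks: int = 6, repetitions: int = 1,
                        stim_ms: int = 200, isi_ms: int = 50):
    """Build a (block_id, onset_ms) schedule: every block flashes once per
    repetition in random order (equal probability), with stim_ms of
    presentation time and isi_ms between consecutive flashes."""
    schedule, t = [], 0
    for _ in range(repetitions):
        order = list(range(1, num_blocks + 1))
        random.shuffle(order)          # equiprobable random order
        for block in order:
            schedule.append((block, t))
            t += stim_ms + isi_ms      # 200 ms flash + 50 ms interval
    return schedule
```

With the preferred timings, consecutive onsets are 250 ms apart, and each round of 6 flashes lasts 1.5 seconds.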
(3) Switching from the group region interface to the sub-region interface
When the group region flashing ends, the display reverts to the group region interface for 1 second of continuous display, and the group region interface then switches to the sub-region interface. The subregion blocks are the expansion of the six characters in the target group region block containing the target character, one character per region, i.e., six subregions. After the sub-region interface has been displayed for 1 second, subregion flash mode begins.
(4) Subregion flash mode
1) Achieving spatial-semantic consistency of the subregion audio-visual stimulation
The 6 subregions are arranged in two columns, left and right, with 3 subregions per column. Spatial-semantic consistency of the subregion audio-visual stimulation is achieved as follows: when a character in a subregion block on the left side of the sub-region interface (any of characters "A", "B", and "C") flashes, the left-ear pronunciation submodule emits the auditory pronunciation of that character; when a character in a subregion block on the right side of the interface (any of characters "D", "E", and "F") flashes, the right-ear pronunciation submodule emits it, thereby guaranteeing the spatial-semantic consistency of the subregions.
2) Subregion flashing mode
After the sub-region interface has been continuously displayed for 1 second, each subregion block begins to flash its corresponding character at random with equal probability. To evoke larger EEG signals while keeping the subject comfortable, green is chosen as the background color of a subregion block while it flashes, and blue as its background color while it does not. After the target character has flashed, the display reverts to the sub-region interface.
In the present invention, a target character is output by locating it through group region flash mode and subregion flash mode; the output of the next target character cycles through the same process, i.e., target character prompt, entering group region flash mode, switching from the group region interface to the sub-region interface, and entering subregion flash mode, in that order.
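The full per-character cycle described above can be summarized as a simple phase sequence. This is a minimal sketch: the phase names and the callback interface are assumptions, the 1-second durations follow the embodiment, and the flash phases have no fixed duration because it depends on the number of repetitions:

```python
# Sketch of one spelling trial: prompt -> group flash -> interface
# switch -> subregion flash, with the 1-second holds of the embodiment.

TRIAL_PHASES = [
    ("prompt_target", 1.0),        # highlight target on initial interface
    ("initial_interface", 1.0),    # restore the character initial interface
    ("group_interface", 1.0),      # show the group region interface
    ("group_flash", None),         # region numbers flash (duration varies)
    ("group_interface", 1.0),      # restore the group region interface
    ("subregion_interface", 1.0),  # expand target group onto subregions
    ("subregion_flash", None),     # characters flash (duration varies)
]

def run_trial(execute_phase):
    """Run one target-character trial by invoking execute_phase(name, secs)
    for each phase in order; the caller supplies the display/audio I/O."""
    for name, seconds in TRIAL_PHASES:
        execute_phase(name, seconds)
```

Outputting a sentence then amounts to calling `run_trial` once per target character.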
In another embodiment, the present invention proposes an audio-visual brain-computer interface spelling method based on spatial and semantic congruence, comprising the following steps:
using an audio-visual experimental paradigm based on spatial and semantic congruence to generate visual and auditory stimulation about the target character for the subject;
acquiring the subject's EEG signals to obtain EEG signal data;
performing data preprocessing on the EEG signal data to obtain preprocessed EEG signal data;
performing feature extraction and classification on the preprocessed EEG signal data to obtain classification results, and sending the classification results to an output display unit, the output display unit being used to output characters.
Specifically, in this embodiment, the experimental paradigm based on spatial and semantic congruence generates visual and auditory stimulation about the target character for the subject. The paradigm keeps the sound-source position of the target character spatially consistent with the position of its visual presentation, and keeps the auditory pronunciation of the target character semantically consistent with its visual presentation, thereby achieving spatial-semantic consistency of the audio-visual bimodal stimulation and evoking larger-amplitude ERPs.
After the visual and auditory stimulation about the target character has been generated for the subject, the subject's EEG signals are acquired. In this embodiment, EEG acquisition can be realized with the Scan4.5 digital acquisition system developed by Neuroscan. The system uses Ag/AgCl electrodes, and 32 electrodes covering the brain areas related to audio-visual integration, attention, and cognition are selected; the reference electrode REF is placed at the right mastoid, AFz serves as the ground electrode, VEOG and HEOG record vertical and horizontal eye movements respectively, and electrode impedance is kept below 5 kΩ. The EEG sampling frequency is 250 Hz, with a 0.5-100 Hz bandpass.
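For reference, the acquisition settings quoted above can be collected into a single configuration sketch. The dictionary keys are illustrative and do not correspond to any Neuroscan API:

```python
# Acquisition settings of this embodiment, gathered as plain data
# (key names are assumptions for illustration only).
ACQ_CONFIG = {
    "amplifier": "Neuroscan Scan4.5",
    "electrode_material": "Ag/AgCl",
    "n_channels": 32,                   # audio-visual/attention brain areas
    "reference": "right mastoid (REF)",
    "ground": "AFz",
    "eog_channels": ["VEOG", "HEOG"],   # vertical / horizontal eye movement
    "max_impedance_ohm": 5_000,         # impedance kept below 5 kOhm
    "sampling_rate_hz": 250,
    "hardware_bandpass_hz": (0.5, 100.0),
}
```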
After the EEG signal data have been acquired, they are preprocessed. Data preprocessing consists of, in order, ocular-artifact (EOG) removal, segmentation, artifact rejection, and filtering. EOG removal eliminates blink interference from the EEG signals. The EEG data after EOG removal are segmented into epochs spanning from 100 ms before stimulus onset to 800 ms after it, 900 ms in total, and each segmented epoch is baseline-corrected using the 100 ms pre-stimulus interval. Artifact rejection is then applied to the segmented data, for example rejecting EEG data exceeding ±80 μV. Finally, the artifact-free EEG data are bandpass-filtered, for example with a 0.1 Hz to 30 Hz or 0.1 Hz to 24 Hz passband.
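A minimal sketch of the per-epoch preprocessing described above, assuming a 250 Hz sampling rate and epochs already cut from 100 ms before to 800 ms after the stimulus; the EOG-removal step is omitted because the patent does not specify its algorithm, and the function name is an assumption:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_epoch(eeg, fs=250, pre_ms=100, reject_uv=80.0,
                     band=(0.1, 30.0)):
    """Baseline-correct, artifact-reject, and bandpass-filter one epoch.

    eeg: (channels, samples) array in microvolts spanning -100..800 ms.
    Returns the filtered epoch, or None if it exceeds +/-80 uV (rejected).
    """
    n_pre = int(pre_ms * fs / 1000)              # 100 ms pre-stimulus samples
    baseline = eeg[:, :n_pre].mean(axis=1, keepdims=True)
    eeg = eeg - baseline                         # baseline correction
    if np.abs(eeg).max() > reject_uv:            # +/-80 uV artifact rejection
        return None
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)           # 0.1-30 Hz zero-phase filter
```

At 250 Hz the 900 ms epoch is 225 samples per channel; `filtfilt` is used so the filtering adds no phase delay to the ERP waveform.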
Next, feature extraction and classification are performed on the preprocessed EEG signal data to obtain classification results, which are sent to the output display unit; the output display unit outputs the character so that the user can communicate directly with the outside world. EEG features mainly comprise temporal features and spatial features: temporal features are the waveform amplitudes within a post-stimulus time window, corresponding to time samples; spatial features are the active brain regions at a given post-stimulus time point, corresponding to electrode numbers. Feature extraction selects the time samples and electrodes that are most useful for classification; the present invention uses principal component analysis for feature extraction. The classification algorithm used in the present invention is Bayesian linear discriminant analysis. The goal of linear discriminant analysis is to separate data of different classes with a hyperplane; in a two-class problem, the class of a feature vector depends on which side of the hyperplane the vector lies. Owing to its low computational cost and good classification results, linear discriminant analysis is widely used in many brain-computer interface systems. Bayesian linear discriminant analysis adds adjustable regularization to prevent overfitting on high-dimensional data; the degree of regularization can be estimated rapidly and automatically from the training data, without a validation set, and its main idea is regression within a Bayesian framework.
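The feature-extraction and classification stage can be sketched with scikit-learn. scikit-learn has no Bayesian LDA, so shrinkage LDA stands in here for the same regularization idea (an assumption, not the patent's algorithm); PCA performs the feature extraction as in the patent, and all data and dimensions are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def build_classifier(n_components=20):
    """PCA feature extraction followed by regularized two-class LDA."""
    return make_pipeline(
        PCA(n_components=n_components),  # keep the strongest components
        LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
    )

# Toy usage: each row is a flattened (time sample x electrode) feature
# vector; labels mark target (1) vs non-target (0) flashes.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 225))
y = rng.integers(0, 2, size=120)
clf = build_classifier().fit(X, y)
pred = clf.predict(X[:5])
```

The flash (group region or subregion) whose epochs score highest as "target" determines the selected region number or character.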
The audio-visual brain-computer interface spelling method proposed in this embodiment uses the audio-visual experimental paradigm based on spatial and semantic congruence to generate visual and auditory stimulation about the target character the user wishes to spell. By combining the spatial and semantic features of the visual and auditory stimulation, spatial and semantic consistency of the audio-visual bimodal stimulation is achieved; compared with visual or auditory stimulation alone, the audio-visual bimodal stimulation of this embodiment can evoke larger-amplitude ERPs and thereby improve the performance of the brain-computer interface.
Further, the process by which the audio-visual experimental paradigm based on spatial and semantic congruence generates visual and auditory stimulation for the target character the subject wishes to spell comprises the following steps, as shown in Fig. 2:
Step 1: display the character initial interface, which comprises matrix-form group region blocks, i.e., the character initial interface contains several group region blocks distributed in matrix form. Several characters (e.g., English letter characters, numeric characters, special characters, or other characters) are displayed in each group region block, and each group region block has its own region number (group region blocks may be numbered with digits, Chinese characters, or in other ways).
Step 2: highlight the target character the subject wishes to spell. For example, a solid green box can be used as the background color to highlight the target character; in Fig. 3, the target character is "A", highlighted by a green background.
Step 3: after the target character has been highlighted, the character initial interface switches to the group region interface and group region flash mode begins. In the group region interface, each group region block flashes its corresponding region number at random with equal probability; for example, as shown in Fig. 4, the group region block with region number 1 flashes "1". Each group region block flashes its region number at least once, with 1 to 5 repetitions. Whenever a group region block flashes its region number, the voice module emits the auditory pronunciation of that number; for example, when the block with region number 1 flashes "1", the voice module emits the auditory pronunciation "yi" of region number 1, as shown in Fig. 4. The position at which a group region block flashes its region number corresponds to the audio channel through which the voice module emits that number's pronunciation, so that the sound-source position of the region number spatially coincides with its visual presentation position, and the auditory pronunciation of the region number semantically matches its visual presentation, thus achieving audio-visual spatial consistency and audio-visual semantic consistency.
Step 4: after the group region blocks have flashed their region numbers and the voice module has emitted the corresponding pronunciations, the group region interface switches to the sub-region interface and subregion flash mode begins; the flashing scheme in subregion flash mode is the same as in group region flash mode. The sub-region interface comprises matrix-form subregion blocks, i.e., several subregion blocks distributed in matrix form. The group region block containing the target character is called the target group region block; as shown in Fig. 3, the target group region block containing target character "A" also contains characters "B", "C", "D", "E", and "F". The sub-region interface expands the characters displayed in the target group region block onto the subregion blocks, as shown in Fig. 5, with each subregion block displaying one character. Preferably, to improve the efficiency of the audio-visual stimulation and make full use of the sub-region interface, the number of characters displayed in the target group region block equals the number of subregion blocks on the sub-region interface, and no two subregion blocks display the same character.
Step 5: in the sub-region interface, each subregion block flashes its corresponding character at random with equal probability; for example, as shown in Fig. 5, the subregion block containing target character "A" flashes "A". Each subregion block flashes its character at least once, with 1 to 5 repetitions. Whenever a subregion block flashes its character, the voice module emits the auditory pronunciation of that character; for example, when the subregion block containing target character "A" flashes "A", the voice module emits the auditory pronunciation "ei" of target character "A", as shown in Fig. 5. The position at which a subregion block flashes its character corresponds to the audio channel through which the voice module emits that character's pronunciation, so that the sound-source position of the character spatially coincides with its visual presentation position, and the auditory pronunciation of the character semantically matches its visual presentation, thus achieving audio-visual spatial consistency and audio-visual semantic consistency.
This completes the output of one target character; repeating Steps 1 to 5 enables the output of multiple target characters.
Further, as shown in Fig. 3, the matrix-form group region blocks comprise 6 group region blocks distributed in a 3 × 2 array; in the character initial interface, each group region block displays 6 characters, and the characters displayed in any two group region blocks are different.
For the audio-visual brain-computer interface spelling method based on spatial and semantic congruence of the present invention, reference may be made to the implementation of the devices in the above audio-visual brain-computer interface spelling system based on spatial and semantic congruence, which is not repeated here.
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments have been described; nevertheless, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. An audio-visual brain-computer interface spelling system based on spatial and semantic congruence, characterized by comprising an audio-visual stimulation device, an EEG signal acquisition device, and a computer;
the audio-visual stimulation device uses an audio-visual experimental paradigm based on spatial and semantic congruence to generate visual and auditory stimulation about a target character for a subject;
the EEG signal acquisition device acquires the subject's EEG signals and sends the acquired EEG signal data to the computer;
the computer performs data preprocessing on the EEG signal data to obtain preprocessed EEG signal data;
the computer performs feature extraction and classification on the preprocessed EEG signal data to obtain classification results, and sends the classification results to an output display unit, the output display unit being used to output characters.
2. The audio-visual brain-computer interface spelling system based on spatial and semantic congruence according to claim 1, characterized in that the audio-visual stimulation device comprises a display module and a voice module, and the process by which the audio-visual stimulation device uses the audio-visual experimental paradigm based on spatial and semantic congruence to generate visual and auditory stimulation about the target character for the subject comprises the following steps:
Step 1: the display module displays a character initial interface, the character initial interface comprising matrix-form group region blocks, each group region block displaying several characters and having a region number;
Step 2: the target character is highlighted;
Step 3: the character initial interface switches to a group region interface; in the group region interface, each group region block flashes and displays its corresponding region number at random with equal probability, each group region block flashes and displays its corresponding region number at least once, and the voice module emits the auditory pronunciation of the corresponding region number whenever a group region block flashes and displays its region number;
Step 4: the group region interface switches to a sub-region interface, the sub-region interface comprising matrix-form subregion blocks; the sub-region interface expands all characters displayed in the target group region block containing the target character onto the subregion blocks, each subregion block displaying one of the characters;
Step 5: each subregion block flashes and displays its corresponding character at random with equal probability, each subregion block flashes and displays its corresponding character at least once, and the voice module emits the auditory pronunciation of the corresponding character whenever a subregion block flashes and displays its character.
3. The audio-visual brain-computer interface spelling system based on spatial and semantic congruence according to claim 2, characterized in that:
the matrix-form group region blocks comprise 6 group region blocks, the matrix-form subregion blocks comprise 6 subregion blocks, and both the 6 group region blocks and the 6 subregion blocks are distributed in a 3 × 2 array;
in the character initial interface, each group region block displays 6 characters, and the characters displayed in any two group region blocks are different.
4. The audio-visual brain-computer interface spelling system based on spatial and semantic congruence according to claim 3, characterized in that the voice module comprises a left-ear pronunciation submodule and a right-ear pronunciation submodule;
when any group region block in the first column of the 3 × 2 array on the group region interface flashes and displays its corresponding region number, the auditory pronunciation of that region number is emitted only by the left-ear pronunciation submodule;
when any group region block in the second column of the 3 × 2 array on the group region interface flashes and displays its corresponding region number, the auditory pronunciation of that region number is emitted only by the right-ear pronunciation submodule;
when any subregion block in the first column of the 3 × 2 array on the sub-region interface flashes and displays its corresponding character, the auditory pronunciation of that character is emitted only by the left-ear pronunciation submodule;
when any subregion block in the second column of the 3 × 2 array on the sub-region interface flashes and displays its corresponding character, the auditory pronunciation of that character is emitted only by the right-ear pronunciation submodule.
5. The audio-visual brain-computer interface spelling system based on spatial and semantic congruence according to claim 4, characterized in that:
in the character initial interface, the 36 characters displayed by the 6 group region blocks consist of 26 English letter characters, 9 numeric characters, and 1 special character.
6. The audio-visual brain-computer interface spelling system based on spatial and semantic congruence according to claim 5, characterized in that:
after the display module has highlighted the target character for 1 second, the display returns to the character initial interface;
after the character initial interface has been continuously displayed for 1 second, it switches to the group region interface;
after the group region interface has been continuously displayed for 1 second, each group region block begins to flash and display its corresponding region number at random with equal probability, and after the flashing display the display returns to the group region interface;
after the group region interface has been continuously displayed for 1 second, it switches to the sub-region interface;
after the sub-region interface has been continuously displayed for 1 second, each subregion block begins to flash and display its corresponding character at random with equal probability, and after the flashing display the display returns to the sub-region interface.
7. The audio-visual brain-computer interface spelling system based on spatial and semantic congruence according to any one of claims 1 to 6, characterized in that:
the data preprocessing comprises successively performing ocular-artifact removal, segmentation, artifact rejection, and filtering on the EEG signal data.
8. An audio-visual brain-computer interface spelling method based on spatial and semantic congruence, characterized by comprising the following steps:
using an audio-visual experimental paradigm based on spatial and semantic congruence to generate visual and auditory stimulation about a target character for a subject;
acquiring the subject's EEG signals to obtain EEG signal data;
performing data preprocessing on the EEG signal data to obtain preprocessed EEG signal data;
performing feature extraction and classification on the preprocessed EEG signal data to obtain classification results, and sending the classification results to an output display unit, the output display unit being used to output characters.
9. The audio-visual brain-computer interface spelling method based on spatial and semantic congruence according to claim 8, characterized in that the process of using the audio-visual experimental paradigm based on spatial and semantic congruence to generate visual and auditory stimulation for the target character the subject wishes to spell comprises the following steps:
Step 1: displaying a character initial interface, the character initial interface comprising matrix-form group region blocks, each group region block displaying several characters and having a region number;
Step 2: highlighting the target character;
Step 3: switching the character initial interface to a group region interface; in the group region interface, each group region block flashes and displays its corresponding region number at random with equal probability, each group region block flashes and displays its corresponding region number at least once, and a voice module emits the auditory pronunciation of the corresponding region number whenever a group region block flashes and displays its region number;
Step 4: switching the group region interface to a sub-region interface, the sub-region interface comprising matrix-form subregion blocks; the sub-region interface expands all characters displayed in the target group region block containing the target character onto the subregion blocks, each subregion block displaying one of the characters;
Step 5: each subregion block flashes and displays its corresponding character at random with equal probability, each subregion block flashes and displays its corresponding character at least once, and the voice module emits the auditory pronunciation of the corresponding character whenever a subregion block flashes and displays its character.
10. The audio-visual brain-computer interface spelling method based on spatial and semantic congruence according to claim 9, characterized in that:
the matrix-form group region blocks comprise 6 group region blocks, the matrix-form subregion blocks comprise 6 subregion blocks, and both the 6 group region blocks and the 6 subregion blocks are distributed in a 3 × 2 array;
in the character initial interface, each group region block displays 6 characters, and the characters displayed in any two group region blocks are different.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910455137.9A CN110347242A (en) | 2019-05-29 | 2019-05-29 | Audio visual brain-computer interface spelling system and its method based on space and semantic congruence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110347242A true CN110347242A (en) | 2019-10-18 |
Family
ID=68174358
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910455137.9A Pending CN110347242A (en) | 2019-05-29 | 2019-05-29 | Audio visual brain-computer interface spelling system and its method based on space and semantic congruence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110347242A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111012342A (en) * | 2019-11-01 | 2020-04-17 | 天津大学 | Audio-visual dual-channel competition mechanism brain-computer interface method based on P300 |
CN111338482A (en) * | 2020-03-04 | 2020-06-26 | 太原理工大学 | Brain-controlled character spelling recognition method and system based on supervised self-encoding |
CN113576496A (en) * | 2021-07-08 | 2021-11-02 | 华南理工大学 | Vision tracking brain-computer interface detection system |
CN113608612A (en) * | 2021-07-23 | 2021-11-05 | 西安交通大学 | Visual-auditory combined mixed brain-computer interface method |
CN114167989A (en) * | 2021-12-09 | 2022-03-11 | 太原理工大学 | Brain-controlled spelling method and system based on visual and auditory inducement and stable decoding |
CN114756120A (en) * | 2022-03-18 | 2022-07-15 | 华南理工大学 | Multifunctional character input system based on mixed brain-computer interface |
CN116584957A (en) * | 2023-06-14 | 2023-08-15 | 中国医学科学院生物医学工程研究所 | Data processing method, device, equipment and storage medium of hybrid brain-computer interface |
WO2023240951A1 (en) * | 2022-06-13 | 2023-12-21 | 深圳先进技术研究院 | Training method, training apparatus, training device, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102184019A (en) * | 2011-05-16 | 2011-09-14 | 天津大学 | Method for audio-visual combined stimulation of brain-computer interface based on covert attention |
CN105266805A (en) * | 2015-10-23 | 2016-01-27 | 华南理工大学 | Visuoauditory brain-computer interface-based consciousness state detecting method |
CN106569604A (en) * | 2016-11-04 | 2017-04-19 | 天津大学 | Audiovisual dual-mode semantic matching and semantic mismatch co-stimulus brain-computer interface paradigm |
CN109521870A (en) * | 2018-10-15 | 2019-03-26 | 天津大学 | A kind of brain-computer interface method that the audio visual based on RSVP normal form combines |
- 2019-05-29: application CN201910455137.9A filed in China; legal status: active, pending
Non-Patent Citations (2)
Title |
---|
AN XINGWEI: "Research on Key Issues of an Audio-Visual Dual-Channel Brain-Controlled Character Input System under Covert Attention", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
CHEN ZHIQIANG: "Research on a P300 Brain-Computer Interface Based on the OpenViBE Platform", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111012342A (en) * | 2019-11-01 | 2020-04-17 | Tianjin University | Audio-visual dual-channel competition mechanism brain-computer interface method based on P300 |
CN111338482A (en) * | 2020-03-04 | 2020-06-26 | Taiyuan University of Technology | Brain-controlled character spelling recognition method and system based on supervised autoencoding |
CN113576496A (en) * | 2021-07-08 | 2021-11-02 | South China University of Technology | Visual-tracking brain-computer interface detection system |
CN113576496B (en) * | 2021-07-08 | 2022-05-20 | South China University of Technology | Visual-tracking brain-computer interface detection system |
CN113608612A (en) * | 2021-07-23 | 2021-11-05 | Xi'an Jiaotong University | Hybrid brain-computer interface method combining visual and auditory stimuli |
CN113608612B (en) * | 2021-07-23 | 2024-05-28 | Xi'an Jiaotong University | Hybrid brain-computer interface method combining visual and auditory stimuli |
CN114167989A (en) * | 2021-12-09 | 2022-03-11 | Taiyuan University of Technology | Brain-controlled spelling method and system based on visual and auditory evocation and stable decoding |
CN114167989B (en) * | 2021-12-09 | 2023-04-07 | Taiyuan University of Technology | Brain-controlled spelling method and system based on visual and auditory evocation and stable decoding |
CN114756120A (en) * | 2022-03-18 | 2022-07-15 | South China University of Technology | Multifunctional character input system based on a hybrid brain-computer interface |
WO2023240951A1 (en) * | 2022-06-13 | 2023-12-21 | Shenzhen Institute of Advanced Technology | Training method, training apparatus, training device, and storage medium |
CN116584957A (en) * | 2023-06-14 | 2023-08-15 | Institute of Biomedical Engineering, Chinese Academy of Medical Sciences | Data processing method, device, equipment and storage medium for a hybrid brain-computer interface |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110347242A (en) | Audio visual brain-computer interface spelling system and its method based on space and semantic congruence | |
Jung et al. | Extended ICA removes artifacts from electroencephalographic recordings | |
CN106569604B (en) | Audiovisual bimodal semantic-matching and semantic-mismatch co-stimulation brain-computer interface method |
CN102793540B (en) | Method for optimizing audio-visual cognitive event-related potential experimental paradigm | |
Yong et al. | Sparse spatial filter optimization for EEG channel reduction in brain-computer interface | |
CN109521870A (en) | Brain-computer interface method combining audiovisual stimuli based on the RSVP paradigm |
Groen et al. | The time course of natural scene perception with reduced attention | |
CN103699216B (en) | E-mail communication system and method based on a hybrid brain-computer interface of motor imagery and visual attention |
CN106933353A (en) | Two-dimensional cursor motion control system and method based on motor imagery and code-modulated VEP |
CN108294748A (en) | EEG signal acquisition and classification method based on steady-state visual evocation |
CN110262658B (en) | Brain-computer interface character input system based on enhanced attention and implementation method | |
CN104571504B (en) | Online brain-computer interface method based on motor imagery |
CN109247917A (en) | Spatial-hearing-evoked P300 EEG signal identification method and device |
CN111930238A (en) | Brain-computer interface system implementation method and device based on a dynamic SSVEP paradigm |
CN106484106A (en) | Non-attention event-related potential brain-computer interface method with automatic visual acuity identification |
CN112617863A (en) | Hybrid online brain-computer interface method for identifying the laterality of left and right foot movement intention |
CN109567936B (en) | Brain-computer interface system based on auditory attention and multi-focus electrophysiology and implementation method | |
CN110688013A (en) | English keyboard spelling system and method based on SSVEP | |
CN107822628B (en) | Epileptic brain focus area automatic positioning device and system | |
Lin et al. | Development of a high-speed mental spelling system combining eye tracking and SSVEP-based BCI with high scalability | |
CN112783314B (en) | Brain-computer interface stimulation paradigm generating and detecting method, system, medium and terminal based on SSVEP | |
CN114415833B (en) | Electroencephalogram asynchronous control software design method based on time-space frequency conversion SSVEP | |
Petruk et al. | Stimulus rivalry and binocular rivalry share a common neural substrate | |
Stawicki et al. | Investigating flicker-free steady-state motion stimuli for VEP-based BCIs |
CN113552941A (en) | Multi-sensory-mode BCI-VR control method and system, and VR equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20191018 |