CN114638253B - Identification system and method based on emotion electroencephalogram feature fusion optimization mechanism - Google Patents
Info
- Publication number
- CN114638253B (application CN202210142362.9A / CN202210142362A)
- Authority
- CN
- China
- Prior art keywords
- emotion
- attention mechanism
- algorithm
- brain
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
- G06F2218/04—Denoising
- G06F2218/06—Denoising by applying a scale-space analysis, e.g. using wavelet analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention discloses an identification system and method based on an emotion electroencephalogram (EEG) feature fusion optimization mechanism, in the technical field of digital signal processing and EEG signals, comprising the following steps: acquiring emotion EEG signals from a plurality of channels; preprocessing the emotion EEG signals; extracting feature values from the preprocessed emotion EEG signals; analyzing the feature-value patterns with a particle swarm optimization algorithm to obtain, for each user, the emotion EEG module features with the highest activation degree; building an attention-based RNN model and iterating with the fitness function of the particle swarm optimization algorithm; and building a twin neural network algorithm model and screening the EEG frequency-band features with its loss function. Unlike traditional EEG recognition, which relies on frequency-domain features of single-band EEG signals collected from a few electrode channels and therefore suffers from limited data features and poor recognition accuracy, the invention improves the generalization capability of the model and has clear application value.
Description
Technical Field
The invention relates to an identification system and method based on an emotion electroencephalogram feature fusion optimization mechanism, and belongs to the technical field of digital signal processing and electroencephalogram signals.
Background
As informatization and economic globalization accelerate, the Internet, cloud services and digitization permeate every aspect of daily life, and social and economic development brings with it numerous security risks. Current biometric recognition technologies mainly include face, iris, DNA and fingerprint recognition, but these modalities can be tampered with, copied or obtained under coercion, so they cannot fully meet people's security needs. The electroencephalogram (EEG) signal is a biological characteristic unique to each individual; it is hard to forge, difficult to damage and cannot be imitated, which minimizes the risk in identity recognition, and it has therefore attracted increasing attention from researchers.
Therefore, identity recognition and authentication based on an individual's EEG signals has become an important research direction. EEG-based identification and authentication research can be divided into four categories: resting state, visual induction, motor imagery and event-related potentials. However, these four kinds of EEG signals are mostly acquired for a specific background, a specific subject and a specific time during identity acquisition and authentication, and most studies require stimulation tasks that induce particular brain response patterns, so they cannot cope with complex and changing real-world application scenarios and the robustness of the models is severely challenged. Meanwhile, the traditional data features for EEG-based identification are frequency-domain features of single-band EEG signals collected from a few electrode channels, so the data features are limited and the recognition accuracy is poor. To address these problems, the invention explores and improves an identification method based on an emotion EEG module optimization mechanism, so that the features of individual identification under different emotion states can be better understood, the generalization capability of the model is improved, and the recognition technique can support future research.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an identification system and method based on an emotion EEG feature fusion optimization mechanism. Unlike traditional EEG-based recognition, which relies on frequency-domain features of single-band EEG signals collected from a few electrode channels and therefore suffers from limited data features and poor recognition accuracy, the invention improves the generalization capability of the model and has clear application value.
In order to achieve the above purpose, the invention is realized by adopting the following technical scheme:
In a first aspect, the invention provides an identification method based on an emotion electroencephalogram (EEG) feature fusion optimization mechanism, which comprises the following steps:
acquiring emotion EEG signals from a plurality of channels;
preprocessing the emotion EEG signals;
extracting feature values from the preprocessed emotion EEG signals;
analyzing the feature-value patterns with a particle swarm optimization algorithm to obtain, for each user, the emotion EEG module features with the highest activation degree;
building an attention-based RNN model and iterating with the fitness function of the particle swarm optimization algorithm; building a twin neural network algorithm model and screening the EEG frequency-band features with its loss function;
fusing and segmenting the obtained EEG frequency-band features and the emotion EEG module features, and then dividing them into a training set and a test set;
inputting the training set into the attention-based RNN model and the twin neural network algorithm model respectively for training;
inputting the test set into the trained attention-based RNN model and twin neural network algorithm model for prediction to obtain prediction results;
and discriminating the identities of different users based on the prediction results.
Further, the preprocessing comprises filtering, downsampling, power-frequency interference removal, artifact removal and re-referencing.
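As an illustrative, non-limiting sketch of such a preprocessing chain, the following Python fragment applies band-pass filtering, a 50 Hz notch for power-frequency interference, downsampling and average re-referencing to a multi-channel EEG array. The cut-off frequencies, the notch frequency, the target sampling rate and the omission of an explicit artifact-removal step (e.g. ICA) are assumptions made only for illustration and are not fixed by the invention.

```python
import numpy as np
from scipy import signal

def preprocess_eeg(eeg, fs=512, band=(0.5, 45.0), notch=50.0, fs_new=128):
    """Preprocess an EEG array of shape (n_channels, n_samples) sampled at fs Hz."""
    # Band-pass filtering (4th-order Butterworth, zero-phase)
    b, a = signal.butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    eeg = signal.filtfilt(b, a, eeg, axis=-1)
    # Notch filtering to remove power-frequency interference
    bn, an = signal.iirnotch(notch, Q=30.0, fs=fs)
    eeg = signal.filtfilt(bn, an, eeg, axis=-1)
    # Downsampling (e.g. 512 Hz -> 128 Hz)
    eeg = signal.decimate(eeg, fs // fs_new, axis=-1, zero_phase=True)
    # Re-referencing to the common average of all channels
    return eeg - eeg.mean(axis=0, keepdims=True)

# Example: 19 channels, 40 s of data at 512 Hz (random stand-in data)
x_clean = preprocess_eeg(np.random.randn(19, 40 * 512))
```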
Further, extracting feature values from the preprocessed emotion EEG signals includes: performing a four-level wavelet packet decomposition of the preprocessed emotion EEG signals and extracting the energy entropy of the five EEG rhythms (α, β, δ, θ and γ waves) as feature values. The wavelet packet decomposition formula is

f(t) = \sum_{j=0}^{2^i - 1} f_{i,j}(t_j)

where f(t) is the initial signal and f_{i,j}(t_j) is the reconstructed signal of node (i, j) on the i-th layer, j = 0, 1, 2, …, 2^i - 1. The energy spectrum obtained from the wavelet packet decomposition of the initial signal is expressed as

E_{i,j}(t_j) = \sum_{k=1}^{m} |x_{j,k}|^2

where E_{i,j}(t_j) is the band energy of node (i, j) on the i-th layer, x_{j,k} is the amplitude at the k-th discrete point of the reconstructed signal f_{i,j}(t_j), and m is the number of sampling points of the signal. The energy entropy over the signal frequency bands, denoted W, is then obtained; the energy entropy of the j-th node is given by

P_j = \frac{E_j}{E}, \qquad W = -\sum_j P_j \ln P_j

where P_j is the normalized energy of the j-th node, E_j is the band energy of the j-th node, and E is the total energy of the wavelet packet decomposition.
Further, the velocity update formula in the particle swarm optimization algorithm is

v_{ij}(k+1) = w(k) v_{ij}(k) + c_1 rand(0, a_1) (p_{ij}(k) - x_{ij}(k)) + c_2 rand(0, a_2) (g_j(k) - x_{ij}(k))

where i = 1, 2, …, N, j = 1, 2, 3, 4, k is the number of algorithm iterations, v_{ij}(k) is the velocity vector at the k-th iteration, w(k) is a non-negative inertia weight factor, c_1 and c_2 are non-negative acceleration constants, rand(0, a_1) and rand(0, a_2) are uniformly distributed random numbers, p_{ij}(k) is the best solution position found so far by the i-th particle, x_{ij}(k) is the current position of the particle in the search space, g_j(k) is the position of the best solution found so far in the whole search space, and a_1, a_2 are control parameters.
The position update formula is

x_{ij}(k+1) = x_{ij}(k) + v_{ij}(k+1)

The fitness function of a particle is defined in terms of the frequency bandwidth F and the recognition error rate r/R of the attention-based RNN classification model, where r is the number of test-set samples that the attention-based RNN classification model recognizes incorrectly and R is the total number of test-set samples.
Further, the output vector sequence of the attention mechanism in the attention-based RNN model is expressed as

c_i = \sum_j a_{ij} h_j

where c_i is the output vector sequence, h_j is the attention-mechanism input vector and a_{ij} is the attention-mechanism weight, computed as

a_{ij} = \frac{\exp(e_{ij})}{\sum_k \exp(e_{ik})}, \qquad e_{ij} = fc(s_{i-1}, h_j)

where e_{ij} is the network output layer, exp(e_{ij}) is the exponential of e_{ij} to the base of the natural constant e, s_{i-1} is the input vector of the attention mechanism, and fc(s_{i-1}, h_j) is an additional fully connected shallow network.
Further, the loss function in the twin neural network algorithm model is

L(W, (Y, X_1, X_2)) = \frac{1}{2N} \sum_{n=1}^{N} [ Y D_W^2 + (1 - Y) \max(m - D_W, 0)^2 ]

where L(W, (Y, X_1, X_2)) is the loss function, N is the number of samples, D_W is the distance between the two output vectors, Y indicates whether the two samples are similar (Y = 1 means the two samples are similar or matched, Y = 0 means no match), and m is a set threshold.
Further, acquiring emotion EEG signals from a plurality of channels includes: using the international 10-20 electrode placement standard, recording the leads FP1, FP2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, FZ, CZ and PZ with a binaural vertical reference montage, with M1 and M2 selected as reference electrodes and a sampling frequency of 512 Hz, and acquiring the EEG signals with a Neuroscan 64 device followed by amplification and analog-to-digital conversion.
In a second aspect, the invention provides an identification system based on an emotion EEG feature fusion optimization mechanism, comprising:
a signal acquisition module, used for acquiring emotion EEG signals from a plurality of channels;
a preprocessing module, used for preprocessing the emotion EEG signals;
a feature recognition module, used for extracting feature values from the preprocessed emotion EEG signals;
an analysis module, used for analyzing the feature-value patterns with a particle swarm optimization algorithm to obtain, for each user, the emotion EEG module features with the highest activation degree;
a screening module, used for building an attention-based RNN model, iterating with the fitness function of the particle swarm optimization algorithm, building a twin neural network algorithm model, and screening the EEG frequency-band features with its loss function;
a dividing module, used for fusing and segmenting the obtained EEG frequency-band features and the emotion EEG module features and then dividing them into a training set and a test set;
a training module, used for inputting the training set into the attention-based RNN model and the twin neural network algorithm model respectively for training;
a prediction module, used for inputting the test set into the trained attention-based RNN model and twin neural network algorithm model for prediction to obtain prediction results;
and a judging module, used for discriminating the identities of different users based on the prediction results.
In a third aspect, the invention provides an identification device based on an emotion electroencephalogram feature fusion optimization mechanism, which comprises a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate according to the instructions to perform the steps of any of the methods described above.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above.
Compared with the prior art, the invention has the following beneficial effects:
The invention discloses an identification research method based on an emotion EEG feature fusion optimization mechanism. Wavelet packet transformation is used to extract the α, β, δ, θ and γ waves of the EEG signal, and their energy entropies are input as feature vectors into a PSO-attention-RNN optimized recognition model. Particle swarm optimization (PSO) iteratively screens out, for each emotion module, the EEG time-frequency band with the highest identification rate; the twin neural network directly measures how changes of the EEG frequency bands affect the recognition result, and its contrastive loss function screens out the frequency bands of the input signal that most strongly influence recognition; finally the two kinds of features are fused. Experimental results on emotion sample data collected in the laboratory show that, compared with existing studies that perform identification based on specific tasks or fixed time-frequency bands, the fusion of PSO-attention-RNN and the twin neural network effectively improves classification accuracy, overcomes the drawbacks of task-specific EEG acquisition and fixed-band recognition, and provides a new way to push EEG-based identification toward practical application. It also resolves the problem in previous research that classification was performed only on a single frequency band of specific channels, which led to unsatisfactory recognition results. In terms of recognition rate, the method outperforms traditional approaches that classify directly after feature extraction and can effectively improve the accuracy of identity recognition.
Drawings
FIG. 1 is a flow chart of the identification method based on an emotion EEG feature fusion optimization mechanism provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of the lead placement for the laboratory-collected 19-channel EEG signals according to an embodiment of the invention;
FIG. 3 illustrates the emotion EEG induction experiment according to an embodiment of the invention;
FIG. 4 is a decomposition diagram of the four-level wavelet packet transform tree using db4 wavelets according to embodiment one of the invention;
FIG. 5 is a basic architecture diagram of the attention-based RNN codec for identity recognition according to embodiment one of the invention;
FIG. 6 is a structural diagram of the twin neural network according to embodiment one of the invention;
FIG. 7 is a schematic diagram of searching for the optimal EEG time-frequency band with the fusion model according to embodiment one of the invention;
FIG. 8 is a graph of the fitness curves of different users after iterative screening under the three emotion modules according to embodiment one of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Embodiment one:
FIG. 1 is a flow chart of the method of the invention, which mainly comprises the following steps:
Step one): emotion EEG signals of 19 channels are collected from the subject in the laboratory;
Step two): the EEG signals acquired in step one) (or any data set that requires preprocessing) are preprocessed to reduce interference from EMG, EOG and power-frequency signals, reduce the influence of artifacts from other functional brain areas, and improve signal quality. The preprocessing mainly comprises filtering, downsampling, power-frequency interference removal, artifact removal and re-referencing;
Step three): the preprocessed EEG signals are decomposed into 4 levels with db4 wavelets, and the energy entropy of the five EEG rhythms (α, β, δ, θ and γ waves) is extracted as the feature value;
Step four): the emotion EEG feature patterns from step three) are analyzed with a particle swarm optimization algorithm to search, for each user, the emotion EEG module features with the highest activation degree;
Step five): an attention-based RNN model is built and iterated with the fitness function of the particle swarm optimization algorithm to optimize the final recognition accuracy;
Step six): in parallel with step five), a twin neural network algorithm model is built, and its loss function is used to screen the EEG frequency-band features;
Step seven): the optimal EEG time-frequency features extracted in step four) and step six) are fused and segmented, and then divided into a training set and a test set (a code sketch of this fusion-and-split step is given after this step list);
Step eight): the fused feature vectors from step seven) are input into the built attention-based RNN model and the twin neural network algorithm model respectively for training;
Step nine): the test set divided in step seven) is input into the trained models for prediction;
Step ten): the prediction results are obtained and the identities of different users are discriminated according to these results.
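As a purely illustrative sketch of step seven), the Python fragment below fuses the frequency-band features selected by the twin network with the PSO-selected emotion-module features and divides them into training and test sets. The random stand-in arrays, the concatenation-based fusion, the 80/20 split ratio and the use of scikit-learn's train_test_split are assumptions made for illustration only and are not fixed by the invention.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in feature arrays (in practice these come from steps four) and six)):
# band_feats   - EEG frequency-band features screened by the twin network
# module_feats - emotion EEG module features found by the PSO analysis
rng = np.random.default_rng(0)
n_segments = 200
band_feats = rng.standard_normal((n_segments, 40))
module_feats = rng.standard_normal((n_segments, 95))
labels = rng.integers(0, 10, size=n_segments)        # hypothetical user identities

# Feature-level fusion by concatenation, then division into training and test sets
fused = np.concatenate([band_feats, module_feats], axis=1)
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.2, stratify=labels, random_state=0)
```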
The above steps are mainly run on a computer using the Python language; the detailed descriptions of the steps are as follows:
In step one), emotion EEG signals of 19 channels are collected from the subject in the laboratory. The scalp electrodes of each brain area are placed as follows: using the international 10-20 electrode placement standard, the leads FP1, FP2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, FZ, CZ and PZ are recorded with a binaural vertical reference montage, with M1 and M2 as reference electrodes, a sampling frequency of 512 Hz and a per-channel lead impedance of less than 5 kΩ. FIG. 2 is a schematic diagram of the lead placement for the laboratory-collected 19-channel EEG signals. The EEG signals are acquired with a Neuroscan 64 device, amplified, converted from analog to digital and fed into a computer.
Fig. 3 shows the experimental paradigm designed in the present invention. Audio-visual stimuli are more engaging than single images or audio during emotion EEG induction and increase the probability of successfully inducing the target emotion. For the selection of emotion-inducing videos, the strong influence of native culture on emotion-induction experiments was taken into account, and film and television clips in the subjects' native language with relevant cultural backgrounds and clear emotional content were finally chosen as stimulus material. Before the acquisition, all subjects were informed of the experimental objectives and procedures. During the experiment, subjects were asked to sit at the desk, watch the video clips attentively and remain as still as possible during playback so as not to introduce strong EMG and other interference.
The specific experimental procedure includes three major parts: in the beginning stage, the word "preparation" appears on the video playback screen to prompt the subject that the experiment is about to start; the computer then randomly plays a 40 s movie clip, which the subject must watch carefully; finally, the word "end" appears on the screen, at which point the subject rests as needed and fills in the 9-point SAM score table to self-evaluate the current emotional state, with no time limit on the rest until the subject is ready for the next single trial. In a complete experimental session each subject experiences three emotion states, each corresponding to 3 video clips (each with an effective duration of 40 s), so each subject contributes 3 × 3 = 9 independent single trials of emotion EEG data. The emotional content of the selected clips is rated with the Self-Assessment Manikin (SAM), which provides 9-point valence and arousal scales; in addition, since familiarity with a film can affect the arousal of emotion, familiarity was added as a further evaluation index in the EEG data acquisition, also rated on 9 levels like valence and arousal, and every subject rated each clip with SAM. The valence-arousal-familiarity results for the positive, neutral and negative clips are (3.07, 5.80, 5.32), (4.90, 1.30, 1.14) and (8.20, 6.24, 5.70), respectively.
In step three), a four-level db4 wavelet packet decomposition is used to extract the energy entropy of the different rhythms. Fig. 4 shows the wavelet packet decomposition tree, where A denotes the low-frequency part, D denotes the high-frequency part and the trailing index denotes the decomposition level. The 19-channel EEG signals acquired in the experiment are decomposed into 4 levels with db4 wavelets, and the energy entropy of the five EEG rhythms (α, β, δ, θ and γ waves) is extracted as follows.
Wavelet packet analysis divides the time-frequency plane more finely and provides a finer decomposition of the high-frequency part of the signal, without redundancy or omission, so it reflects the essential characteristics of the signal better and can adaptively select the optimal wavelet basis according to the characteristics of the signal. This time-frequency analysis method not only resolves the conflict between frequency resolution and time resolution in the Fourier transform, but also overcomes the drawbacks of the wavelet transform, namely poor frequency resolution in the high-frequency band and poor time resolution in the low-frequency band. The wavelet packet decomposition can be expressed as

f(t) = \sum_{j=0}^{2^i - 1} f_{i,j}(t_j)

where f_{i,j}(t_j) is the reconstructed signal of node (i, j) on the i-th layer, j = 0, 1, 2, …, 2^i - 1, and f(t) is the initial signal. The energy spectrum obtained from the wavelet packet decomposition of the initial signal f(t) can be expressed as

E_{i,j}(t_j) = \sum_{k=1}^{m} |x_{j,k}|^2

where E_{i,j}(t_j) is the band energy of node (i, j) on the i-th layer, x_{j,k} is the amplitude at the k-th discrete point of the reconstructed signal f_{i,j}(t_j), and m is the number of sampling points of the signal. The energy entropy over the signal frequency bands, denoted W, is then obtained; the energy entropy of the j-th node is given by

P_j = \frac{E_j}{E}, \qquad W = -\sum_j P_j \ln P_j

where P_j is the normalized energy of the j-th node, E_j is the band energy of the j-th node, and E is the total energy of the wavelet packet decomposition.
The wavelet packet decomposition decomposes the original signal downward along the wavelet packet tree. In the invention, a 4-level db4 decomposition is applied to the experimentally acquired EEG signals, and the energy entropy of the five rhythms (α, β, δ, θ and γ waves) is extracted as the feature value.
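The following Python sketch (using the PyWavelets library) illustrates one possible reading of this rhythm-wise wavelet packet energy entropy for a single channel: the leaf-node energies are normalized over the whole decomposition and an entropy-like value is accumulated over the leaves belonging to each rhythm. The rhythm frequency edges, the 128 Hz sampling rate and the mapping of wavelet packet leaves to rhythms are illustrative assumptions and are not prescribed by the invention.

```python
import numpy as np
import pywt

# Nominal rhythm boundaries in Hz (delta, theta, alpha, beta, gamma) -- assumptions
RHYTHMS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
           "beta": (13, 30), "gamma": (30, 45)}

def rhythm_energy_entropy(x, fs=128.0, wavelet="db4", level=4):
    """Energy entropy of the five EEG rhythms from a 4-level db4
    wavelet packet decomposition of one channel x."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")        # 2**level leaves, low -> high freq
    band_width = (fs / 2) / len(nodes)               # width of each leaf band in Hz
    energies = np.array([np.sum(n.data ** 2) for n in nodes])   # E_j per leaf
    p = energies / energies.sum()                    # P_j = E_j / E
    feats = {}
    for name, (lo, hi) in RHYTHMS.items():
        # leaves whose center frequency falls inside this rhythm band
        idx = [j for j in range(len(nodes)) if lo <= (j + 0.5) * band_width < hi]
        pj = p[idx]
        feats[name] = -np.sum(pj * np.log(pj + 1e-12))   # W = -sum P_j ln P_j
    return feats

x = np.random.randn(40 * 128)            # one 40 s channel at 128 Hz (stand-in data)
print(rhythm_energy_entropy(x))
```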
The particle swarm optimization (PSO) algorithm in step four) is a swarm intelligence optimization algorithm derived from studies of the predation behavior of bird flocks. Its basic idea is to find the optimal solution through cooperation and information sharing among the individuals of the swarm; it is simple, easy to implement and has few parameters to tune. The PSO algorithm initializes a group of particles in the particle space, each particle representing a potential optimal solution of the extremum optimization problem. A particle is characterized by three indices: position, velocity and fitness value. The particles move in the search space and update their positions by tracking the individual extremum P_best and the swarm extremum G_best, where P_best is the best position an individual particle has visited (as evaluated by the fitness value) and G_best is the best position found by all particles of the swarm. In every iteration each particle updates its position by tracking these two "extrema" (P_best, G_best). When the PSO algorithm is used to search for the EEG time-frequency band that gives the highest recognition accuracy, the problem is defined as follows: assume the particle population contains N particles and each particle has dimension 4. In the 4-dimensional continuous search space, the position vector x_i(k) of the i-th particle at the k-th iteration represents the particle's current position in the search space, the velocity vector v_i(k) represents its search direction, p_i(k) represents the best solution position found so far by the i-th particle, and g(k) represents the position of the best solution found so far in the whole search space. The four position components are defined as the starting frequency f, the frequency bandwidth F, the starting time t and the time width T, respectively. During the whole search process, the velocity update formula is

v_{ij}(k+1) = w(k) v_{ij}(k) + c_1 rand(0, a_1) (p_{ij}(k) - x_{ij}(k)) + c_2 rand(0, a_2) (g_j(k) - x_{ij}(k))

and the position update formula is

x_{ij}(k+1) = x_{ij}(k) + v_{ij}(k+1)

where i = 1, 2, …, N, j = 1, 2, 3, 4, k is the number of algorithm iterations, w(k) is a non-negative inertia weight factor, c_1 and c_2 are non-negative acceleration constants, rand(0, a_1) and rand(0, a_2) are uniformly distributed random numbers, and a_1, a_2 are control parameters. The fitness function of a particle is defined in terms of the frequency bandwidth F and the recognition error rate r/R of the attention-based RNN classification model, where r is the number of test-set samples recognized incorrectly and R is the total number of test-set samples. Through the continuous iterative search of the PSO algorithm, the starting frequency f, frequency bandwidth F, starting time t and time width T of the best-performing EEG band are finally obtained, which effectively improves the efficiency and accuracy of identity recognition.
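A minimal numpy sketch of the PSO search over the four-dimensional vector (starting frequency f, bandwidth F, starting time t, time width T) is given below. The bounds, inertia weight, acceleration constants and the toy fitness function are illustrative assumptions; in the invention the fitness would instead be computed from the recognition error of the attention-based RNN on the test set.

```python
import numpy as np

def pso_search(fitness, n_particles=20, n_iters=50,
               lb=np.array([0.5, 1.0, 0.0, 1.0]),    # assumed lower bounds of (f, F, t, T)
               ub=np.array([40.0, 20.0, 35.0, 10.0]),  # assumed upper bounds
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO minimizing `fitness` over the 4-dim band vector (f, F, t, T)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, size=(n_particles, 4))          # positions
    v = np.zeros_like(x)                                     # velocities
    pbest, pbest_fit = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, 4)), rng.random((n_particles, 4))
        # velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                           # position update, kept in bounds
        fit = np.array([fitness(p) for p in x])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest, pbest_fit.min()

# Toy fitness stand-in for the weighted test-set error rate of the attention-based RNN
best_band, best_fit = pso_search(lambda p: np.sum((p - np.array([10, 4, 5, 3])) ** 2))
```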
The attention-based RNN recognition model in step five) is as follows:
The Recurrent Neural Network (RNN) is a deep learning model whose structure extends the basic neural network by a time dimension. It performs very well in sequence modeling and is widely used for speech, text, images and video. However, the conventional RNN converges slowly and easily falls into local optima, and in an encoder-decoder structure the encoder compresses the entire input sequence into a single feature c which the decoder then decodes; this feature c must contain all the information of the original sequence, and its length limits the performance of the model. The attention mechanism solves this problem.
In the invention, the attention mechanism imitates human observation and thinking by focusing attention on the key information; concretely, at each time step the encoder encodes the input EEG features so that the EEG features c of different frequency bands of different subjects are obtained.
The attention mechanism sits between the encoder and decoder structures. Its input consists of the output vectors h_i of the encoder and the states s_i, i = 0, 1, …, 7, of the decoder, and fc denotes an additional fully connected shallow network. The feature vectors x_i, i = 1, 2, …, 8, of the primary EEG frequency bands optimized by the PSO algorithm are input into the encoder; the encoder computes the vectors h_i, i = 1, 2, …, 8, and passes them to the attention mechanism. The decoder starts from an initial state vector s_0, so the first group of attention inputs is (s_0, h_1), (s_0, h_2), (s_0, h_3), (s_0, h_4), (s_0, h_5), (s_0, h_6), (s_0, h_7), (s_0, h_8), and the output vector sequence of the attention mechanism, i.e. the context vectors c_i, i = 1, 2, …, 8, can be expressed as

c_i = \sum_j a_{ij} h_j

The model learns the attention weights with an attention fully connected network and a softmax function; the attention weight a_{ij} is computed as

a_{ij} = \frac{\exp(e_{ij})}{\sum_k \exp(e_{ik})}, \qquad e_{ij} = fc(s_{i-1}, h_j)

where e_{ij} is the network output layer, exp(e_{ij}) is the exponential of e_{ij} to the base of the natural constant e, s_{i-1} is the input vector of the attention mechanism, and fc(s_{i-1}, h_j) is the additional fully connected shallow network.
After the first group of attention weights a_{11}, a_{12}, a_{13}, a_{14}, a_{15}, a_{16}, a_{17}, a_{18} is computed, the first context vector c_1 is obtained; the decoder uses (s_0, c_1) to compute the first output y_1 of the model. Through this encoding and decoding process the decoder decodes the EEG feature vectors into the identity information of the user, and the identity information of other users is computed in the same way.
The attention mechanism in the attention-based RNN codec model is located between the encoder and the decoder and is built from the encoder output vectors h_i and the decoder states s_i. Fig. 5 shows the basic architecture of the attention-based RNN codec for identity recognition, where fc denotes the additional fully connected shallow network. The encoder encodes the first EEG frequency bands x_i of different users, computes the vectors h_i and passes them to the attention mechanism; the decoder starts from the initial state vector s_0 and successively obtains the attention input sequence and the context vectors c_i of the output vector sequence, and the attention weights are learned mainly with the attention fully connected network and the softmax function.
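As an illustrative sketch of this attention block, the PyTorch module below implements e_ij = fc(s_{i-1}, h_j), the softmax weights a_ij and the context vector c_i. The layer sizes, the tanh hidden layer inside fc and the batch dimensions are assumptions made for illustration and are not details fixed by the invention.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Attention between encoder and decoder:
    e_ij = fc(s_{i-1}, h_j), a_ij = softmax_j(e_ij), c_i = sum_j a_ij * h_j."""
    def __init__(self, enc_dim, dec_dim, attn_dim=64):
        super().__init__()
        # the "additional fully connected shallow network" fc
        self.fc = nn.Sequential(
            nn.Linear(enc_dim + dec_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1))

    def forward(self, s_prev, h):
        # s_prev: (batch, dec_dim) decoder state s_{i-1}; h: (batch, T, enc_dim)
        s_rep = s_prev.unsqueeze(1).expand(-1, h.size(1), -1)
        e = self.fc(torch.cat([s_rep, h], dim=-1)).squeeze(-1)   # scores e_ij, (batch, T)
        a = torch.softmax(e, dim=-1)                             # attention weights a_ij
        c = torch.bmm(a.unsqueeze(1), h).squeeze(1)              # context vector c_i
        return c, a

# Example: 8 band feature vectors h_1..h_8 of dimension 32, decoder state of dimension 64
attn = AdditiveAttention(enc_dim=32, dec_dim=64)
c1, a1 = attn(torch.zeros(4, 64), torch.randn(4, 8, 32))
```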
The twin neural network algorithm model in step six) is as follows:
Bromley et al. proposed a similarity measurement algorithm based on convolutional neural networks in 1993. The algorithm consists of two identical convolutional neural networks; an input image is mapped into a new space by the convolutional network to form a new feature vector, and the similarity of two inputs is judged by comparing the distance between their feature vectors. Because the pair of input images shares the weights in this network structure, the feature extraction parts are effectively "connected", hence the name twin (Siamese) neural network.
A simple twin neural network contains two sub-networks with shared weights. They receive the inputs X_1 and X_2 respectively, convert them after feature extraction into the feature vectors G_W(X_1) and G_W(X_2), and the distance D_W between the two output vectors is computed with some distance metric. The distance metric can be another neural network or a hand-designed measure.
In a twin neural network, a contrastive loss function is generally used to make the distance between similar samples as small as possible and the distance between dissimilar samples as large as possible. The loss function is

L(W, (Y, X_1, X_2)) = \frac{1}{2N} \sum_{n=1}^{N} [ Y D_W^2 + (1 - Y) \max(m - D_W, 0)^2 ]

where Y indicates whether the two samples are similar (Y = 1 means similar or matched, Y = 0 means not matched), m is a set threshold and N is the number of samples. The two cases of the loss behave as follows: when the samples are similar, the distance D_W between the output vectors tends to 0 during the training iterations in order to minimize the loss value; likewise, when the samples are dissimilar, the distance D_W between the output vectors tends to m.
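A minimal PyTorch sketch of this contrastive loss is given below; the embedding dimension, batch size and the Euclidean distance used for D_W are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, y, m=1.0):
    """Contrastive loss for a twin network: y = 1 for matched pairs,
    y = 0 for non-matched pairs; m is the margin threshold."""
    d = F.pairwise_distance(z1, z2)                       # D_W between the two embeddings
    loss = y * d.pow(2) + (1 - y) * torch.clamp(m - d, min=0).pow(2)
    return loss.mean()

# Example with random embeddings standing in for the twin sub-network outputs
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
y = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(z1, z2, y))
```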
In the twin convolutional network of Fig. 6, the inputs X_1 and X_2 are received by the two sub-networks, converted after feature extraction into the feature vectors G_W(X_1) and G_W(X_2), and the distance D_W between the two output vectors is measured with the contrastive loss function, so that the distance between similar samples is reduced as much as possible and the distance between dissimilar samples is increased as much as possible. Fig. 7 illustrates how the fusion model searches for and finally screens out the optimal EEG time-frequency band, and Fig. 8 shows the fitness curves obtained by iterative screening for different users under the three emotion modules; the identity information of different users can be distinguished based on these fitness curves.
Embodiment two:
An identification system based on an emotion EEG feature fusion optimization mechanism, which can implement the identification method based on the emotion EEG feature fusion optimization mechanism of embodiment one, comprises:
a signal acquisition module, used for acquiring emotion EEG signals from a plurality of channels;
a preprocessing module, used for preprocessing the emotion EEG signals;
a feature recognition module, used for extracting feature values from the preprocessed emotion EEG signals;
an analysis module, used for analyzing the feature-value patterns with a particle swarm optimization algorithm to obtain, for each user, the emotion EEG module features with the highest activation degree;
a screening module, used for building an attention-based RNN model, iterating with the fitness function of the particle swarm optimization algorithm, building a twin neural network algorithm model, and screening the EEG frequency-band features with its loss function;
a dividing module, used for fusing and segmenting the obtained EEG frequency-band features and the emotion EEG module features and then dividing them into a training set and a test set;
a training module, used for inputting the training set into the attention-based RNN model and the twin neural network algorithm model respectively for training;
a prediction module, used for inputting the test set into the trained attention-based RNN model and twin neural network algorithm model for prediction to obtain prediction results;
and a judging module, used for discriminating the identities of different users based on the prediction results.
Embodiment III:
The embodiment of the invention also provides an identity recognition device based on the emotion EEG feature fusion optimization mechanism, which can implement the identification method based on the emotion EEG feature fusion optimization mechanism of embodiment one and comprises a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is configured to operate according to the instructions to perform the steps of the following method:
acquiring emotion EEG signals from a plurality of channels;
preprocessing the emotion EEG signals;
extracting feature values from the preprocessed emotion EEG signals;
analyzing the feature-value patterns with a particle swarm optimization algorithm to obtain, for each user, the emotion EEG module features with the highest activation degree;
building an attention-based RNN model and iterating with the fitness function of the particle swarm optimization algorithm; building a twin neural network algorithm model and screening the EEG frequency-band features with its loss function;
fusing and segmenting the obtained EEG frequency-band features and the emotion EEG module features, and then dividing them into a training set and a test set;
inputting the training set into the attention-based RNN model and the twin neural network algorithm model respectively for training;
inputting the test set into the trained attention-based RNN model and twin neural network algorithm model for prediction to obtain prediction results;
and discriminating the identities of different users based on the prediction results.
Embodiment four:
The embodiment of the invention also provides a computer-readable storage medium that can implement the identification method based on the emotion EEG feature fusion optimization mechanism of embodiment one; a computer program is stored on the storage medium, and when the program is executed by a processor it implements the following steps:
acquiring emotion EEG signals from a plurality of channels;
preprocessing the emotion EEG signals;
extracting feature values from the preprocessed emotion EEG signals;
analyzing the feature-value patterns with a particle swarm optimization algorithm to obtain, for each user, the emotion EEG module features with the highest activation degree;
building an attention-based RNN model and iterating with the fitness function of the particle swarm optimization algorithm; building a twin neural network algorithm model and screening the EEG frequency-band features with its loss function;
fusing and segmenting the obtained EEG frequency-band features and the emotion EEG module features, and then dividing them into a training set and a test set;
inputting the training set into the attention-based RNN model and the twin neural network algorithm model respectively for training;
inputting the test set into the trained attention-based RNN model and twin neural network algorithm model for prediction to obtain prediction results;
and discriminating the identities of different users based on the prediction results.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.
Claims (8)
1. An identification method based on an emotion electroencephalogram (EEG) feature fusion optimization mechanism, characterized by comprising the following steps:
acquiring emotion EEG signals from a plurality of channels;
preprocessing the emotion EEG signals;
extracting feature values from the preprocessed emotion EEG signals;
analyzing the feature-value patterns with a particle swarm optimization algorithm to obtain, for each user, the emotion EEG module features with the highest activation degree;
building an attention-based RNN model and iterating with the fitness function of the particle swarm optimization algorithm; building a twin neural network algorithm model and screening the EEG frequency-band features with its loss function;
fusing and segmenting the obtained EEG frequency-band features and the emotion EEG module features, and then dividing them into a training set and a test set;
inputting the training set into the attention-based RNN model and the twin neural network algorithm model respectively for training;
inputting the test set into the trained attention-based RNN model and twin neural network algorithm model for prediction to obtain prediction results;
discriminating the identities of different users based on the prediction results;
wherein the velocity update formula in the particle swarm optimization algorithm is

v_{ij}(k+1) = w(k) v_{ij}(k) + c_1 rand(0, a_1) (p_{ij}(k) - x_{ij}(k)) + c_2 rand(0, a_2) (g_j(k) - x_{ij}(k))

where i = 1, 2, …, N, j = 1, 2, 3, 4, k is the number of algorithm iterations, v_{ij}(k) is the velocity vector at the k-th iteration, w(k) is a non-negative inertia weight factor, c_1 and c_2 are non-negative acceleration constants, rand(0, a_1) and rand(0, a_2) are uniformly distributed random numbers, p_{ij}(k) is the best solution position found so far by the i-th particle, x_{ij}(k) is the current position of the particle in the search space, g_j(k) is the position of the best solution found so far in the whole search space, and a_1, a_2 are control parameters;
the position update formula is

x_{ij}(k+1) = x_{ij}(k) + v_{ij}(k+1)

the fitness function of a particle is defined in terms of the frequency bandwidth F and the recognition error rate r/R of the attention-based RNN classification model, where r is the number of test-set samples that the attention-based RNN classification model recognizes incorrectly and R is the total number of test-set samples;
the output vector sequence of the attention mechanism in the attention-based RNN model is expressed as

c_i = \sum_j a_{ij} h_j

where c_i is the output vector sequence, h_j is the attention-mechanism input vector and a_{ij} is the attention-mechanism weight, computed as

a_{ij} = \frac{\exp(e_{ij})}{\sum_k \exp(e_{ik})}, \qquad e_{ij} = fc(s_{i-1}, h_j)

where e_{ij} is the network output layer, exp(e_{ij}) is the exponential of e_{ij} to the base of the natural constant e, s_{i-1} is the input vector of the attention mechanism, and fc(s_{i-1}, h_j) is an additional fully connected shallow network.
2. The identification method based on the emotion EEG feature fusion optimization mechanism according to claim 1, wherein the preprocessing comprises filtering, downsampling, power-frequency interference removal, artifact removal and re-referencing.
3. The identification method based on the emotion EEG feature fusion optimization mechanism according to claim 1, wherein extracting feature values from the preprocessed emotion EEG signals comprises: performing a four-level wavelet packet decomposition of the preprocessed emotion EEG signals and extracting the energy entropy of the five EEG rhythms (α, β, δ, θ and γ waves) as feature values, the wavelet packet decomposition formula being

f(t) = \sum_{j=0}^{2^i - 1} f_{i,j}(t_j)

where f(t) is the initial signal and f_{i,j}(t_j) is the reconstructed signal of node (i, j) on the i-th layer, j = 0, 1, 2, …, 2^i - 1; the energy spectrum obtained from the wavelet packet decomposition of the initial signal is expressed as

E_{i,j}(t_j) = \sum_{k=1}^{m} |x_{j,k}|^2

where E_{i,j}(t_j) is the band energy of node (i, j) on the i-th layer, x_{j,k} is the amplitude at the k-th discrete point of the reconstructed signal f_{i,j}(t_j), and m is the number of sampling points of the signal; the energy entropy over the signal frequency bands, denoted W, is then obtained, the energy entropy of the j-th node being

P_j = \frac{E_j}{E}, \qquad W = -\sum_j P_j \ln P_j

where P_j is the normalized energy of the j-th node, E_j is the band energy of the j-th node, and E is the total energy of the wavelet packet decomposition.
4. The identification method based on the emotion electroencephalogram feature fusion optimization mechanism according to claim 1, characterized in that the loss function in the twin neural network algorithm model is:

L(W, (Y, X_1, X_2)) = (1/2N) Σ_{n=1}^{N} [ Y·D_W² + (1 − Y)·max(m − D_W, 0)² ]

where L(W, (Y, X_1, X_2)) is the loss function, N is the number of samples, D_W is the distance between the two output vectors, Y indicates whether the two samples are similar (Y = 1 means the two samples are similar or matched, Y = 0 means they do not match), and m is a set threshold (margin).
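A hedged PyTorch sketch of this contrastive-style loss is given below, assuming Y = 1 marks matched pairs and m is the margin; the 1/2 scaling follows the common formulation and is an assumption rather than a quotation of the claim.

```python
import torch

def contrastive_loss(d_w, y, margin=1.0):
    """d_w: tensor of distances D_W between the two twin-network outputs, shape (N,)
    y  : tensor of labels, 1 = same identity / matched, 0 = not matched."""
    similar_term = y * d_w.pow(2)                                        # pull matched pairs together
    dissimilar_term = (1 - y) * torch.clamp(margin - d_w, min=0).pow(2)  # push unmatched pairs apart up to m
    return 0.5 * torch.mean(similar_term + dissimilar_term)
```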
5. The identification method based on the emotion electroencephalogram feature fusion optimization mechanism according to claim 1, characterized in that acquiring the emotion electroencephalogram signals of a plurality of channels comprises: recording the leads FP1, FP2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, FZ, CZ and PZ according to the international 10-20 electrode placement standard, using a binaural vertical connection method with M1 and M2 selected as reference electrodes; the sampling frequency is 512 Hz, and the electroencephalogram signals are acquired with a Neuroscan 64 device and subjected to amplification and analog-to-digital conversion.
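For reference, the acquisition parameters of this claim can be captured in a small configuration structure such as the following; the dictionary layout itself is purely illustrative and not part of the claim.

```python
ACQUISITION = {
    "montage": "international 10-20",
    "channels": ["FP1", "FP2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2",
                 "F7", "F8", "T3", "T4", "T5", "T6", "FZ", "CZ", "PZ"],
    "reference": ["M1", "M2"],       # binaural reference electrodes
    "sampling_rate_hz": 512,
    "amplifier": "Neuroscan 64",     # amplification + analog-to-digital conversion
}
```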
6. An identification system based on the emotion electroencephalogram feature fusion optimization mechanism, characterized by comprising:
a signal acquisition module, configured to acquire emotion electroencephalogram signals of a plurality of channels;
a preprocessing module, configured to preprocess the emotion electroencephalogram signals;
a feature recognition module, configured to extract feature values from the preprocessed emotion electroencephalogram signals;
an analysis module, configured to analyze the feature-value patterns with a particle swarm optimization algorithm to obtain, for different users, the emotion electroencephalogram module features with the maximum activation degree;
a screening module, configured to construct an attention-mechanism-based RNN model, iterate with the fitness function of the particle swarm optimization algorithm, construct a twin neural network algorithm model, and screen electroencephalogram frequency-band features with its loss function;
a division module, configured to fuse and segment the obtained electroencephalogram frequency-band features and emotion electroencephalogram module features, and then divide them into a training set and a test set;
a training module, configured to input the training set into the attention-mechanism-based RNN model and into the twin neural network algorithm model, respectively, for training;
a prediction module, configured to input the test set into the trained attention-mechanism-based RNN model and twin neural network algorithm model for prediction to obtain a prediction result;
a judging module, configured to judge the identities of different users based on the prediction result;
The velocity update formula in the particle swarm optimization algorithm is:

v_ij(k+1) = w(k)·v_ij(k) + c1·rand(0, a1)·(p_ij(k) − x_ij(k)) + c2·rand(0, a2)·(p_gj(k) − x_ij(k))

where i = 1, 2, …, N, j = 1, 2, 3, 4, k is the number of algorithm iterations, v_ij(k) is the velocity vector at the k-th iteration, w(k) is a non-negative inertia weight factor, c1 and c2 are non-negative acceleration constants, rand(0, a1) and rand(0, a2) are uniformly distributed random numbers with control parameters a1 and a2, p_ij(k) denotes the position of the best solution searched so far by the i-th particle, x_ij(k) denotes the current position of the particle in the search space, and p_gj(k) denotes the position of the best solution searched so far in the whole search space;

The position update formula is:

x_ij(k+1) = x_ij(k) + v_ij(k+1)

The fitness function of the particles is defined in terms of the frequency bandwidth F, the number r of test set samples misidentified by the attention-mechanism-based RNN classification model, and the total number R of test set samples;

The output vector sequence of the attention mechanism in the attention-mechanism-based RNN model is expressed as:

c_i = Σ_j a_ij·h_j

where c_i is the output vector sequence, h_j is the attention mechanism input vector, and a_ij is the attention mechanism weight, calculated as:

a_ij = exp(e_ij) / Σ_k exp(e_ik)

e_ij = fc(s_{i−1}, h_j)

where e_ij is the network output layer score, exp(e_ij) is the exponential function of the natural constant e raised to the power e_ij, s_{i−1} is the input vector of the attention mechanism, and fc(s_{i−1}, h_j) is an additional fully connected shallow network.
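A minimal PyTorch sketch of the attention weighting described above follows: scores e_ij come from a small fully connected network over (s_{i−1}, h_j), the weights a_ij are a softmax over those scores, and c_i is the weighted sum of the inputs h_j. The hidden sizes and the tanh nonlinearity are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """e_ij = fc(s_{i-1}, h_j); a_ij = softmax_j(e_ij); c_i = sum_j a_ij * h_j."""

    def __init__(self, state_dim, hidden_dim, attn_dim=64):
        super().__init__()
        self.fc = nn.Sequential(                      # the "additional fully connected shallow network"
            nn.Linear(state_dim + hidden_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )

    def forward(self, s_prev, h):
        # s_prev: (batch, state_dim)   previous state s_{i-1}
        # h     : (batch, T, hidden_dim) attention inputs h_j
        T = h.size(1)
        s_rep = s_prev.unsqueeze(1).expand(-1, T, -1)            # repeat s_{i-1} for every j
        e = self.fc(torch.cat([s_rep, h], dim=-1)).squeeze(-1)   # e_ij, shape (batch, T)
        a = F.softmax(e, dim=-1)                                 # a_ij = exp(e_ij) / sum_k exp(e_ik)
        c = torch.bmm(a.unsqueeze(1), h).squeeze(1)              # c_i = sum_j a_ij * h_j
        return c, a
```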
7. An identification device based on the emotion electroencephalogram feature fusion optimization mechanism, characterized by comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor being operative according to the instructions to perform the steps of the method according to any one of claims 1 to 5.
8. A computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor realizes the steps of the method according to any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210142362.9A CN114638253B (en) | 2022-02-16 | 2022-02-16 | Identification system and method based on emotion electroencephalogram feature fusion optimization mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114638253A CN114638253A (en) | 2022-06-17 |
CN114638253B (en) | 2024-07-05
Family
ID=81946074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210142362.9A Active CN114638253B (en) | 2022-02-16 | 2022-02-16 | Identification system and method based on emotion electroencephalogram feature fusion optimization mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114638253B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117562542B (en) * | 2024-01-17 | 2024-04-30 | 小舟科技有限公司 | Emotion recognition method based on electroencephalogram signals, computer equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113243924A (en) * | 2021-05-19 | 2021-08-13 | 成都信息工程大学 | Identity recognition method based on electroencephalogram signal channel attention convolution neural network |
CN113627518A (en) * | 2021-08-07 | 2021-11-09 | 福州大学 | Method for realizing multichannel convolution-recurrent neural network electroencephalogram emotion recognition model by utilizing transfer learning |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109740544B (en) * | 2019-01-07 | 2021-09-07 | 哈尔滨工业大学(深圳) | Auditory attention state arousal degree identification method and device and storage medium |
CN113095428B (en) * | 2021-04-23 | 2023-09-19 | 西安交通大学 | Video emotion classification method and system integrating electroencephalogram and stimulus information |
Also Published As
Publication number | Publication date |
---|---|
CN114638253A (en) | 2022-06-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||