CN114578963A - An EEG Identity Recognition Method Based on Feature Visualization and Multimodal Fusion - Google Patents

An EEG Identity Recognition Method Based on Feature Visualization and Multimodal Fusion

Info

Publication number
CN114578963A
Authority
CN
China
Prior art keywords
feature
eeg
features
frequency
frequency band
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210167636.XA
Other languages
Chinese (zh)
Other versions
CN114578963B (en)
Inventor
王喆
黄楠
李冬冬
杨海
杜文莉
张静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China University of Science and Technology
Original Assignee
East China University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by East China University of Science and Technology filed Critical East China University of Science and Technology
Priority to CN202210167636.XA priority Critical patent/CN114578963B/en
Publication of CN114578963A publication Critical patent/CN114578963A/en
Application granted granted Critical
Publication of CN114578963B publication Critical patent/CN114578963B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/14 Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F17/141 Discrete Fourier transforms
    • G06F17/142 Fast Fourier transforms, e.g. using a Cooley-Tukey type algorithm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/011 Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02 Preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Psychiatry (AREA)
  • Veterinary Medicine (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computational Linguistics (AREA)
  • Discrete Mathematics (AREA)
  • Dermatology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Fuzzy Systems (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Signal Processing (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)

Abstract

The invention discloses an EEG identity recognition method based on feature visualization and multimodal fusion, comprising the following steps: first, the motor imagery EEG signals are preprocessed; then, for each frequency band, the de-meaned band features are mapped onto a brain map according to the electrode positions on the human cerebral cortex and interpolated with a biharmonic spline interpolation method to generate a visualized brain topographic map; next, deep networks extract deep information from the EEG time-frequency features and from the EEG visualization images, and the two are fused in the same dimension to obtain a multimodal deep feature; finally, an effective deep feature extractor and multimodal classifier are trained for each frequency band, and the band model with the best performance is used as the identity recognition model of the system. The visualized EEG feature representation reflects channel position information, can mine latent EEG information at electrode positions that were not recorded, and can exploit the deep complementary relationship between image features and traditional vector features.

Description

An EEG Identity Recognition Method Based on Feature Visualization and Multimodal Fusion

Technical field

The present invention relates to the technical field of EEG-based identity recognition, and in particular to an EEG identity recognition method that performs biometric recognition of EEG signals based on feature visualization and multimodal fusion.

Background

Identity recognition is widely needed and applied in many aspects of daily life, such as surveillance and security, creating a growing demand for more reliable authentication technologies. Authentication in the Internet-of-Things era includes password-based and token-based techniques, which are widely used in criminal investigation, banking, certificate security, and access control. With the development of machine learning, biometric technologies such as fingerprint, voiceprint, and face recognition have matured. However, the personal information used by these traditional methods can be stolen, copied, synthesized, or forged, leading to privacy leakage and system insecurity. To build automated user authentication systems, especially where high security is required, cognitive biometrics based on physiological signals such as the electroencephalogram (EEG) has attracted increasing attention.

EEG is a non-stationary, nonlinear random signal generated by the cerebral cortex. Because of its universality, portability, collectability, uniqueness, and non-invasiveness, it is regarded as one of the most promising and reliable biological signals for biometric recognition. Compared with traditional authentication technologies, EEG offers strong anti-counterfeiting and anti-theft properties: EEG signals arise from the conscious activity of an individual, must be captured through that consciousness, and users cannot deliberately disclose the involuntary signal information. Applying EEG to identity recognition involves stimulus presentation, EEG acquisition, preprocessing, feature extraction, and feature classification. In this pipeline, effective feature extraction and a suitable classifier are the keys that determine recognition performance. EEG biometrics usually focuses on five frequency bands below 50 Hz: Delta (0.5-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (13-30 Hz), and Gamma (>31 Hz). Representative feature extraction methods can be divided into time-domain analysis (amplitude, mean, variance, etc.), frequency-domain analysis (power spectrum analysis, coherence analysis, etc.), time-frequency analysis (wavelet transform, empirical mode decomposition, etc.), and spatial analysis (common spatial patterns, independent component analysis, etc.). To improve accuracy, some studies combine feature extraction methods from different domains, so that multidimensional features characterize the EEG from multiple perspectives. In recent years, some studies have used the functional connectivity between electrode channels and brain regions as an EEG biometric, making subject identification more discriminative and more robust. For classification, classifiers can be divided into shallow and deep ones. Shallow methods preprocess the raw EEG and extract features (frequency-domain, time-domain, and spatial filtering) to enhance signal quality, and the enhanced signals are used to train the model; representative shallow classifiers include linear discriminant analysis, support vector machines, and hidden Markov models. Deep methods train end-to-end directly on the raw signal without hand-crafted feature extraction. Because of the temporal, spectral, and spatial properties of EEG, convolutional neural networks and recurrent neural networks are commonly used as deep classifiers for biometric recognition.

However, traditional EEG features still have limitations. First, the matrix data of traditional features mostly capture numerical information rather than regional information and electrode position information; such features lack the connectivity of functional brain regions, which is especially important for biometrics. In addition, the more electrodes are recorded, the more comprehensive the information, but the higher the cost of the acquisition equipment; the collected EEG is therefore limited by the number of electrodes of the device and lacks global information. Studying EEG features that carry brain-region information and latent electrode-position information thus has practical significance and value for improving the accuracy of biometric systems.

Summary of the invention

To solve the above problems, the present invention provides an EEG identity recognition method based on feature visualization and multimodal fusion. It constructs a new visualized EEG feature representation that characterizes brain regions and electrode distribution, reflecting channel information while mining the latent information of electrodes that were not recorded. On this basis, a multimodal model is built: by fusing EEG vector features with the visualized image features, the feature dimension available to the classifier is increased and the deep complementary relationship between image features and traditional EEG vector features is exploited, improving overall classification performance.

1. An EEG identity recognition method based on feature visualization and multimodal fusion, characterized by comprising the following steps:

S101: Preprocess the collected motor imagery EEG signals, segment the preprocessed EEG data into consecutive, non-overlapping samples according to a time window, extract time-frequency features from them, divide the frequency components of the time-frequency features into 5 frequency bands according to their frequency distribution, and compute statistical features of each band's frequency components as the band features;

S102: De-mean the band features obtained in step S101; for each frequency band, map them onto a brain map according to the electrode channel positions on the human cerebral cortex, and interpolate with the biharmonic spline interpolation method to generate a visualized brain topographic map;

S103: Use neural networks to extract deep information from the EEG time-frequency features of step S101 and from the EEG visualization images of step S102: learn the EEG vector features and the EEG visual features with separate deep networks, use a normalization layer instead of the classification layer to generate smooth features for the two modalities, and fuse them in the same dimension as the multimodal deep feature;

S104: Train the model on the multimodal deep features obtained in step S103. Identity recognition: feed the EEG data sample to be identified into the designed feature extractor and the trained multimodal multi-class network, classify the fused deep feature of the EEG vector feature and the generated visual feature, and output the user label corresponding to the sample.

2. The EEG identity recognition method based on feature visualization and multimodal fusion according to claim 1, characterized in that, in step S101: the collected EEG signals are first preprocessed by re-referencing to the average electrode, band-pass filtering to limit the usable frequency range to 0-42 Hz, and baseline correction; the preprocessed EEG is segmented with a 5-second time window into consecutive, non-overlapping independent samples, which are treated as different observations under the same external stimulus.

Next, the power spectral density of each windowed, independent EEG sample is extracted with the short-time Fourier transform, so the frequency-domain features are frequency components. For the m-th independent sample with frequency component n and time window τ, X[m,n] = [x[m,n](1), …, x[m,n](t), …, x[m,n](τ)], its power spectral density can be expressed as

PSD(X[m,n]) = |STFT(τ,s)(X[m,n])|²

where STFT(τ,s)(X) denotes the short-time Fourier transform with time window τ and hop length s, and H(·) denotes the window function of length s. The short-time Fourier transform uses a Hamming window with 50% overlap whose moving window length is a fraction of the sampling frequency (the exact value is given only as an image in the original).

The frequency components after feature extraction cover 64 electrodes and the 0-42 Hz range, and are divided into 5 frequency bands according to Delta (0.5-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (13-30 Hz), and Gamma (31-42 Hz). For each band, the channel-wise average of its frequency components is computed as the statistical feature; for channel i, the statistical feature d(i) can be defined as

d(i) = (1 / (f2 - f1 + 1)) · Σ f(i, freq), summed over freq = f1, …, f2

where f(i, freq) denotes the STFT feature of channel i at the freq-th frequency component, and f1, f2 denote the frequency range of the band.

3. The EEG identity recognition method based on feature visualization and multimodal fusion according to claim 1, characterized in that, in step S102: the statistical feature of each frequency band, de-meaned over all channels, is

d(i) - (1/N) · Σ d(j), summed over j = 1, …, N

where d(i) denotes the average value of electrode i over the frequency range f1 to f2 Hz, and N denotes the number of electrodes.

The de-meaned EEG features of each frequency band are mapped one-to-one onto the brain map according to the electrode positions of the 10-10 system. For each band, a Green function is used to perform minimum-curvature interpolation of the irregularly spaced electrode data points; the Green function of the EEG feature data at electrodes i and j is expressed as

g(xi, xj) = |xi - xj|² (ln|xi - xj| - 1)

For the surface s(xi) centered at electrode i, an N×N linear system over the Green functions of the irregularly spaced electrodes i and j is solved to obtain the weights of the N electrodes of the cerebral cortex,

s(xi) = Σ ωj · g(xi, xj), summed over j = 1, …, N, for i = 1, …, N

Using the weights ωj of the N cortical electrodes and the feature data xj of the known electrodes (1 ≤ j ≤ N), the feature data at unknown positions are obtained; the surface feature of the cerebral cortex is defined as

s(x) = Σ ωj · g(x, xj), summed over j = 1, …, N

Interpolating with the biharmonic spline method described above yields feature data at both known and unknown positions of the cerebral cortex; the cortical feature data are visualized to generate an RGB brain topographic map.

4. The EEG identity recognition method based on feature visualization and multimodal fusion according to claim 1, characterized in that, in step S103: a 3D-CNN is used to extract deep information from the EEG power spectral density features of S101, and a BatchNorm layer replaces the final Softmax layer of the 3D-CNN to obtain the smooth EEG vector feature feature_vector; ResNet-18 is used to extract deep information from the brain topographic map generated by the interpolation of S102, and a BatchNorm layer replaces the final Softmax layer of ResNet-18 to obtain the smooth EEG image feature feature_image. To keep a consistent dimensionality, the deep smooth features of the two modalities are fused in the same dimension to obtain the deep fusion feature,

feature_combined = [feature_vector, feature_image]

and Softmax classification is performed on the extracted and fused multimodal deep feature feature_combined.

5. The EEG identity recognition method based on feature visualization and multimodal fusion according to claim 1, characterized in that, in step S104: the deep fusion feature and multimodal classifier designed in S103 are trained iteratively until the model converges, yielding an effective deep feature extractor and multimodal classifier for each frequency band, and the band model with the best performance is used as the classification criterion; the deep feature extractor and multimodal classifier are then used to preprocess the EEG data sample to be identified, extract its vector feature and visual feature, and classify it, determining the user label corresponding to that EEG sample.

Beneficial effects of the invention:

1. The EEG identity recognition method based on feature visualization and multimodal fusion converts the sequential EEG signal into time-frequency features that reflect both temporal and frequency resolution, divides the time-frequency features into the 5 classical frequency bands, and studies each band separately, so that the influence of EEG signals in different frequency ranges on biometric performance is considered comprehensively and effectively. A visualized brain topographic map is generated for each band, reflecting the similarity of sample images within the same band and the specificity of sample images across bands, which further demonstrates the differences in physiological signals carried by different bands and effectively distinguishes their influence on biometric performance. Selecting the best-performing band classifier removes the influence of weakly correlated band signals and enhances the features of strongly correlated ones, thereby improving identification accuracy.

2. Unlike traditional numerical matrix features, the method uses interpolated, visualized brain topographic maps as physiological features for biometric recognition. On the one hand, the visual features are generated from the electrode channels of the cerebral cortex, so while representing the numerical data they also directly and accurately reflect the electrode positions and regional information of the brain, capturing the connectivity of functional brain regions. On the other hand, under the limit on the number of electrodes of the acquisition device, the visual features use interpolation to generate feature data for electrodes that were not recorded, mining their latent information and providing global information. The visual features thus compensate for the limitations of traditional EEG features in these two respects while maintaining a high recognition rate.

3. The method combines traditional EEG vector features with the newly designed visual features, increasing the feature dimension available for classification and mining the complementary relationship between image features and traditional EEG vector features; the fused features and the multimodal model improve the overall identification performance.

Brief description of the drawings

To explain the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below.

Figure 1 is a schematic diagram of the EEG identity recognition method based on feature visualization and multimodal fusion according to an embodiment of the present invention.

Figure 2 is a schematic diagram of the EEG feature visualization process according to an embodiment of the present invention.

Figure 3 is a schematic diagram of the interpolation method used to generate the brain topographic map according to an embodiment of the present invention.

Figure 4 is the overall architecture of the EEG identity recognition method based on feature visualization and multimodal fusion according to an embodiment of the present invention.

Figure 5 is the model structure for multimodal feature fusion according to an embodiment of the present invention.

Detailed description of the embodiments

The present invention is described in detail below with reference to the drawings and specific embodiments. The method consists of four parts:

Part 1: EEG signal preprocessing and feature extraction

Part 2: EEG feature visualization

Part 3: Multimodal feature fusion

Part 4: EEG-based identity recognition

Based on these four parts, an EEG identity recognition method based on feature visualization and multimodal fusion according to an embodiment of the present invention, as shown in Figure 1, includes the following steps:

S101: Preprocess the collected motor imagery EEG signals, segment the preprocessed EEG data into consecutive, non-overlapping samples according to a time window, extract time-frequency features from them, divide the frequency components of the time-frequency features into 5 frequency bands according to their frequency distribution, and compute statistical features of each band's frequency components as the band features;

The acquired EEG signals are read, and the channels are re-positioned according to the electrode positions of the international standard 10-10 system; the re-positioned EEG data are re-referenced to the average electrode; band-pass filtering limits the usable frequency range to 0-42 Hz; baseline correction removes the average baseline value to prevent baseline differences caused by low-frequency drift or artifacts in the preprocessed EEG. For an EEG signal of duration time_signal, a time window of length = 5 seconds segments it into M consecutive, non-overlapping independent samples,

input = [input_0, input_length, input_{2·length}, …, input_{(M-1)·length}]

where (M-1)·length ≤ time_signal < M·length. These M consecutive, non-overlapping preprocessed EEG recordings are treated as different observations under the same external stimulus, enlarging the EEG dataset.
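As a concrete illustration of this segmentation step, the sketch below assumes the preprocessed recording is a NumPy array of shape (channels, samples) sampled at 160 Hz; function and variable names are illustrative, not part of the patent.

```python
import numpy as np

def segment_eeg(eeg, fs=160, win_sec=5):
    """Split a preprocessed recording (channels x samples) into M consecutive,
    non-overlapping windows of win_sec seconds; the incomplete tail is dropped."""
    win = fs * win_sec                                   # samples per window
    m = eeg.shape[1] // win                              # number of windows M
    return [eeg[:, k * win:(k + 1) * win] for k in range(m)]

# e.g. a 2-minute, 64-channel run at 160 Hz yields 24 samples of shape (64, 800)
```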

For the m-th (1 ≤ m ≤ M) independent sample input_{(m-1)·length}, and for the EEG signal X[m,n] = [x[m,n](1), …, x[m,n](t), …, x[m,n](τ)] with frequency components in the range 0 ≤ n ≤ 42 Hz, the power spectral density is extracted with the short-time Fourier transform using a Hamming window with 50% overlap (the window length, a fraction of the sampling frequency, is given only as an image in the original):

PSD(X[m,n]) = |STFT(τ,s)(X[m,n])|²
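A minimal sketch of this PSD extraction using SciPy's STFT; the window length `nperseg` is an assumption (chosen here to give roughly 1 Hz bins), since the exact fraction of the sampling frequency is only shown as an image in the original.

```python
import numpy as np
from scipy.signal import stft

def band_limited_psd(sample, fs=160, nperseg=None):
    """Power spectral density of one EEG sample (channels x samples) via the
    short-time Fourier transform, restricted to the 0-42 Hz range."""
    nperseg = nperseg or fs                 # assumption: 1 s window (approx. 1 Hz bins)
    f, t, Z = stft(sample, fs=fs, window='hamming',
                   nperseg=nperseg, noverlap=nperseg // 2)
    psd = (np.abs(Z) ** 2).mean(axis=-1)    # average the spectrogram over time
    keep = f <= 42
    return f[keep], psd[:, keep]            # shapes: (n_bins,), (channels, n_bins)
```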

The extracted power spectral density is an N×42 matrix, representing N channels and 42 frequency components. For each electrode channel, the average of the EEG time-frequency features is computed according to the frequency distributions of Delta (0.5-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (13-30 Hz), and Gamma (31-42 Hz); a code sketch of this band averaging follows the per-band listing below:

The Delta-band EEG feature is a matrix of size N×4, where 1 ≤ i ≤ N indexes the channel; the Delta-band feature average is a vector of size N×1;

The Theta-band EEG feature is a matrix of size N×4, where 1 ≤ i ≤ N indexes the channel; the Theta-band feature average is a vector of size N×1;

and so on for the remaining bands. This yields the EEG features of the 5 frequency bands over the N electrode channels: the Delta-band vector feature feature_vector has size N×4, the Theta-band N×4, the Alpha-band N×5, the Beta-band N×18, and the Gamma-band N×10. Averaging over the frequency components then gives an N×1 vector per band, which is subsequently de-meaned in S102.
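A sketch of the band splitting and channel averaging described above, assuming roughly 1 Hz frequency bins from the previous step; the band edges follow the ranges quoted in the text, so exact column counts may differ slightly from those stated.

```python
import numpy as np

BANDS = {                                   # ranges quoted in the text (Hz)
    'delta': (0.5, 3), 'theta': (4, 7), 'alpha': (8, 12),
    'beta': (13, 30), 'gamma': (31, 42),
}

def band_features(freqs, psd):
    """Split the per-channel PSD (channels x n_bins) into the five classical
    bands and compute the channel-wise mean d(i) for each band."""
    feats, means = {}, {}
    for name, (f1, f2) in BANDS.items():
        cols = (freqs >= f1) & (freqs <= f2)
        feats[name] = psd[:, cols]              # e.g. beta keeps the 13-30 Hz bins
        means[name] = feats[name].mean(axis=1)  # d(i): one value per channel
    return feats, means
```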

S102: De-mean the band features obtained in step S101; for each frequency band, map them onto a brain map according to the electrode channel positions on the human cerebral cortex, and interpolate with the biharmonic spline interpolation method to generate a visualized brain topographic map;

As shown in Figure 2, the EEG features of each of the 5 frequency bands are first de-meaned over all channels, so that the EEG feature data of each band are centered at 0. Concretely, based on all electrode channels of the cerebral cortex, the de-meaned feature of each channel is taken as its feature vector:

Let the channel mean of the Delta band be the average of d(i) over the N channels; subtracting it from each channel's value gives the de-meaned Delta-band feature, of size N×1;

Let the channel mean of the Theta band be the average of d(i) over the N channels; subtracting it from each channel's value gives the de-meaned Theta-band feature, of size N×1;

Let the channel mean of the Alpha band be the average of d(i) over the N channels; subtracting it from each channel's value gives the de-meaned Alpha-band feature, of size N×1;

Let the channel mean of the Beta band be the average of d(i) over the N channels; subtracting it from each channel's value gives the de-meaned Beta-band feature, of size N×1;

Let the channel mean of the Gamma band be the average of d(i) over the N channels; subtracting it from each channel's value gives the de-meaned Gamma-band feature, of size N×1.

This yields de-meaned features for the 5 frequency bands over the N electrode channels, each an N×1 vector.
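In code, the de-meaning of S102 reduces to subtracting each band's across-channel mean from the output of the previous sketch, for example:

```python
def demean(means):
    """Center each band's channel means to zero across the N electrodes (S102)."""
    return {name: d - d.mean() for name, d in means.items()}
```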

Next, for each frequency band, the de-meaned N×1 vector is mapped onto the N electrode positions of the international standard 10-10 system. As shown in Figure 3, based on the biharmonic spline interpolation method, the feature data at unknown positions are computed from the electrode features at known positions, the cortical data are visualized, and an RGB brain topographic map is generated as a 256×256×3 visual feature feature_image.
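A sketch of the biharmonic-spline (Green-function) interpolation and rendering, assuming 2-D projected electrode coordinates for the 10-10 montage are available and normalized to [-1, 1]; the grid size and colormap are illustrative choices, not specified by the patent.

```python
import numpy as np
import matplotlib.pyplot as plt

def green(r):
    """Biharmonic Green function g(r) = r^2 (ln r - 1), with g(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * (np.log(r[nz]) - 1.0)
    return out

def biharmonic_surface(pos, values, grid=256):
    """pos: (N, 2) electrode coordinates in [-1, 1]^2; values: de-meaned band
    feature of shape (N,). Returns a (grid, grid) interpolated surface."""
    # Solve the N x N system  sum_j w_j g(|x_i - x_j|) = values_i  for the weights.
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    w = np.linalg.solve(green(dist), values)
    # Evaluate s(x) = sum_j w_j g(|x - x_j|) on a regular grid.
    xs = np.linspace(-1, 1, grid)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    dg = np.linalg.norm(pts[:, None, :] - pos[None, :, :], axis=-1)
    return (green(dg) @ w).reshape(grid, grid)

def to_rgb(surface):
    """Render the interpolated surface as a 256x256x3 image via a colormap."""
    rng = surface.max() - surface.min()
    norm = (surface - surface.min()) / (rng + 1e-12)
    return (plt.get_cmap('jet')(norm)[..., :3] * 255).astype(np.uint8)
```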

S103: Use neural networks to extract deep information from the EEG time-frequency features of step S101 and from the EEG visualization images of step S102: learn the EEG vector features and the EEG visual features with separate deep networks, use a normalization layer instead of the classification layer to generate smooth features for the two modalities, and fuse them in the same dimension as the multimodal deep feature;

The invention uses a 3D-CNN to learn the EEG vector features and ResNet-18 to learn the EEG visual features. For each frequency band, deep information is extracted from the EEG vector feature feature_vector of S101 and from the EEG image feature feature_image visualized in S102. The deep features of the vector and the image are then fused in the same dimension to obtain the fused deep feature feature_deep = [deep_feature_vector, deep_feature_image], which serves as the input of the subsequent identification classifier.
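A hedged sketch of the two-branch fusion model: the exact 3D-CNN of Table 1 is only given as an image, so the vector branch below is a simple stand-in, while the image branch follows the stated design (ResNet-18 with the classification layer replaced by a BatchNorm layer); layer sizes other than the two 512-dimensional branch outputs and the 1024-dimensional fused feature are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FusionNet(nn.Module):
    """Vector branch -> 512-d, image branch -> 512-d, concatenate to 1024-d,
    then a linear head whose Softmax output gives the subject label."""
    def __init__(self, n_channels=64, n_freq=18, n_subjects=109):
        super().__init__()
        # Vector branch (stand-in for the patent's 3D-CNN; Table 1 is an image).
        self.vec = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_freq, 512), nn.ReLU(),
            nn.Linear(512, 512),
            nn.BatchNorm1d(512),          # BatchNorm in place of the Softmax layer
        )
        # Image branch: ResNet-18 with its classification layer removed.
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()       # 512-d penultimate features
        self.img = nn.Sequential(backbone, nn.BatchNorm1d(512))
        self.classifier = nn.Linear(1024, n_subjects)

    def forward(self, vec_feat, img_feat):
        # vec_feat: (bs, 64, 18); img_feat: (bs, 3, 256, 256)
        fused = torch.cat([self.vec(vec_feat), self.img(img_feat)], dim=1)
        return self.classifier(fused)     # logits; Softmax applied by the loss
```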

S104: Train the model on the multimodal deep features obtained in step S103. Identity recognition: feed the EEG data sample to be identified into the designed feature extractor and the trained multimodal multi-class network, classify the fused deep feature of the EEG vector feature and the generated visual feature, and output the user label corresponding to the sample.

Dataset:

The training experiments use the large-scale public EEG Motor Movement/Imagery Dataset. In this embodiment, the dataset contains more than 1,500 EEG recordings of 1-2 minutes from 109 healthy subjects, sampled at 160 Hz; each subject performed different movement/imagery tasks while 64-channel EEG was recorded with the BCI2000 system. Each subject completed 14 runs: two 1-minute baseline runs (first with eyes open, second with eyes closed), and three 2-minute runs of each of the following four tasks:

Task 1: A target appears on the left or right side of the screen. The subject opens and closes the corresponding fist until the target disappears, then relaxes.

Task 2: A target appears on the left or right side of the screen. The subject imagines opening and closing the corresponding fist until the target disappears, then relaxes.

Task 3: A target appears at the top or bottom of the screen. The subject opens and closes both fists (if the target is at the top) or both feet (if the target is at the bottom) until the target disappears, then relaxes.

Task 4: A target appears at the top or bottom of the screen. The subject imagines opening and closing both fists (if the target is at the top) or both feet (if the target is at the bottom) until the target disappears, then relaxes.

Eyes-open is abbreviated EO (Eye Open), eyes-closed EC (Eye Close), the physical movement state PHY (Physical), and the motor imagery state IMA (Image).

Experimental design:

To achieve stability across different brain states, the identity recognition model is trained on a cross-task dataset: the EO and EC resting-state data are used for training, and the PHY or IMA movement/imagery data are used for testing. Figure 4 shows the overall architecture of the proposed EEG identity recognition method based on feature visualization and multimodal fusion.
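A sketch of this cross-task split, assuming the usual run layout of the public dataset (runs 1-2 are the EO/EC baselines, odd runs 3-13 the real-movement tasks, even runs 4-14 the imagery tasks); this run-number mapping is an assumption, not stated in the patent text.

```python
BASELINE_RUNS = [1, 2]                   # EO, EC resting state  -> training
PHY_RUNS = [3, 5, 7, 9, 11, 13]          # real-movement runs    -> test (PHY)
IMA_RUNS = [4, 6, 8, 10, 12, 14]         # motor-imagery runs    -> test (IMA)

def cross_task_split(samples_by_run, test_condition='PHY'):
    """samples_by_run: {run_number: list of samples for one subject}.
    Train on the resting-state runs, test on the chosen task runs."""
    test_runs = PHY_RUNS if test_condition == 'PHY' else IMA_RUNS
    train = [s for r in BASELINE_RUNS for s in samples_by_run.get(r, [])]
    test = [s for r in test_runs for s in samples_by_run.get(r, [])]
    return train, test
```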

First, the raw EEG signals are preprocessed according to step S101. Then, EEG features are extracted from the preprocessed signal and visualized according to step S102. Next, the vector features and image features are fused according to step S103. Taking the Beta band as an example:

The invention uses N = 64. First, the CNN structure of Table 1 extracts deep information from the 64×18 vector feature, giving a deep feature vector deep_feature_vector of length 512, where bs denotes the batch size;

Table 1: Detailed parameters of the CNN network structure

[Table 1 is provided as an image in the original and is not reproduced here.]

Next, the ResNet-18 structure of Table 2 extracts deep information from the 256×256×3 EEG visual feature feature_image, giving a deep feature vector deep_feature_image of length 512, where bs denotes the batch size. The structure inside the residual block is shown in Figure 4;

Table 2: Detailed parameters of the ResNet-18 network structure

[Table 2 is provided as an image in the original and is not reproduced here.]

Then, the deep features of the vector and the image are fused in the same dimension, giving the fused deep feature feature_deep = [deep_feature_vector, deep_feature_image] of length 1024. Figure 5 shows the structure of the multimodal feature fusion. The model is trained on the multimodal deep features obtained in step S103.
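A training sketch for the fused model; the optimizer, learning rate, and number of epochs are not specified in the text and are assumptions here.

```python
import torch
import torch.nn as nn

def train_fusion(model, loader, epochs=50, lr=1e-3, device='cpu'):
    """Iteratively train the fused multimodal classifier."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()          # applies Softmax internally
    model.train()
    for _ in range(epochs):
        for vec_feat, img_feat, label in loader:
            vec_feat, img_feat, label = (t.to(device) for t in (vec_feat, img_feat, label))
            opt.zero_grad()
            loss = loss_fn(model(vec_feat, img_feat), label)
            loss.backward()
            opt.step()
    return model
```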

Experimental results:

To select the better-performing bands for feature fusion, we first validate the performance of the single-modality features on each band. The EEG vector features are classified with the CNN (architecture as in Table 1, with the normalization layer of layer 8 replaced by a 109-neuron classification layer); the EEG image features are classified with ResNet-18 (architecture as in Table 2, with the normalization layer of layer 8 replaced by a 109-neuron classification layer). Table 3 reports the single-modality results on the cross-task dataset. When the single feature is the vector (Vector), the Beta band performs best, with identification rates of 77.31% when the test state is PHY and 79.07% when it is IMA; when the single feature is the image (Image), the Alpha band performs best, with 78.42% for PHY and 79.60% for IMA. With the visualized image designed in this invention as the feature, the identification accuracy is higher than with the traditional vector feature.

Table 3: Cross-task experiments: identification performance (%) of single-modality features under the PHY and IMA test states

[Table 3 is provided as an image in the original and is not reproduced here.]
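The band-selection rule described above (keep the frequency band whose model performs best on held-out data) can be expressed as a simple argmax, for example:

```python
def best_band(accuracy_by_band):
    """Keep the frequency band whose model performs best on held-out data."""
    return max(accuracy_by_band, key=accuracy_by_band.get)

# e.g. chosen = best_band({'delta': acc_d, 'theta': acc_t, 'alpha': acc_a,
#                          'beta': acc_b, 'gamma': acc_g})
```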

After determining that the optimal band for the vector feature is Beta and for the image feature is Alpha, the better-performing bands are fused according to step S103, and the multimodal model is trained iteratively until it converges. To show that the multimodal model of the proposed design outperforms the single-modality models, the performance of single-modality features under single-band and multi-band fusion was compared beforehand. Table 4 compares single-modality and multimodal performance on the cross-task dataset: the fused Image and Vector features are more discriminative than either single feature in accuracy, precision, recall, and F1 score. These experiments demonstrate the effectiveness of the fused feature extractor and the high recognition rate of the multimodal model, and determine the identity recognition model of the proposed design.

Table 4: Cross-task experiments: identification performance (%) of multimodal features under the PHY and IMA test states

[Table 4 is provided as an image in the original and is not reproduced here.]

Identity recognition: based on the model construction and training process above, the deep feature extractor and multimodal classifier are used to preprocess the EEG data sample to be identified, extract its vector feature and visual feature, and classify it, determining the user label corresponding to that EEG sample.

Claims (5)

1. An EEG identity recognition method based on feature visualization and multimodal fusion, characterized by comprising the following steps:

S101: Preprocess the collected motor imagery EEG signals, segment the preprocessed EEG data into consecutive, non-overlapping samples according to a time window, extract time-frequency features from them, divide the frequency components of the time-frequency features into 5 frequency bands according to their frequency distribution, and compute statistical features of each band's frequency components as the band features;

S102: De-mean the band features obtained in step S101; for each frequency band, map them onto a brain map according to the electrode channel positions on the human cerebral cortex, and interpolate with the biharmonic spline interpolation method to generate a visualized brain topographic map;

S103: Use neural networks to extract deep information from the EEG time-frequency features of step S101 and from the EEG visualization images of step S102: learn the EEG vector features and the EEG visual features with separate deep networks, use a normalization layer instead of the classification layer to generate smooth features for the two modalities, and fuse them in the same dimension as the multimodal deep feature;

S104: Train the model on the multimodal deep features obtained in step S103. Identity recognition: feed the EEG data sample to be identified into the designed feature extractor and the trained multimodal multi-class network, classify the fused deep feature of the EEG vector feature and the generated visual feature, and output the user label corresponding to the sample.

2. The EEG identity recognition method based on feature visualization according to claim 1, characterized in that, in step S101: the collected EEG signals are first preprocessed by re-referencing to the average electrode, band-pass filtering to limit the usable frequency range to 0-42 Hz, and baseline correction; the preprocessed EEG is segmented with a 5-second time window into consecutive, non-overlapping independent samples, which are treated as different observations under the same external stimulus;

Next, the power spectral density of each windowed, independent EEG sample is extracted with the short-time Fourier transform, so the frequency-domain features are frequency components. For the m-th independent sample with frequency component n and time window τ, X[m,n] = [x[m,n](1), …, x[m,n](t), …, x[m,n](τ)], its power spectral density can be expressed as
PSD(X[m,n]) = |STFT(τ,s)(X[m,n])|²
where STFT(τ,s)(X) denotes the short-time Fourier transform with time window τ and hop length s, and H(·) denotes the window function of length s; the short-time Fourier transform uses a Hamming window with 50% overlap whose moving window length is a fraction of the sampling frequency (given only as an image in the original);
The frequency components after feature extraction cover 64 electrodes and the 0-42 Hz range, and are divided into 5 frequency bands according to Delta (0.5-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (13-30 Hz), and Gamma (31-42 Hz); for each band, the channel-wise average of its frequency components is computed as the statistical feature; for channel i, the statistical feature d(i) can be defined as
d(i) = (1 / (f2 - f1 + 1)) · Σ f(i, freq), summed over freq = f1, …, f2
where f(i, freq) denotes the STFT feature of channel i at the freq-th frequency component, and f1, f2 denote the frequency range of the band.
3. The EEG identity recognition method based on feature visualization according to claim 1, characterized in that, in step S102: the statistical feature of each frequency band, de-meaned over all channels, is
d(i) - (1/N) · Σ d(j), summed over j = 1, …, N
where d(i) denotes the average value of electrode i over the frequency range f1 to f2 Hz, and N denotes the number of electrodes;

The de-meaned EEG features of each frequency band are mapped one-to-one onto the brain map according to the electrode positions of the 10-10 system; for each band, a Green function is used to perform minimum-curvature interpolation of the irregularly spaced electrode data points, and the Green function of the EEG feature data at electrodes i and j is expressed as

g(xi, xj) = |xi - xj|² (ln|xi - xj| - 1)

For the surface s(xi) centered at electrode i, an N×N linear system over the Green functions of the irregularly spaced electrodes i and j is solved to obtain the weights of the N electrodes of the cerebral cortex,
s(xi) = Σ ωj · g(xi, xj), summed over j = 1, …, N, for i = 1, …, N
Using the weights ωj of the N cortical electrodes and the feature data xj of the known electrodes (1 ≤ j ≤ N), the feature data of the unknown electrodes are obtained; the surface feature of the cerebral cortex is defined as
s(x) = Σ ωj · g(x, xj), summed over j = 1, …, N
Interpolating with the biharmonic spline method described above yields feature data at both known and unknown positions of the cerebral cortex; the cortical feature data are visualized to generate an RGB brain topographic map.
4. The EEG identity recognition method based on feature visualization according to claim 1, characterized in that, in step S103: a 3D-CNN is used to extract depth information from the EEG power spectral density features obtained in S101, and a BatchNorm layer replaces the final Softmax layer of the 3D-CNN to obtain the smoothed EEG vector feature feature_vector; a ResNet-18 is used to extract depth information from the brain topographic map generated by the interpolation in S102, and a BatchNorm layer replaces the final Softmax layer of the ResNet-18 to obtain the smoothed EEG image feature feature_image; to keep the dimensionality consistent, the deep smoothed features of the two modalities are fused along the same dimension to obtain the deep fusion feature

feature_combined = [feature_vector, feature_image]

and Softmax classification is performed on the extracted and fused multimodal deep feature feature_combined.

5. The EEG identity recognition method based on feature visualization according to claim 1, characterized in that, in step S104: the deep fusion features and the multimodal classifier designed in S103 are trained iteratively until the model converges, yielding an effective deep feature extractor and multimodal classifier for each frequency band, and the frequency-band model with the highest performance is used as the classification criterion; the deep feature extractor and multimodal classifier are then used to preprocess an EEG data sample to be identified, extract its vector features and visualization features, and classify it, thereby determining the user label corresponding to that EEG sample.
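A PyTorch sketch of the two-branch fusion in claims 4 and 5: a small 3-D CNN (standing in for the claimed 3D-CNN) embeds the power spectral density tensor, a torchvision ResNet-18 embeds the RGB topographic map, each branch ends in a BatchNorm layer in place of a Softmax head, and the concatenated features feed a Softmax classifier. Layer sizes, input shapes, and the subject count are illustrative assumptions, not architecture details fixed by the claims.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class FusionNet(nn.Module):
    """Two-branch network: 3-D CNN for PSD tensors, ResNet-18 for brain maps, fused classification."""

    def __init__(self, n_subjects: int, embed_dim: int = 128):
        super().__init__()
        # Branch 1: small 3-D CNN over a (bands, grid_h, grid_w) PSD volume -- illustrative layout.
        self.cnn3d = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.bn_vector = nn.BatchNorm1d(embed_dim)   # BatchNorm head instead of a Softmax head

        # Branch 2: ResNet-18 over the RGB brain topographic map.
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.resnet = backbone
        self.bn_image = nn.BatchNorm1d(embed_dim)    # BatchNorm head instead of a Softmax head

        # Fusion of the two same-dimension modality features, then classification.
        self.classifier = nn.Linear(2 * embed_dim, n_subjects)

    def forward(self, psd_volume, topo_image):
        feature_vector = self.bn_vector(self.cnn3d(psd_volume))
        feature_image = self.bn_image(self.resnet(topo_image))
        feature_combined = torch.cat([feature_vector, feature_image], dim=1)
        return self.classifier(feature_combined)     # logits; Softmax is applied inside the loss

if __name__ == "__main__":
    model = FusionNet(n_subjects=109)                # assumed number of enrolled users
    psd = torch.randn(4, 1, 5, 8, 8)                 # (batch, 1, bands, grid_h, grid_w), assumed layout
    img = torch.randn(4, 3, 224, 224)                # RGB topographic maps
    print(model(psd, img).shape)                     # torch.Size([4, 109])

Training as in claim 5 would iterate this model (with nn.CrossEntropyLoss, which includes the Softmax) separately on each frequency band and keep the band whose model performs best as the classification criterion.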
CN202210167636.XA 2022-02-23 2022-02-23 Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion Active CN114578963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210167636.XA CN114578963B (en) 2022-02-23 2022-02-23 Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210167636.XA CN114578963B (en) 2022-02-23 2022-02-23 Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion

Publications (2)

Publication Number Publication Date
CN114578963A true CN114578963A (en) 2022-06-03
CN114578963B CN114578963B (en) 2024-04-05

Family

ID=81773466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210167636.XA Active CN114578963B (en) 2022-02-23 2022-02-23 Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion

Country Status (1)

Country Link
CN (1) CN114578963B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018014436A1 (en) * 2016-07-18 2018-01-25 天津大学 Emotion eeg recognition method providing emotion recognition model time robustness
CN109711383A (en) * 2019-01-07 2019-05-03 重庆邮电大学 Time-frequency domain-based convolutional neural network motor imagery EEG signal recognition method
WO2020151144A1 (en) * 2019-01-24 2020-07-30 五邑大学 Generalized consistency-based fatigue classification method for constructing brain function network and relevant vector machine
CN111329474A (en) * 2020-03-04 2020-06-26 西安电子科技大学 Electroencephalogram identity recognition method and system based on deep learning and information updating method
CN112353407A (en) * 2020-10-27 2021-02-12 燕山大学 Evaluation system and method based on active training of neurological rehabilitation
CN113011239A (en) * 2020-12-02 2021-06-22 杭州电子科技大学 Optimal narrow-band feature fusion-based motor imagery classification method
CN113180659A (en) * 2021-01-11 2021-07-30 华东理工大学 Electroencephalogram emotion recognition system based on three-dimensional features and cavity full convolution network
CN112784736A (en) * 2021-01-21 2021-05-11 西安理工大学 Multi-mode feature fusion character interaction behavior recognition method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Feng Jin; Wang Xingyu; Jin Jing: "Motor imagery potential recognition based on a support vector machine multi-classifier", 中国组织工程研究与临床康复, no. 09 *
Yang Hao; Zhang Junran; Jiang Xiaomei; Liu Fei: "Research on recognition of emotional states characterized by EEG signals based on deep belief networks", 生物医学工程学杂志, no. 02 *
Chai Bing, Li Dongdong, Wang Zhe, Gao Daqi: "EEG emotion recognition fusing frequency and channel convolutional attention", 《计算机科学》, vol. 48, no. 12 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115363603A (en) * 2022-09-05 2022-11-22 大连大学 Motor imagery electroencephalogram signal identification method based on improved ResNet18
CN116595455A (en) * 2023-05-30 2023-08-15 江南大学 Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction
CN116595455B (en) * 2023-05-30 2023-11-10 江南大学 Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction
CN116662742A (en) * 2023-06-28 2023-08-29 北京理工大学 Brain electrolysis code method based on hidden Markov model and mask empirical mode decomposition
CN118436317A (en) * 2024-07-08 2024-08-06 山东锋士信息技术有限公司 Sleep stage classification method and system based on multi-granularity feature fusion

Also Published As

Publication number Publication date
CN114578963B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
CN114578963B (en) Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion
CN109784023B (en) Steady-state visually evoked EEG identification method and system based on deep learning
CN110353673B (en) An EEG channel selection method based on standard mutual information
CN107361766B (en) Emotion electroencephalogram signal identification method based on EMD domain multi-dimensional information
CN103714281B (en) A kind of personal identification method based on electrocardiosignal
KR101293446B1 (en) Electroencephalography Classification Method for Movement Imagination and Apparatus Thereof
CN109375776B (en) EEG signal action intention recognition method based on multi-task RNN model
CN112656431A (en) Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium
Choi et al. User Authentication System Based on Baseline-corrected ECG for Biometrics.
CN111310656A (en) Single motor imagery electroencephalogram signal identification method based on multi-linear principal component analysis
CN110543831A (en) A brain pattern recognition method based on convolutional neural network
Li et al. Decoupling representation learning for imbalanced electroencephalography classification in rapid serial visual presentation task
CN115414051A (en) Emotion classification and recognition method of electroencephalogram signal self-adaptive window
CN116150670A (en) A task-independent brainprint recognition method based on feature decorrelation and decoupling
Li et al. Feature extraction based on high order statistics measures and entropy for eeg biometrics
Tabatabaei et al. Local binary patterns for noise-tolerant sEMG classification
CN106650685B (en) Identity recognition method and device based on electrocardiogram signal
Liu et al. Automated Machine Learning for Epileptic Seizure Detection Based on EEG Signals.
CN109117790B (en) A brain pattern recognition method based on frequency space index
CN118211182A (en) Identity recognition system and method based on pulse wave signal multi-index fusion analysis
Zhang et al. ATGAN: attention-based temporal GAN for EEG data augmentation in personal identification
Kim et al. A study on user recognition using 2D ECG image based on ensemble networks for intelligent vehicles
CN116595434A (en) Lie detection method based on dimension and classification algorithm
Chen et al. A stimulus-response based EEG biometric using mallows distance
CN115758207A (en) A motor imagery EEG signal classification algorithm based on non-uniform frequency band MFTSLR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant