CN114578963A - Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion - Google Patents
Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion
- Publication number: CN114578963A
- Application number: CN202210167636.XA
- Authority: CN (China)
- Prior art keywords: electroencephalogram, feature, frequency, depth
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06F3/015 — Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
- A61B5/374 — Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
- A61B5/7267 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
- G06F17/142 — Fast Fourier transforms, e.g. using a Cooley-Tukey type algorithm
- G06F18/253 — Fusion techniques of extracted features
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06F2203/011 — Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
- G06F2218/02 — Preprocessing
- G06F2218/08 — Feature extraction
- G06F2218/12 — Classification; Matching
Abstract
The invention discloses an electroencephalogram identity recognition method based on feature visualization and multi-modal fusion, comprising the following steps. First, the motor-imagery electroencephalogram signals are preprocessed. Then, for each frequency band, the de-meaned band features are mapped onto a brain map according to the electrode positions of the human cerebral cortex, and a visual brain topographic map is generated by biharmonic spline interpolation. Next, deep networks extract depth information from the electroencephalogram time-frequency-domain features and from the visual image features, and the two are fused along the same dimension to obtain a multi-modal depth feature. Finally, an effective depth feature extractor and a multi-modal classifier are trained for each frequency band, and the band model with the highest performance is used as the identity recognition model of the system. The visual feature representation reflects channel position information, mines latent electroencephalogram information at electrode positions that were not acquired, and exploits the deep complementary relation between image features and traditional vector features.
Description
Technical field:
The invention relates to the technical field of electroencephalogram-based identity recognition, and in particular to an electroencephalogram identity recognition method that performs biometric recognition on electroencephalogram signals based on feature visualization and multi-modal fusion.
Background art:
Identity recognition is widely demanded and applied in many aspects of life, such as monitoring and security, creating a growing need for more reliable identity authentication techniques. Authentication in the Internet-of-Things era includes password-based and token-based techniques, widely used in criminal investigation, banking transactions, document security, access control systems and other fields. With the development of machine learning, biometric technologies such as fingerprint, voiceprint and face recognition have matured. However, the personal information underlying these conventional methods is easily stolen, copied, synthesized or forged, which can cause privacy leaks and system insecurity. To build automatic user-authentication systems, especially where a high security factor is required, cognitive biometric recognition using biological signals such as the electroencephalogram (EEG) is therefore attracting increasing attention.
The electroencephalogram signal is a non-stationary, non-linear random signal generated by the cerebral cortex, and is considered one of the most promising and reliable biological signals for biometric recognition thanks to its universality, portability, collectability, uniqueness and non-invasiveness. Compared with traditional authentication techniques, EEG has strong anti-counterfeiting and anti-theft properties: EEG signals arise from the conscious participation of the individual, cannot be captured without that participation, and a user cannot deliberately disclose the signal content. Applying EEG to identity recognition involves stimulus induction, signal acquisition, preprocessing, feature extraction and feature classification. Among these, efficient feature extraction and a suitable feature classifier are the keys to recognition performance. EEG biometrics generally focuses on the 5 frequency bands below 50 Hz: Delta (0.5-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (13-30 Hz) and Gamma (>31 Hz).
In EEG feature extraction, representative methods fall into time-domain analysis (amplitude, mean, variance, etc.), frequency-domain analysis (power spectrum analysis, coherence analysis, etc.), time-frequency-domain analysis (wavelet transform, empirical mode decomposition, etc.) and spatial-domain analysis (common spatial patterns, independent component analysis, etc.). To improve biometric accuracy, some studies combine feature extraction methods from different domains, so that the extracted multi-dimensional features characterize the EEG across several domains. In recent years, some studies have used the functional connectivity between electrode channels and brain regions as the biometric feature, giving EEG-based subject identification higher discriminability and stronger robustness. For feature classification, classifiers can be divided into shallow and deep. Shallow methods preprocess the raw EEG, perform feature extraction (frequency-domain, time-domain and spatial-domain filtering) to enhance signal quality, and train the model on the enhanced signal; typical shallow classifiers are linear discriminant analysis, support vector machines and hidden Markov models. Deep methods take the raw signal directly as model input for end-to-end training, without hand-crafted feature extraction.
Because of the time, frequency, and spatial properties of EEG signals, convolutional and recurrent neural networks are often used as deep classification methods for biometric identification.
However, traditional electroencephalogram features still have limitations. First, the matrix data of conventional features mostly encode numerical information rather than brain-region and electrode-position information; such features may lack the connectivity of brain functional areas, which is particularly important for biometric recognition. In addition, more acquisition electrodes capture more comprehensive information but raise equipment cost, so the acquired signals are limited by the electrode count of the device and lack global information. Research on electroencephalogram features that carry brain-region information and latent electrode-position information is therefore of practical significance and value for improving the accuracy of biometric recognition systems.
Summary of the invention:
To solve the above problems, the invention provides an electroencephalogram identity recognition method based on feature visualization and multi-modal fusion. A novel EEG feature visualization representation is constructed that expresses brain-region and electrode-distribution information, reflecting channel information while mining latent information at non-acquired electrode positions. On this basis, a multi-modal model is established: fusing the EEG vector features with the visual image features increases the feature dimension available to the classifier, deeply mines the complementary relation between image features and traditional EEG vector features, and improves overall classification performance.
1. An electroencephalogram identity recognition method based on feature visualization and multi-mode fusion is characterized by comprising the following steps:
s101: carrying out data preprocessing on the collected motor imagery electroencephalogram signals, dividing the preprocessed electroencephalogram data into continuous and non-overlapping samples according to a time window, extracting time-frequency domain characteristics, dividing frequency components of the time-frequency domain characteristics into 5 frequency bands according to frequency distribution, and calculating statistical characteristics of the frequency components of the characteristics of each frequency band to serve as frequency band characteristics;
s102: removing the mean value of the frequency band characteristics obtained in the step S101, positioning and mapping the frequency band characteristics to a brain map according to an electrode channel of a human brain cortex aiming at each frequency band, and interpolating by adopting a bi-harmonic spline interpolation method to generate a visual brain topographic map;
s103: extracting depth information from the electroencephalogram time-frequency domain features extracted in the step S101 and the electroencephalogram visual images generated in the step S102 by using a neural network: learning electroencephalogram vector features and electroencephalogram visual features by using a depth network respectively, generating smooth features of two modes by using a normalization layer instead of a classification layer, and fusing the smooth features on the same dimension to serve as multi-mode depth features;
s104: training a model on the multi-modal depth features obtained in step S103; identity recognition: the electroencephalogram data sample to be identified is input into the designed feature extraction model and the trained multi-modal multi-class network, the fused depth features of the electroencephalogram vector features and the generated visual features are classified, and the user label corresponding to the sample is output.
2. The electroencephalogram identification method based on feature visualization and multimodal fusion according to claim 1, wherein in step S101: the acquired electroencephalogram signals are preprocessed by re-referencing to the average electrode, limiting the usable frequency range to 0-42 Hz with band-pass filtering, and correcting the baseline; the preprocessed electroencephalogram signals are divided into continuous, non-overlapping independent samples with a time window of 5 seconds, treated as different observations under the same external stimulus.
Then, the power spectral density of each windowed independent electroencephalogram sample is extracted by the short-time Fourier transform, with the frequency components as the frequency-domain feature. For the m-th independent sample of time-window length τ, X^[m,n] = [x^[m,n](1), …, x^[m,n](t), …, x^[m,n](τ)], the power spectral density can be expressed as

PSD(X^[m,n]) = |STFT_(τ,s)(X^[m,n])|^2,

where STFT_(τ,s)(X) denotes the short-time Fourier transform with time window τ and window sliding length s, and H(·) denotes the window function of sliding length s. The short-time Fourier transform uses a Hamming window whose length equals the sampling frequency, with 50% overlap.
The extracted frequency components are represented over 64 electrodes and the 0-42 Hz band, and are divided into 5 frequency bands: Delta (0.5-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (13-30 Hz) and Gamma (31-42 Hz). The channel-wise average of the frequency components within each band is taken as the statistical feature; for channel i, the statistical feature D(i) can be defined as

D(i) = (1 / (f2 - f1 + 1)) · Σ_{freq=f1..f2} f(i, freq),

where f(i, freq) denotes the STFT feature of channel i at frequency component freq, and f1, f2 give the frequency range of the band.
3. The electroencephalogram identification method based on feature visualization and multimodal fusion according to claim 1, wherein in step S102: the statistical feature of each frequency band is de-meaned over all channels,

D̃(i) = D(i) - (1/N) · Σ_{j=1..N} D(j),

where D(i) denotes the average of electrode i over the band's frequency range f1 to f2 Hz, and N denotes the number of electrodes.
The de-meaned electroencephalogram features of each frequency band are mapped one-to-one onto a brain map according to the 10-10 system electrode positioning standard. For each band, minimum-curvature interpolation is applied to the irregularly spaced electrode data points using a Green's function; the Green's function of the electroencephalogram feature data between electrodes i and j is expressed as

g(x_i, x_j) = |x_i - x_j|^2 (ln|x_i - x_j| - 1).

The interpolating surface s(x_i) centered on electrode i is expanded in the Green's functions of the irregularly spaced electrodes, and the N x N linear system

s(x_i) = Σ_{j=1..N} ω_j · g(x_i, x_j),  i = 1, …, N,

is solved to obtain the weights of the N electrodes of the human cerebral cortex. Using the weights ω_j of the N electrodes and the feature data of the known electrodes x_j (1 ≤ j ≤ N), the feature data at unknown positions are obtained; the cortical surface feature is defined as

s(x) = Σ_{j=1..N} ω_j · g(x, x_j).
Interpolation with this biharmonic spline method thus yields feature data at both known and unknown positions of the human cerebral cortex, and the cortical feature data are visualized to generate an RGB brain topographic map.
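A minimal NumPy sketch of the biharmonic (minimum-curvature) spline described above, using the 2-D Green's function g(x_i, x_j) = |x_i - x_j|^2 (ln|x_i - x_j| - 1). The electrode coordinates and feature values here are made up for illustration; a production implementation would add regularization for nearly coincident electrodes:

```python
import numpy as np

def biharmonic_interp(coords, values, query):
    """Biharmonic spline: solve the N x N system values = G w for the
    electrode weights, then evaluate s(x) = sum_j w_j g(x, x_j).
    coords: (N, 2) known electrode positions, values: (N,) de-meaned
    band features, query: (Q, 2) points to interpolate."""
    def g(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        out = np.zeros_like(d)
        nz = d > 0                      # g(0) = 0 on the diagonal
        out[nz] = d[nz] ** 2 * (np.log(d[nz]) - 1.0)
        return out
    w = np.linalg.solve(g(coords, coords), values)   # weights omega_j
    return g(query, coords) @ w                      # surface s(x)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
vals = np.array([1.0, -0.5, 0.25, -0.75, 0.0])
est = biharmonic_interp(pts, vals, pts)   # interpolates the knots exactly
print(np.round(est, 6))
```

Evaluating the surface on a dense 2-D grid over the scalp outline, then rendering it as an image, yields the RGB brain topographic map; points at unmeasured electrode positions are filled by the same s(x).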
4. The electroencephalogram identification method based on feature visualization and multimodal fusion according to claim 1, wherein in step S103: a 3D convolutional network (3D-CNN) extracts depth information from the electroencephalogram power spectral density features obtained in S101, and the last Softmax layer of the 3D-CNN is replaced with a BatchNorm layer to obtain the smooth electroencephalogram vector feature, feature_vector; ResNet-18 extracts depth information from the brain topographic maps generated by the interpolation in S102, and its last Softmax layer is likewise replaced with a BatchNorm layer to obtain the smooth electroencephalogram image feature, feature_image. To keep the dimensions consistent, the depth smooth features of the two modalities are fused along the same dimension to obtain the depth fusion feature

feature_combined = [feature_vector, feature_image],

and Softmax classification is performed on the extracted and fused multi-modal depth feature feature_combined.
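The fusion step can be sketched numerically as below. Random vectors stand in for the outputs of the trained 3D-CNN and ResNet-18 branches, and the batch-normalization that replaces each branch's Softmax head is written out explicitly; the 128-dimensional branch widths, the batch size and the 10-user head are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def batchnorm(feats, eps=1e-5):
    """Stand-in for the BatchNorm layer replacing each branch's Softmax
    head: normalize every feature dimension over the batch."""
    mu = feats.mean(axis=0, keepdims=True)
    var = feats.var(axis=0, keepdims=True)
    return (feats - mu) / np.sqrt(var + eps)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
batch = 8
feat_vector = rng.standard_normal((batch, 128))  # 3D-CNN branch output (assumed width)
feat_image = rng.standard_normal((batch, 128))   # ResNet-18 branch output (assumed width)

# feature_combined = [feature_vector, feature_image]: concatenate the two
# BatchNorm-smoothed modality features along the feature dimension
feature_combined = np.concatenate(
    [batchnorm(feat_vector), batchnorm(feat_image)], axis=1)

# Softmax classification over the fused feature (random head weights here;
# in the patent this classifier is trained per frequency band)
n_users = 10
W = rng.standard_normal((feature_combined.shape[1], n_users))
probs = softmax(feature_combined @ W)
print(feature_combined.shape, probs.shape)  # (8, 256) (8, 10)
```

Normalizing each branch before concatenation keeps the two modalities on a comparable scale, so neither the vector branch nor the image branch dominates the fused representation.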
5. The electroencephalogram identification method based on feature visualization and multimodal fusion according to claim 1, wherein in step S104: the depth fusion features and the multi-modal classifier designed in step S103 are trained iteratively until the models converge, yielding an effective depth feature extractor and multi-modal classifier for each frequency band, and the band model with the highest performance is used as the classification criterion; the electroencephalogram data sample to be recognized is preprocessed, its vector features and visual features are extracted, classification and recognition are performed with the depth feature extractor and the multi-modal classifier, and the user label corresponding to the sample is determined.
The beneficial effects of the invention are:
1. The method converts the time-series electroencephalogram signal into time-frequency-domain features reflecting both time and frequency resolution, divides them into the 5 classical frequency bands, and studies each band separately, comprehensively accounting for the influence of different frequency ranges on biometric performance. A visual brain topographic map is generated for each band, reflecting the similarity of sample images within a band and their specificity across bands, further demonstrating the different physiological information carried by different wave bands. The best-performing band classifier is then selected, suppressing low-relevance band signals and enhancing high-relevance ones, thereby improving recognition accuracy.
2. Unlike traditional matrix-valued features, the method uses the interpolated, visualized brain topographic map as the physiological feature for biometric recognition. On the one hand, the visual features are generated from the electrode channels of the human cerebral cortex, so they represent electrode positions and brain-region information intuitively and accurately while still reflecting the numerical data, and they preserve the connectivity of brain functional areas. On the other hand, under the electrode-count limitation of the acquisition device, the visual features generate feature data for non-acquired electrodes by interpolation, mining their latent information and thereby providing global information. The visual features thus compensate for both limitations of traditional electroencephalogram features while maintaining a high recognition rate.
3. Combining the traditional electroencephalogram vector features with the novel visual features designed by the invention increases the feature dimension available for classification, mines the complementary relation between image features and traditional EEG vector features, and improves overall identity recognition performance through the fused features and the multi-modal model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments are briefly introduced below.
FIG. 1 is a schematic structural diagram of an electroencephalogram identification method based on feature visualization and multi-modal fusion in an embodiment of the present invention.
FIG. 2 is a schematic diagram of a process of visualizing electroencephalogram features according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an interpolation method for brain map generation according to an embodiment of the present invention.
Fig. 4 is a general architecture of an electroencephalogram identification method based on feature visualization and multi-modal fusion according to an embodiment of the present invention.
FIG. 5 is a model structure of multi-modal feature fusion according to an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The method of the invention is divided into four parts:
First part: electroencephalogram signal preprocessing and feature extraction
Second part: electroencephalogram feature visualization
Third part: multi-modal feature fusion
Fourth part: electroencephalogram identity recognition
According to these four parts, the electroencephalogram identification method based on feature visualization and multi-modal fusion of the embodiment of the invention comprises the following steps, as shown in FIG. 1:
s101: carrying out data preprocessing on the collected motor imagery electroencephalogram signals, dividing the preprocessed electroencephalogram data into continuous and non-overlapping samples according to a time window, extracting time-frequency domain characteristics, dividing frequency components of the time-frequency domain characteristics into 5 frequency bands according to frequency distribution, and calculating statistical characteristics of the frequency components of the characteristics of each frequency band to serve as frequency band characteristics;
Read the acquired electroencephalogram signals and perform channel relocation according to the international standard 10-10 system electrode positions; re-reference the relocated electroencephalogram data against the average electrode; limit the usable frequency range to 0-42 Hz with band-pass filtering; perform baseline correction, removing the average baseline value to prevent baseline differences in the preprocessed signal caused by low-frequency drift or artifacts. A signal of duration time_signal is divided into M continuous, non-overlapping independent samples using a time window of length 5 seconds,

input = [input_0, input_length, input_{2·length}, …, input_{(M-1)·length}]

where (M-1)·length ≤ time_signal < M·length. The M continuous, non-overlapping preprocessed electroencephalogram records are treated as different observations under the same external stimulus, which enlarges the electroencephalogram data set.
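The windowing described above can be sketched as follows. This is a minimal illustration that assumes the trailing remainder shorter than one full window is discarded; the channel count, sampling rate and recording duration are made-up example values:

```python
import numpy as np

def segment_nonoverlapping(x, fs, win_sec=5):
    """Split a (channels, samples) recording into M continuous,
    non-overlapping win_sec-second independent samples."""
    length = win_sec * fs
    m = x.shape[1] // length          # M full windows; remainder dropped
    return [x[:, k * length:(k + 1) * length] for k in range(m)]

rng = np.random.default_rng(2)
eeg = rng.standard_normal((64, 17 * 160))    # 64 channels, 17 s at 160 Hz
samples = segment_nonoverlapping(eeg, fs=160)
print(len(samples), samples[0].shape)        # 3 (64, 800)
```

Each of the M segments then proceeds independently through feature extraction, which multiplies the number of training observations per recording session.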
For the m-th (1 ≤ m ≤ M) independent sample input_{(m-1)·length}, i.e. the electroencephalogram signal X^{[m,n]} = [x^{[m,n]}(1), …, x^{[m,n]}(t), …, x^{[m,n]}(τ)] with frequency components in the range 0-42 Hz, the power spectral density is extracted using a Hamming window whose length equals the sampling frequency, with 50% overlap.
The extracted power spectral density is an N × 42 matrix representing N channels and 42 frequency components. For each electrode channel, the average of the electroencephalogram time-frequency features is computed over the frequency ranges Delta (0.5-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (13-30 Hz) and Gamma (31-42 Hz):
The Delta-band electroencephalogram feature is feature_vector^Delta of size N × 4, where 1 ≤ i ≤ N indexes the channel; its channel-wise mean feature is of size N × 1.
The Theta-band electroencephalogram feature is feature_vector^Theta of size N × 4, where 1 ≤ i ≤ N indexes the channel; its channel-wise mean feature is of size N × 1.
The remaining bands follow in the same way, yielding electroencephalogram features for 5 frequency bands over the N electrode channels: the Delta-band vector feature feature_vector^Delta of size N × 4, the Theta-band vector feature feature_vector^Theta of size N × 4, the Alpha-band vector feature feature_vector^Alpha of size N × 5, the Beta-band vector feature feature_vector^Beta of size N × 18, and the Gamma-band vector feature feature_vector^Gamma of size N × 10. Averaging each over frequency yields one N × 1 vector feature per band.
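The band-feature extraction above can be sketched with a NumPy-only Welch-style estimate (a hedged approximation of the described STFT procedure; the function and dictionary names are illustrative, and the number of bins per band follows from the 1 Hz resolution here, which may differ slightly from the sizes quoted above):

```python
import numpy as np

BANDS = {"Delta": (0.5, 3), "Theta": (4, 7), "Alpha": (8, 12),
         "Beta": (13, 30), "Gamma": (31, 42)}

def band_features(x, fs=160):
    """Welch-style PSD per channel (Hamming window of one second,
    i.e. window length = sampling frequency, 50% overlap), averaged
    over the frequency bins of each band -> one N-vector per band."""
    n_ch, n_samp = x.shape
    win = fs                      # window length equals sampling frequency
    hop = win // 2                # 50% overlap
    taper = np.hamming(win)
    segs = [x[:, s:s + win] * taper
            for s in range(0, n_samp - win + 1, hop)]
    # average the windowed periodograms over segments
    psd = np.mean([np.abs(np.fft.rfft(seg, axis=1)) ** 2 for seg in segs],
                  axis=0)
    freqs = np.fft.rfftfreq(win, d=1 / fs)   # 1 Hz resolution for win = fs
    feats = {}
    for name, (f1, f2) in BANDS.items():
        sel = (freqs >= f1) & (freqs <= f2)
        feats[name] = psd[:, sel].mean(axis=1)   # channel-wise band mean
    return feats

x = np.random.randn(64, 800)          # one 5-second sample, 64 channels
f = band_features(x)
print(f["Beta"].shape)  # (64,)
```

With this resolution the Beta band covers the 18 integer bins from 13 to 30 Hz, matching the N × 18 Beta feature size stated above.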
S102: remove the mean of the band features obtained in step S101; for each frequency band, locate and map the band features onto a brain map according to the electrode channels over the human cerebral cortex; and interpolate with the bi-harmonic spline interpolation method to generate a visualized brain topographic map;
As shown in FIG. 2, the electroencephalogram features of the 5 bands are first de-meaned across all channels, so that the feature data of each band is centered at 0. Concretely, for all electrode channels over the human cerebral cortex, the per-channel mean is removed from each band's features to form the feature vector:
Let the channel mean of the Delta band be μ_Delta; the de-meaned Delta-band feature is feature_vector^Delta − μ_Delta, of size N × 1;
Let the channel mean of the Theta band be μ_Theta; the de-meaned Theta-band feature is feature_vector^Theta − μ_Theta, of size N × 1;
Let the channel mean of the Alpha band be μ_Alpha; the de-meaned Alpha-band feature is feature_vector^Alpha − μ_Alpha, of size N × 1;
Let the channel mean of the Beta band be μ_Beta; the de-meaned Beta-band feature is feature_vector^Beta − μ_Beta, of size N × 1;
Let the channel mean of the Gamma band be μ_Gamma; the de-meaned Gamma-band feature is feature_vector^Gamma − μ_Gamma, of size N × 1.
This yields the de-meaned features of the 5 frequency bands over the N electrode channels, each an N × 1 vector.
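The per-band de-meaning amounts to subtracting the channel mean from each band feature (a minimal illustration; `demean_channels` is a hypothetical helper name, not from the patent):

```python
import numpy as np

def demean_channels(band_feat):
    """Centre an N x 1 band feature to zero mean across the N
    cortical electrode channels, as done before topographic mapping."""
    return band_feat - band_feat.mean()

delta = np.array([2.0, 4.0, 6.0])        # toy 3-channel Delta feature
centred = demean_channels(delta)
print(centred)          # [-2.  0.  2.]
print(centred.mean())   # 0.0
```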
Next, for each frequency band, the de-meaned N × 1 vector is mapped to the N electrode positions of the international standard 10-10 system. As shown in FIG. 3, the bi-harmonic spline interpolation method computes feature data at unknown positions from the electrode features at known positions, visualizes the data over the human cerebral cortex, and generates an RGB brain topographic map as a 256 × 256 × 3 visual feature, feature_image.
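The interpolation step can be sketched with the bi-harmonic Green function g(r) = r^2(ln r - 1) used for minimum-curvature interpolation (a simplified 2-D NumPy illustration under assumed electrode coordinates; practical topographic-map tools add regularization and scalp masking, which are omitted here):

```python
import numpy as np

def green(r):
    """Bi-harmonic Green function g(r) = r^2 (ln r - 1), with g(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * (np.log(r[nz]) - 1.0)
    return out

def biharmonic_interp(xy_known, values, xy_query):
    """Fit s(x) = sum_j w_j * g(|x - x_j|) through the N known
    electrode values (an N x N linear solve), then evaluate s at
    the query positions."""
    d_kk = np.linalg.norm(xy_known[:, None] - xy_known[None, :], axis=-1)
    w = np.linalg.solve(green(d_kk), values)        # electrode weights
    d_qk = np.linalg.norm(xy_query[:, None] - xy_known[None, :], axis=-1)
    return green(d_qk) @ w

# toy electrodes at the corners of a unit square
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
# the spline passes through the known electrodes exactly
recon = biharmonic_interp(pts, vals, pts)
print(np.allclose(recon, vals))  # True
```

Evaluating the fitted surface on a dense pixel grid and colour-mapping it would then give the RGB topographic image described above.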
S103: use neural networks to extract depth information from the electroencephalogram time-frequency features extracted in step S101 and from the electroencephalogram visual images generated in step S102: learn the electroencephalogram vector features and the electroencephalogram visual features with separate deep networks, generate smooth features for the two modalities by replacing the classification layer with a normalization layer, and fuse the smooth features along the same dimension to form the multi-modal depth feature;
The method adopts a 3D-CNN to learn the electroencephalogram vector features and ResNet-18 to learn the electroencephalogram visual features. For each frequency band, depth information is extracted from the electroencephalogram vector feature feature_vector obtained in S101 and from the electroencephalogram image feature feature_image generated in S102. The depth features of the vector and the image are then fused along the same dimension to obtain the fused depth feature feature_deep = [deep_feature_vector, deep_feature_image], which serves as the input for the subsequent identity classification.
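The fusion itself is a concatenation along the feature dimension, sketched here with random stand-ins for the two 512-dimensional network outputs (the real embeddings come from the 3D-CNN and ResNet-18 after their BatchNorm layers; the shapes follow the description, everything else is illustrative):

```python
import numpy as np

def fuse_depth_features(vec_feat, img_feat):
    """Concatenate the two modality embeddings along the feature
    dimension: (bs, 512) and (bs, 512) -> (bs, 1024)."""
    assert vec_feat.shape[0] == img_feat.shape[0]   # same batch size bs
    return np.concatenate([vec_feat, img_feat], axis=1)

bs = 8
deep_vec = np.random.randn(bs, 512)   # stand-in for the 3D-CNN output
deep_img = np.random.randn(bs, 512)   # stand-in for the ResNet-18 output
fused = fuse_depth_features(deep_vec, deep_img)
print(fused.shape)  # (8, 1024)
```

The length-1024 fused feature matches the feature_deep vector that the experiment section feeds into the multi-modal classifier.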
S104: train a model on the multi-modal depth features obtained in step S103. Identity recognition: input the electroencephalogram data sample to be identified into the designed feature extraction model and the trained multi-modal multi-class network, classify the fused depth feature formed from the electroencephalogram vector features and the generated visual features, and output the user label corresponding to the sample.
Data set:
The training model is evaluated on the large-scale standard EEG Motor Movement/Imagery Dataset. In this embodiment, the data set comprises 1500 electroencephalogram records of 1-2 minutes each; the samples come from 109 healthy subjects at a sampling frequency of 160 Hz. Each subject performed different movement/imagery tasks while 64-channel electroencephalogram signals were recorded with the BCI2000 system. Each subject completed 14 runs: 2 one-minute baseline runs (the first with eyes open, the second with eyes closed), and 3 two-minute runs for each of the following 4 tasks:
Task 1: a target appears on the left or right side of the screen. The subject opens and closes the corresponding fist until the target disappears, then relaxes.
Task 2: a target appears on the left or right side of the screen. The subject imagines opening and closing the corresponding fist until the target disappears, then relaxes.
Task 3: a target appears at the top or bottom of the screen. The subject opens and closes both fists (if the target is at the top) or both feet (if the target is at the bottom) until the target disappears, then relaxes.
Task 4: a target appears at the top or bottom of the screen. The subject imagines opening and closing both fists (if the target is at the top) or both feet (if the target is at the bottom) until the target disappears, then relaxes.
The states are abbreviated EO (eyes open), EC (eyes closed), PHY (physical movement), and IMA (imagined movement).
Experiment design:
To exploit the stability of human brain signatures across different states, the invention trains the identity recognition model on a cross-task data set: the resting-state EO and EC data are used for training, while the PHY or IMA motor imagery data are used for testing. FIG. 4 shows the overall architecture of the electroencephalogram identity recognition method based on feature visualization and multi-modal fusion.
First, the raw electroencephalogram signal is preprocessed according to step S101. Then, electroencephalogram features are extracted from the preprocessed signal and visualized according to step S102. Next, multi-modal feature fusion is performed on the vector features and the image features according to step S103. Taking the Beta band as an example:
The invention uses N = 64. First, depth information is extracted from the 64 × 18 vector feature using the CNN structure in Table 1, yielding a depth feature vector deep_feature_vector of length 512, where bs denotes the batch size;
Table 1 Detailed parameters of the CNN network structure
Next, depth information is extracted from the 256 × 256 × 3 electroencephalogram visual feature feature_image using the ResNet-18 structure in Table 2, yielding a depth feature vector deep_feature_image of length 512, where bs denotes the batch size. The structure of the residual block is shown in FIG. 4;
Table 2 Detailed parameters of the ResNet-18 network structure
Then, the depth features of the vector and the image are fused along the same dimension to obtain a fused depth feature of length 1024, feature_deep = [deep_feature_vector, deep_feature_image]. FIG. 5 shows the structure of the multi-modal feature fusion. A model is then trained on the multi-modal depth features obtained in step S103.
The experimental results are as follows:
To select better-performing bands for feature fusion, the performance of single-modality features on each frequency band is verified first. The electroencephalogram vector features are classified with the CNN (structure in Table 1, with the layer-8 normalization layer replaced by a classification layer of 109 neurons); the electroencephalogram image features are classified with ResNet-18 (structure in Table 2, with the layer-8 normalization layer replaced by a classification layer of 109 neurons). Table 3 shows the single-modality results on the cross-task data set. It can be seen that: when the single feature is the Vector, the Beta band performs best, with an identification rate of 77.31% when the test state is PHY and 79.07% when the test state is IMA; when the single feature is the Image, the Alpha band performs best, with an identification rate of 78.42% for PHY and 79.60% for IMA; and the visual image designed by the invention achieves higher identity recognition accuracy than the traditional vector feature.
Table 3 Cross-task experiments: identity recognition performance (%) of single-modality features under PHY and IMA test conditions
Given that the optimal band of the vector features is Beta and the optimal band of the image features is Alpha, the method performs feature fusion on these better-performing bands based on step S103 and iteratively trains the multi-modal model until convergence. To demonstrate that the multi-modal model of the designed method is superior to the single-modal models, the invention compares it against the earlier single-modality features on single and fused bands. Table 4 compares single-modality and multi-modality performance on the cross-task data set. It can be seen that the fused Image + Vector features are more discriminative than the single Image or Vector features in accuracy, precision, recall and F1 score. These experiments confirm the effectiveness of the fused feature extractor and the high recognition rate of the multi-modal model, which together determine the identity recognition model of the invention.
Table 4 Cross-task experiments: identity recognition performance (%) of multi-modal features under PHY and IMA test conditions
Identity recognition: based on the model building and training process above, the electroencephalogram data sample to be recognized is preprocessed, its vector features and visual features are extracted, classification is performed with the depth feature extractor and the multi-modal classifier, and the user label corresponding to the electroencephalogram data sample is determined.
Claims (5)
1. An electroencephalogram identity recognition method based on feature visualization and multi-mode fusion is characterized by comprising the following steps:
s101: perform data preprocessing on the collected motor imagery electroencephalogram signals; divide the preprocessed electroencephalogram data into continuous, non-overlapping samples according to a time window; extract time-frequency domain features; divide the frequency components of the time-frequency domain features into 5 frequency bands according to their frequency distribution; and compute a statistical feature of the frequency components within each band as the band feature;
s102: remove the mean of the band features obtained in step S101; for each frequency band, locate and map the band features onto a brain map according to the electrode channels over the human cerebral cortex; and interpolate with the bi-harmonic spline interpolation method to generate a visualized brain topographic map;
s103: use neural networks to extract depth information from the electroencephalogram time-frequency features extracted in step S101 and from the electroencephalogram visual images generated in step S102: learn the electroencephalogram vector features and the electroencephalogram visual features with separate deep networks, generate smooth features for the two modalities by replacing the classification layer with a normalization layer, and fuse the smooth features along the same dimension to form the multi-modal depth feature;
s104: train a model on the multi-modal depth features obtained in step S103. Identity recognition: input the electroencephalogram data sample to be identified into the designed feature extraction model and the trained multi-modal multi-class network, classify the fused depth feature formed from the electroencephalogram vector features and the generated visual features, and output the user label corresponding to the sample.
2. The electroencephalogram identity recognition method based on feature visualization of claim 1, wherein in step S101: the acquired electroencephalogram signals are preprocessed, re-referenced against the average electrode, limited to the usable 0-42 Hz range by band-pass filtering, and baseline-corrected; the preprocessed electroencephalogram signal is divided by a 5-second time window into continuous, non-overlapping independent samples treated as different observations under the same external stimulus;
then, the power spectral density of the time-windowed independent electroencephalogram samples is extracted based on the short-time Fourier transform, the frequency-domain features being the frequency components; for the m-th independent sample of channel n with time-window length τ, X^{[m,n]} = [x^{[m,n]}(1), …, x^{[m,n]}(t), …, x^{[m,n]}(τ)], the power spectral density can be expressed as
PSD^{[m,n]} = |STFT_{(τ,s)}(X^{[m,n]})|²,
where STFT_{(τ,s)}(X) denotes the short-time Fourier transform with time window τ and window sliding length s, and H(·) denotes the window function of sliding length s; the short-time Fourier transform uses a moving window whose length equals the sampling frequency, with Hamming windows at 50% overlap;
the frequency components after feature extraction correspond to 64 electrodes over the 0-42 Hz band and are divided into 5 frequency bands: Delta (0.5-3 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (13-30 Hz) and Gamma (31-42 Hz); the channel-wise average of the frequency components of each band is computed as the statistical feature: for channel i the statistical feature d(i) can be defined as
d(i) = (1 / (f2 − f1 + 1)) Σ_{freq=f1}^{f2} f(i, freq),
where f(i, freq) denotes the STFT feature of channel i at frequency component freq, and f1, f2 denote the frequency range of the band.
3. The electroencephalogram identity recognition method based on feature visualization of claim 1, wherein in step S102: the statistical feature of each band is de-meaned over all channels,
d̃(i) = d(i) − (1/N) Σ_{j=1}^{N} d(j),
where d(i) denotes the average of electrode i over the band's frequency range f1 to f2 Hz, and N denotes the number of electrodes;
the de-meaned electroencephalogram features of each band are mapped one-to-one onto the brain map based on the electrode positioning standard of the 10-10 system; for each band, minimum-curvature interpolation is applied to the irregularly spaced electroencephalogram electrode data points using a Green function, the Green function between the feature data at electrodes i and j being expressed as
g(x_i, x_j) = |x_i − x_j|² (ln |x_i − x_j| − 1)
the surface s(x_i) centered on electrode i satisfies s(x_i) = Σ_{j=1}^{N} ω_j g(x_i, x_j); solving this N × N linear system over the Green functions of the irregularly spaced electrodes i and j yields the weights of the N electrodes of the human cerebral cortex; using the weights ω_j of the N electrodes of the human cerebral cortex and the feature data x_j (1 ≤ j ≤ N) of the known electrodes, the feature data of unknown electrodes is obtained, the surface feature of the human cerebral cortex being defined as
s(x) = Σ_{j=1}^{N} ω_j g(x, x_j);
interpolating with the bi-harmonic spline interpolation method thus yields the feature data at both the known and unknown positions of the human cerebral cortex; the feature data over the human cerebral cortex is then visualized to generate an RGB brain topographic map.
4. The electroencephalogram identity recognition method based on feature visualization of claim 1, wherein in step S103: a 3D convolutional network (3D-CNN) extracts depth information from the electroencephalogram power spectral density features extracted in S101, with the final Softmax layer of the 3D-CNN replaced by a BatchNorm layer to obtain the smooth electroencephalogram vector feature feature_vector; ResNet-18 extracts depth information from the brain topographic map generated by the interpolation in S102, with the final Softmax layer of ResNet-18 replaced by a BatchNorm layer to obtain the smooth electroencephalogram image feature feature_image; to keep the dimensions uniform, the smooth depth features of the two modalities are fused along the same dimension to obtain the fused depth feature
feature_combined = [feature_vector, feature_image],
and Softmax classification is performed on the extracted and fused multi-modal depth feature feature_combined.
5. The electroencephalogram identity recognition method based on feature visualization of claim 1, wherein in step S104: the fused depth features and the multi-modal classifier designed in step S103 are iteratively trained until the models converge, yielding an effective depth feature extractor and multi-modal classifier for each frequency band, with the best-performing band model used as the classification criterion; the electroencephalogram data sample to be recognized is preprocessed, its vector features and visual features are extracted, classification is performed with the depth feature extractor and the multi-modal classifier, and the user label corresponding to the electroencephalogram data sample is determined.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210167636.XA CN114578963B (en) | 2022-02-23 | 2022-02-23 | Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114578963A true CN114578963A (en) | 2022-06-03 |
CN114578963B CN114578963B (en) | 2024-04-05 |
Family
ID=81773466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210167636.XA Active CN114578963B (en) | 2022-02-23 | 2022-02-23 | Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114578963B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116595455A (en) * | 2023-05-30 | 2023-08-15 | 江南大学 | Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction |
CN116662742A (en) * | 2023-06-28 | 2023-08-29 | 北京理工大学 | Brain electrolysis code method based on hidden Markov model and mask empirical mode decomposition |
CN118436317A (en) * | 2024-07-08 | 2024-08-06 | 山东锋士信息技术有限公司 | Sleep stage classification method and system based on multi-granularity feature fusion |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018014436A1 (en) * | 2016-07-18 | 2018-01-25 | 天津大学 | Emotion eeg recognition method providing emotion recognition model time robustness |
CN109711383A (en) * | 2019-01-07 | 2019-05-03 | 重庆邮电大学 | Convolutional neural networks Mental imagery EEG signal identification method based on time-frequency domain |
CN111329474A (en) * | 2020-03-04 | 2020-06-26 | 西安电子科技大学 | Electroencephalogram identity recognition method and system based on deep learning and information updating method |
WO2020151144A1 (en) * | 2019-01-24 | 2020-07-30 | 五邑大学 | Generalized consistency-based fatigue classification method for constructing brain function network and relevant vector machine |
CN112353407A (en) * | 2020-10-27 | 2021-02-12 | 燕山大学 | Evaluation system and method based on active training of neurological rehabilitation |
CN112784736A (en) * | 2021-01-21 | 2021-05-11 | 西安理工大学 | Multi-mode feature fusion character interaction behavior recognition method |
CN113011239A (en) * | 2020-12-02 | 2021-06-22 | 杭州电子科技大学 | Optimal narrow-band feature fusion-based motor imagery classification method |
CN113180659A (en) * | 2021-01-11 | 2021-07-30 | 华东理工大学 | Electroencephalogram emotion recognition system based on three-dimensional features and cavity full convolution network |
Non-Patent Citations (3)
Title |
---|
FENG Jin; WANG Xingyu; JIN Jing: "Motor imagery potential recognition based on a support vector machine multi-classifier", Chinese Journal of Tissue Engineering Research and Clinical Rehabilitation, no. 09 *
YANG Hao; ZHANG Junran; JIANG Xiaomei; LIU Fei: "Research on recognition of emotional states represented by EEG signals based on deep belief networks", Journal of Biomedical Engineering, no. 02 *
CHAI Bing; LI Dongdong; WANG Zhe; GAO Daqi: "EEG emotion recognition fusing frequency and channel convolutional attention", Computer Science, vol. 48, no. 12 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116595455A (en) * | 2023-05-30 | 2023-08-15 | 江南大学 | Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction |
CN116595455B (en) * | 2023-05-30 | 2023-11-10 | 江南大学 | Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction |
CN116662742A (en) * | 2023-06-28 | 2023-08-29 | 北京理工大学 | Brain electrolysis code method based on hidden Markov model and mask empirical mode decomposition |
CN118436317A (en) * | 2024-07-08 | 2024-08-06 | 山东锋士信息技术有限公司 | Sleep stage classification method and system based on multi-granularity feature fusion |
Also Published As
Publication number | Publication date |
---|---|
CN114578963B (en) | 2024-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114578963B (en) | Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion | |
CN109784023B (en) | Steady-state vision-evoked electroencephalogram identity recognition method and system based on deep learning | |
Palaniappan et al. | Biometrics from brain electrical activity: A machine learning approach | |
El-Fiqi et al. | Convolution neural networks for person identification and verification using steady state visual evoked potential | |
CN109497990B (en) | Electrocardiosignal identity recognition method and system based on canonical correlation analysis | |
CN114533086B (en) | Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation | |
CN101828921A (en) | Identity identification method based on visual evoked potential (VEP) | |
Keshishzadeh et al. | Improved EEG based human authentication system on large dataset | |
Nait-Ali | Hidden biometrics: Towards using biosignals and biomedical images for security applications | |
CN111898526A (en) | Myoelectric gesture recognition method based on multi-stream convolution neural network | |
CN110543831A (en) | brain print identification method based on convolutional neural network | |
Lederman et al. | Alternating diffusion for common manifold learning with application to sleep stage assessment | |
CN108470182B (en) | Brain-computer interface method for enhancing and identifying asymmetric electroencephalogram characteristics | |
Jianfeng et al. | Multi-feature authentication system based on event evoked electroencephalogram | |
CN115414051A (en) | Emotion classification and recognition method of electroencephalogram signal self-adaptive window | |
Dharmaprani et al. | A comparison of independent component analysis algorithms and measures to discriminate between EEG and artifact components | |
Kuila et al. | Feature extraction of electrocardiogram signal using machine learning classification | |
Zhou et al. | Phase space reconstruction, geometric filtering based Fisher discriminant analysis and minimum distance to the Riemannian means algorithm for epileptic seizure classification | |
CN106650685B (en) | Identity recognition method and device based on electrocardiogram signal | |
CN109117790B (en) | Brain print identification method based on frequency space index | |
CN116595434A (en) | Lie detection method based on dimension and classification algorithm | |
Li et al. | Feature extraction based on high order statistics measures and entropy for eeg biometrics | |
CN111444489B (en) | Double-factor authentication method based on photoplethysmography sensor | |
Thentu et al. | Ecg biometric using 2d deep convolutional neural network | |
Raghavendra | Sparse representation for accurate person recognition using hand vein biometrics |
Legal Events
Date | Code | Title | Description |
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |