CN114912492A - Cross-time individual identification method and system based on electroencephalogram signal deep learning

Cross-time individual identification method and system based on electroencephalogram signal deep learning

Info

Publication number
CN114912492A
Authority
CN
China
Prior art keywords
domain
target domain
loss
target
source
Prior art date
Legal status
Pending
Application number
CN202210557181.2A
Other languages
Chinese (zh)
Inventor
左年明
蒋田仔
缪一帆
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202210557181.2A priority Critical patent/CN114912492A/en
Publication of CN114912492A publication Critical patent/CN114912492A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12 Classification; Matching

Abstract

The invention belongs to the field of cross-time electroencephalogram signal identification, and particularly relates to a cross-time individual identification method, system and device based on electroencephalogram signal deep learning, aiming at solving the problems that traditional identity identification methods are insecure and that differences between electroencephalogram signals acquired at different times reduce recognition accuracy. The invention comprises the following steps: respectively obtaining labelled source-domain electroencephalogram signals and unlabelled target-domain electroencephalogram signals, and performing signal preprocessing to obtain the source domain differential entropy and the target domain differential entropy; inputting them into a joint domain adaptive individual recognition network and iteratively performing network training; acquiring the electroencephalogram signal to be identified of the target domain and performing signal preprocessing to obtain the target domain differential entropy to be identified; and obtaining the individual identification result of the electroencephalogram signal to be identified of the target domain through the trained joint domain adaptive individual recognition network. The invention overcomes the insecurity of traditional identity identification and the differences caused by different electroencephalogram acquisition times, and can guarantee individual identification accuracy while improving identification efficiency.

Description

Cross-time individual identification method and system based on electroencephalogram signal deep learning
Technical Field
The invention belongs to the field of cross-time electroencephalogram signal identification, and particularly relates to a cross-time individual identification method and system based on electroencephalogram signal deep learning.
Background
At present, individual identity identification is mainly carried out by means of account passwords, fingerprint recognition, face recognition and the like. These approaches carry a risk of leakage or theft, so security cannot be guaranteed.
With the progress of electroencephalogram acquisition technology, multi-channel electroencephalogram signals can be measured from the scalp at high resolution, and electroencephalogram acquisition and recognition are now widely applied to individual identity identification.
However, differences between electroencephalogram signals acquired at different times reduce identification accuracy. The field therefore needs an electroencephalogram-based individual identification method that reduces the temporal difference of the signals, overcoming the variation caused by different acquisition times while still guaranteeing identification accuracy.
Disclosure of Invention
In order to solve the problems in the prior art that traditional identification methods offer low security and that existing electroencephalogram identification methods cannot guarantee identification accuracy because of temporal differences in the electroencephalogram signals, the invention provides a cross-time individual identification method based on electroencephalogram signal deep learning, which comprises the following steps:
respectively obtaining a source domain EEG signal with a label and a target domain EEG signal without a label, and performing signal preprocessing to obtain a source domain differential entropy and a target domain differential entropy;
inputting the source domain differential entropy and the target domain differential entropy into a joint domain adaptive individual recognition network, and iteratively performing network training;
acquiring an electroencephalogram signal to be identified of a target domain, and performing signal preprocessing to obtain a target domain differential entropy to be identified;
and based on the target domain differential entropy to be recognized, obtaining an individual recognition result of the electroencephalogram signal to be recognized of the target domain through a trained joint domain adaptive individual recognition network.
In some preferred embodiments, the signal preprocessing comprises electroencephalogram bad segment removal, 1-75Hz filtering, frequency band signal extraction, frequency band signal differential entropy calculation, and feature smoothing.
In some preferred embodiments, the frequency band signal extraction comprises 1-3Hz delta signal extraction, 4-7Hz theta signal extraction, 8-13Hz alpha signal extraction, 14-30Hz beta signal extraction, and 31-50Hz gamma signal extraction.
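As an illustration only, the band decomposition listed above can be sketched as follows. This is a minimal sketch, not the claimed implementation: the use of SciPy, the Butterworth filter order and the function names are assumptions, and only the band edges come from the preceding paragraph.

```python
# Sketch of the five-band decomposition (illustrative; filter design is an assumption).
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {            # Hz, as listed above
    "delta": (1, 3),
    "theta": (4, 7),
    "alpha": (8, 13),
    "beta": (14, 30),
    "gamma": (31, 50),
}

def extract_bands(eeg, fs, order=4):
    """eeg: (n_channels, n_samples) array, already 1-75 Hz filtered and artifact-free."""
    out = {}
    nyq = fs / 2.0
    for name, (lo, hi) in BANDS.items():
        b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
        out[name] = filtfilt(b, a, eeg, axis=-1)  # zero-phase band-pass per channel
    return out
```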
In some preferred embodiments, the joint domain adapted individual recognition network comprises a feature extractor, a classifier and a domain discriminator, and the training process is as follows:
respectively extracting the features of the source domain differential entropy and the target domain differential entropy through a feature extractor;
inputting the feature extraction result into a classifier, and calculating the classification loss of a source domain and a target domain; inputting the feature extraction result into a domain discriminator, and calculating the domain discrimination loss of the source domain and the target domain; inputting the feature extraction result into a full connection layer, and calculating the joint loss of a source domain and a target domain;
obtaining a total loss of a joint domain adaptive individual identification network based on the classification loss, the domain discrimination loss and the joint loss;
and performing network iterative training in the direction of decreasing total loss based on the obtained training data set until the total loss value is lower than a set threshold or a set number of training iterations is reached, so as to obtain a trained joint domain adaptive individual recognition network.
In some preferred embodiments, the classification loss of the source domain and the target domain is expressed as:

L_class = (1/M) · Σ_{m=1}^{M} H(ŷ_m, y_m)

wherein L_class represents the classification loss of the source domain and the target domain, H represents the cross-entropy loss, m is the m-th training sample in the current training batch, M is the total number of training samples in the current training batch, ŷ denotes the individual label of a training sample predicted by the classifier, ŷ_m denotes the individual label of the m-th training sample predicted by the classifier, y denotes the actual individual label of a training sample, and y_m denotes the actual individual label of the m-th training sample.
In some preferred embodiments, the domain discrimination loss of the source domain and the target domain is expressed as:

L_domain = (1/M) · Σ_{m=1}^{M} H(p̂_m, p_m)

wherein L_domain represents the domain discrimination loss of the source domain and the target domain, H represents the cross-entropy loss, m is the m-th training sample in the current training batch, M is the total number of training samples in the current training batch, p̂ denotes the domain label of a training sample predicted by the domain discriminator, p̂_m denotes the domain label of the m-th training sample predicted by the domain discriminator, p denotes the actual domain label of a training sample, and p_m denotes the actual domain label of the m-th training sample.
In some preferred embodiments, the joint loss of the source domain and the target domain includes an access probability loss and a transition probability loss, which are expressed as:
L_association = L_visit + α·L_trans

wherein L_association represents the joint loss of the source domain and the target domain, L_visit represents the access probability loss, L_trans represents the transition probability loss, and α is an adjustable weight that balances the proportions of L_visit and L_trans in L_association.
In some preferred embodiments, the access probability loss is expressed as:
L_visit = H(V, P_ab)

P_ab^{ij} = exp(M_ij) / Σ_{j'} exp(M_{ij'})

M_ij = <A_i, B_j>

V = 1/|B_j|

wherein H represents the cross-entropy loss, P_ab is the probability matrix of vector accesses from the source domain to the target-domain categories, P_ab^{ij} is the (i, j) entry of the matrix P_ab and represents the probability of the source-domain vector A_i accessing the target-domain vector B_j, M_ij represents the similarity between A_i and B_j, <A_i, B_j> represents the dot product of vector A_i and vector B_j, and V represents the weight vector.
In some preferred embodiments, the transition probability loss is expressed as:
L_trans = H(T, P_aba)

P_aba^{ij} = Σ_k P_ab^{ik} · P_ba^{kj}

T_ij = 1/|A_i|, if class(A_i) = class(A_j); T_ij = 0, otherwise

wherein P_ab is the probability matrix of vector accesses from the source domain to the target-domain categories, P_ba is the probability matrix of vector accesses from the target domain to the source-domain categories, P_aba is the probability that a vector, after accessing the target domain from the source domain, returns to the source-domain category, P_aba^{ij} is the (i, j) entry of the matrix P_aba and represents the probability that the source-domain vector A_i accesses the target-domain vector B_j and then returns to the source-domain vector A_j, T represents the weight vector, and class(A_i) = class(A_j) indicates that vectors A_i and A_j belong to the same category.
In another aspect of the present invention, a time-crossing individual recognition system based on electroencephalogram deep learning is provided, the individual recognition system comprising:
the data acquisition module is configured to respectively acquire the electroencephalogram signals with the labels of the source domain and the electroencephalogram signals without the labels of the target domain, and acquire the electroencephalogram signals to be identified of the target domain without the labels;
the preprocessing module is configured to respectively preprocess the acquired source domain signals and the acquired target domain signals to acquire a source domain differential entropy and a target domain differential entropy, and preprocess the acquired electroencephalogram signals to be identified of the target domain to acquire a target domain differential entropy to be identified;
the network training module is configured to input the source domain differential entropy and the target domain differential entropy into a joint domain adaptive individual recognition network, and iteratively perform network training;
the individual recognition module is configured to obtain, based on the target domain differential entropy to be recognized, an individual recognition result of the electroencephalogram signal to be recognized of the target domain through the trained joint domain adaptive individual recognition network;
and the output module is configured to output the acquired individual identification result of the electroencephalogram signal to be identified in the target domain.
The invention has the beneficial effects that:
(1) According to the cross-time individual recognition method based on electroencephalogram signal deep learning of the invention, electroencephalogram signals are used as the identification signal and cross-time joint domain adaptation deep learning training is performed, reducing the marginal-distribution and conditional-distribution differences between the source domain data and the target domain data, thereby reducing the temporal difference between the source domain and the target domain, overcoming the insecurity of identity identification by traditional means, and enabling practical application of electroencephalogram-based individual recognition.
(2) According to the cross-time individual identification method based on electroencephalogram signal deep learning of the invention, newly acquired unlabelled electroencephalogram signals are identified by using labelled electroencephalogram signals acquired at different earlier times, which reduces the influence of the temporal difference of the electroencephalogram signals, further improves the accuracy of individual identification using electroencephalogram signals, and reduces the time required for identification.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of the cross-time individual identification method based on electroencephalogram deep learning;
FIG. 2 is a schematic diagram of a joint domain adaptive individual recognition network of the cross-time individual recognition method based on electroencephalogram deep learning.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides a cross-time individual recognition method based on electroencephalogram signal deep learning, which not only overcomes the insecurity of identity identification by traditional means, but also overcomes the differences caused by different electroencephalogram signal acquisition times while ensuring the necessary recognition accuracy.
The invention relates to a time-span individual recognition method based on electroencephalogram signal deep learning, which comprises the following steps:
respectively obtaining a source domain EEG signal with a label and a target domain EEG signal without a label, and performing signal preprocessing to obtain a source domain differential entropy and a target domain differential entropy;
inputting the source domain differential entropy and the target domain differential entropy into a joint domain adaptive individual recognition network, and iteratively performing network training;
acquiring an electroencephalogram signal to be identified of a target domain, and performing signal preprocessing to obtain a target domain differential entropy to be identified;
and based on the target domain differential entropy to be recognized, obtaining an individual recognition result of the electroencephalogram signal to be recognized of the target domain through a trained joint domain adaptive individual recognition network.
In order to more clearly explain the cross-time individual identification method based on electroencephalogram deep learning, the following describes each step in the embodiment of the present invention in detail with reference to fig. 1.
The time-crossing individual recognition method based on electroencephalogram signal deep learning in the first embodiment of the invention comprises the steps of S10-S40, and the steps are described in detail as follows:
the individual identification of the cross-time based on the brain electrical signal mainly comprises two layers: 1. extracting the characteristics of the electroencephalogram signals; 2. a deep learning method for cross-time individual recognition of electroencephalogram signal features.
And step S10, respectively obtaining the EEG signals with the labels of the source domain and the EEG signals without the labels of the target domain, and performing signal preprocessing to obtain a source domain differential entropy and a target domain differential entropy.
The signal preprocessing comprises electroencephalogram bad segment removal, 1-75 Hz filtering, frequency band signal extraction, frequency band signal differential entropy calculation and feature smoothing.
The frequency band signal extraction comprises 1-3Hz delta signal extraction, 4-7Hz theta signal extraction, 8-13Hz alpha signal extraction, 14-30Hz beta signal extraction and 31-50Hz gamma signal extraction.
The differential entropy of each frequency band signal can be calculated with the method proposed by B.-L. Lu et al. in 2013 (L.-C. Shi, Y.-Y. Jiao, and B.-L. Lu, "Differential entropy feature for EEG-based vigilance estimation," in Proc. 35th Annu. Int. Conf. IEEE EMBS, IEEE, 2013), as shown in formula (1):

h(X) = -∫ (1/√(2πσ²)) · e^(-(x-μ)²/(2σ²)) · log[(1/√(2πσ²)) · e^(-(x-μ)²/(2σ²))] dx = (1/2) · log(2πeσ²)   (1)

wherein x is the time series of the electroencephalogram signal in the current frequency band, h(X) is the differential entropy of x, and μ and σ are the location parameter and dispersion parameter of the Gaussian distribution N(μ, σ²) that x is assumed to follow.
The electroencephalogram sequence within one time window has n channels, each channel has 5 frequency bands, and one differential entropy value is calculated for each frequency band, so n × 5 differential entropy values are obtained in one time window.
The differential entropy values are then smoothed to obtain the smoothed differential entropy, and the preprocessed data of the source domain and the target domain are recorded as the source domain differential entropy and the target domain differential entropy, respectively.
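For illustration, the differential entropy features of formula (1) and the subsequent smoothing can be computed as in the following sketch. The moving-average smoothing and the window handling are assumptions, since the text does not fix a particular smoothing scheme.

```python
# Sketch of per-window differential entropy (Eq. (1)) and feature smoothing.
import numpy as np

def differential_entropy(band_window):
    """band_window: (n_channels, n_samples) band-limited EEG in one time window.
    Returns one DE value per channel: 0.5 * ln(2*pi*e*sigma^2)."""
    var = band_window.var(axis=-1)
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

def window_features(band_windows):
    """band_windows: dict band name -> (n_windows, n_channels, n_samples) array.
    Returns an (n_windows, n_channels * n_bands) feature matrix (n x 5 DE values per window)."""
    per_band = [np.stack([differential_entropy(w) for w in wins])   # (n_windows, n_channels)
                for wins in band_windows.values()]
    return np.concatenate(per_band, axis=-1)

def smooth_features(features, k=5):
    """Moving-average smoothing over consecutive windows (assumed smoothing scheme)."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, features)
```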
And step S20, inputting the source domain differential entropy and the target domain differential entropy into a joint domain adaptive individual recognition network, and iteratively performing network training.
The smoothed differential entropy values are converted into one-dimensional vectors, one vector per time window. The vectors of the source domain and the target domain are used to train the joint domain adaptive individual recognition network, which comprises a feature extractor, a classifier and a domain discriminator; the training process comprises the following steps:
and step A10, respectively extracting the features of the source domain differential entropy and the target domain differential entropy through a feature extractor.
Only the source domain data carry individual identity labels: each window of data, i.e. one feature vector, corresponds to one individual identity label, and feature vectors of different individuals correspond to different individual labels.
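Before the loss terms of steps A20 to A40 are detailed, the three components named in step A10 could, purely as an illustration, be realised as in the following PyTorch sketch; the layer widths, depths and activation choices are assumptions and are not taken from the patent.

```python
# Sketch of the joint domain adaptive individual recognition network (illustrative sizes).
import torch.nn as nn

class JointDomainAdaptationNet(nn.Module):
    def __init__(self, in_dim, n_subjects, hidden=256, feat_dim=128):
        super().__init__()
        self.feature_extractor = nn.Sequential(               # shared by source and target samples
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, n_subjects)     # predicts the individual identity
        self.domain_discriminator = nn.Sequential(            # predicts source vs. target domain
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x):
        features = self.feature_extractor(x)
        return features, self.classifier(features)
```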
Step A20, inputting the feature extraction result into a classifier, and calculating the classification loss of the source domain and the target domain, as shown in formula (2):
L_class = (1/M) · Σ_{m=1}^{M} H(ŷ_m, y_m)   (2)

wherein L_class represents the classification loss of the source domain and the target domain, H represents the cross-entropy loss, m is the m-th training sample in the current training batch, M is the total number of training samples in the current training batch, ŷ denotes the individual label of a training sample predicted by the classifier, ŷ_m denotes the individual label of the m-th training sample predicted by the classifier, y denotes the actual individual label of a training sample, and y_m denotes the actual individual label of the m-th training sample.
Domain discrimination loss L_domain: the domain discriminator first applies gradient reversal to the features output by the feature extractor and then passes them through several fully connected layers. With gradient reversal, the smaller the domain discrimination loss L_domain, the less able the domain discriminator is to tell whether a feature comes from the source domain or the target domain, which reduces the difference in the marginal distributions of the source-domain and target-domain data along the time dimension. All source domain data are labelled as the source domain and all target domain data are labelled as the target domain; the domain discriminator judges whether each sample comes from the source domain or the target domain, and the domain discrimination loss L_domain is calculated accordingly.
Inputting the feature extraction result into a domain discriminator, and calculating the domain discrimination loss of the source domain and the target domain, as shown in formula (3):
L_domain = (1/M) · Σ_{m=1}^{M} H(p̂_m, p_m)   (3)

wherein L_domain represents the domain discrimination loss of the source domain and the target domain, H represents the cross-entropy loss, m is the m-th training sample in the current training batch, M is the total number of training samples in the current training batch, p̂ denotes the domain label of a training sample predicted by the domain discriminator, p̂_m denotes the domain label of the m-th training sample predicted by the domain discriminator, p denotes the actual domain label of a training sample, and p_m denotes the actual domain label of the m-th training sample.
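The gradient reversal mentioned above is a standard construction; a minimal PyTorch sketch is given below, where the reversal strength lam is an assumed hyperparameter rather than a value specified by the patent.

```python
# Gradient reversal layer: identity in the forward pass, sign-flipped gradient in the backward pass.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # no gradient w.r.t. lam

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Domain branch usage: domain_logits = net.domain_discriminator(grad_reverse(features, lam))
```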
Joint loss L_association: the probability with which a source-domain vector accesses a target-domain vector is called the access probability, and the probability with which a vector, after moving from the source domain to the target domain, returns to a source-domain vector of its original category is called the transition probability.
Inputting the feature extraction result into a full connection layer, and calculating the joint loss of the source domain and the target domain, as shown in formula (4):
L_association = L_visit + α·L_trans   (4)

wherein L_association represents the joint loss of the source domain and the target domain, L_visit represents the access probability loss, L_trans represents the transition probability loss, and α is an adjustable weight that balances the proportions of L_visit and L_trans in L_association.
The access probability loss is expressed as shown in equation (5):

L_visit = H(V, P_ab)   (5)

wherein H represents the cross-entropy loss, P_ab is the probability matrix of vector accesses from the source domain to the target-domain categories, and V represents the weight vector.

P_ab^{ij} is the (i, j) entry of the matrix P_ab and represents the probability of the source-domain vector A_i accessing the target-domain vector B_j; it is calculated as shown in equation (6):

P_ab^{ij} = exp(M_ij) / Σ_{j'} exp(M_{ij'})   (6)

wherein M_ij represents the similarity between A_i and B_j.

The weight vector V is expressed as shown in equation (7):

V = 1/|B_j|   (7)

wherein |B_j| represents the modulus of the target-domain vector B_j.

The similarity M_ij between A_i and B_j is obtained by calculating the dot product of A_i and B_j, as shown in equation (8):

M_ij = <A_i, B_j>   (8)

wherein <A_i, B_j> represents the dot product of A_i and B_j.
The transition probability loss is expressed as shown in equation (9):

L_trans = H(T, P_aba)   (9)

wherein P_ab is the probability matrix of vector accesses from the source domain to the target-domain categories, P_ba is the probability matrix of vector accesses from the target domain to the source-domain categories, P_aba is the probability that a vector, after accessing the target domain from the source domain, returns to the source-domain category, and T represents the weight vector.

P_aba^{ij} is the (i, j) entry of the matrix P_aba and represents the probability that the source-domain vector A_i accesses the target-domain vector B_j and then returns to the source-domain vector A_j; it is calculated as shown in equation (10):

P_aba^{ij} = Σ_k P_ab^{ik} · P_ba^{kj}   (10)

The weight vector T is expressed as shown in equation (11):

T_ij = 1/|A_i|, if class(A_i) = class(A_j); T_ij = 0, otherwise   (11)

wherein |A_i| represents the modulus of the source-domain vector A_i, and class(A_i) = class(A_j) indicates that vectors A_i and A_j belong to the same category; that is, when A_i and A_j belong to the same category the weight is 1/|A_i|, and otherwise it is 0.
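Formulas (4)-(11) can be implemented as in the sketch below. The sketch follows the standard associative formulation (row-softmax access probabilities, a uniform visit target, and a same-class round-trip target), which the equations above mirror; the exact normalisations of V and T in code are therefore stated as assumptions rather than as the patent's definitive implementation.

```python
# Sketch of the joint (association) loss: visit loss plus transition loss, Eqs. (4)-(11).
import torch
import torch.nn.functional as F

def association_loss(feat_src, feat_tgt, labels_src, alpha=1.0, eps=1e-8):
    """feat_src: (Ns, d) source features A, feat_tgt: (Nt, d) target features B,
    labels_src: (Ns,) individual labels of the source batch."""
    M = feat_src @ feat_tgt.t()                 # M_ij = <A_i, B_j>, Eq. (8)
    p_ab = F.softmax(M, dim=1)                  # source -> target access probabilities, Eq. (6)
    p_ba = F.softmax(M.t(), dim=1)              # target -> source access probabilities
    p_aba = p_ab @ p_ba                         # round-trip probabilities, Eq. (10)

    # Visit loss: every target sample should be visited with equal probability (Eqs. (5), (7)).
    p_visit = p_ab.mean(dim=0)                                  # (Nt,)
    v = torch.full_like(p_visit, 1.0 / feat_tgt.size(0))
    l_visit = -(v * torch.log(p_visit + eps)).sum()

    # Transition loss: round trips should end on a source sample of the same class (Eqs. (9), (11)).
    same_class = (labels_src.unsqueeze(0) == labels_src.unsqueeze(1)).float()
    t = same_class / same_class.sum(dim=1, keepdim=True)
    l_trans = -(t * torch.log(p_aba + eps)).sum(dim=1).mean()

    return l_visit + alpha * l_trans            # Eq. (4)
```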
Step A30, obtaining the total loss of the joint domain adaptive individual identification network based on the classification loss, the domain identification loss and the joint loss, as shown in formula (12):
L = L_class + L_domain + L_association   (12)
Step A40, performing network iterative training in the direction of decreasing total loss based on the obtained training data set until the total loss value is lower than a set threshold or a set number of training iterations is reached, so as to obtain the trained joint domain adaptive individual recognition network.
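A single training step combining the three losses of formula (12) might look as follows. The optimiser, the batch construction and the equal loss weights are assumptions; net is assumed to be the network sketched after step A10, and grad_reverse and association_loss refer to the earlier sketches.

```python
# One training step over a labelled source batch and an unlabelled target batch (Eq. (12)).
import torch
import torch.nn.functional as F

def train_step(net, optimizer, x_src, y_src, x_tgt, lam=1.0):
    feat_s, logits_s = net(x_src)                    # labelled source-domain windows
    feat_t, _ = net(x_tgt)                           # unlabelled target-domain windows

    l_class = F.cross_entropy(logits_s, y_src)       # classification loss, Eq. (2), source labels only

    feats = torch.cat([feat_s, feat_t], dim=0)
    domain_logits = net.domain_discriminator(grad_reverse(feats, lam))
    domain_labels = torch.cat([torch.zeros(len(feat_s), dtype=torch.long),
                               torch.ones(len(feat_t), dtype=torch.long)])
    l_domain = F.cross_entropy(domain_logits, domain_labels)   # domain discrimination loss, Eq. (3)

    l_assoc = association_loss(feat_s, feat_t, y_src)          # joint loss, Eq. (4)

    total = l_class + l_domain + l_assoc                       # total loss, Eq. (12)
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```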
Through multiple iterations over the training data, the total loss value decreases and the trained joint domain adaptive individual recognition network is obtained, which reduces the temporal difference between the source-domain and target-domain electroencephalogram data and thus improves the accuracy of cross-time electroencephalogram individual recognition.
And step S30, acquiring the electroencephalogram signal to be recognized of the target domain, and performing signal preprocessing to acquire the target domain differential entropy to be recognized.
For the electroencephalogram signals to be identified in the target domain, preprocessing is also performed through the method of the step S10, including electroencephalogram signal bad section removal, 1-75Hz filtering, frequency band signal extraction, frequency band signal differential entropy calculation and feature smoothing, and the target domain differential entropy to be identified is obtained.
And step S40, based on the target domain differential entropy to be recognized, acquiring an individual recognition result of the electroencephalogram signal to be recognized of the target domain through the trained joint domain adaptive individual recognition network.
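At inference time (steps S30-S40), the trained network is simply applied to the preprocessed target-domain features. The following is a minimal sketch; the majority vote over windows is an assumed aggregation, not one specified by the patent.

```python
# Sketch of identification on target-domain windows to be recognised (steps S30-S40).
import torch

@torch.no_grad()
def identify(net, de_features_to_recognize):
    """de_features_to_recognize: (n_windows, n_channels * n_bands) smoothed DE features."""
    x = torch.as_tensor(de_features_to_recognize, dtype=torch.float32)
    _, logits = net(x)
    per_window = logits.argmax(dim=1)                 # predicted individual per time window
    # One possible aggregation (an assumption): majority vote across windows.
    return torch.mode(per_window).values.item()
```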
In summary, the cross-time individual recognition deep learning method based on electroencephalogram features of the invention takes previously collected labelled electroencephalogram signals as source domain data and the unlabelled electroencephalogram signals to be recognized as target domain data, and reduces the marginal-distribution and conditional-distribution differences between the source domain data and the target domain data through a deep learning network framework, thereby reducing the temporal difference between the source domain and the target domain.
Therefore, newly acquired unlabeled electroencephalogram signals can be subjected to individual identification by utilizing the labeled electroencephalogram signals acquired at different times in the past, the influence of time difference of the electroencephalogram signals is reduced, the accuracy of the individual identification by utilizing the electroencephalogram signals is improved, and the time required by identification is reduced.
Although the foregoing embodiments have described the steps in the foregoing sequence, those skilled in the art will understand that, in order to achieve the effect of the present embodiment, different steps are not necessarily performed in such a sequence, and may be performed simultaneously (in parallel) or in an inverse sequence, and these simple variations are within the scope of the present invention.
The invention provides a time-crossing individual recognition system based on electroencephalogram deep learning, which comprises:
the data acquisition module is configured to respectively acquire the electroencephalogram signals with the labels of the source domain and the electroencephalogram signals without the labels of the target domain, and acquire the electroencephalogram signals to be identified of the target domain without the labels;
the preprocessing module is configured to respectively preprocess the acquired source domain signals and the acquired target domain signals to acquire a source domain differential entropy and a target domain differential entropy, and preprocess the acquired electroencephalogram signals to be identified of the target domain to acquire a target domain differential entropy to be identified;
the network training module is configured to input the source domain differential entropy and the target domain differential entropy into a joint domain adaptive individual recognition network, and iteratively perform network training;
the individual recognition module is configured to obtain, based on the target domain differential entropy to be recognized, an individual recognition result of the electroencephalogram signal to be recognized of the target domain through the trained joint domain adaptive individual recognition network;
and the output module is configured to output the acquired individual identification result of the electroencephalogram signal to be identified in the target domain.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process and related description of the system described above may refer to the corresponding process in the foregoing method embodiments, and will not be described herein again.
It should be noted that, the time-crossing individual recognition system based on electroencephalogram deep learning provided in the foregoing embodiment is only illustrated by the division of the functional modules, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
An electronic apparatus according to a third embodiment of the present invention includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the processor for execution by the processor to implement the above-described cross-time individual recognition method based on electroencephalogram signal deep learning.
A computer-readable storage medium of a fourth embodiment of the present invention stores computer instructions for being executed by the computer to implement the above-mentioned cross-time individual identification method based on electroencephalogram signal deep learning.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those of skill in the art would appreciate that the various illustrative modules, method steps, and modules described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that programs corresponding to the software modules, method steps may be located in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.

Claims (10)

1. A time-crossing individual recognition method based on electroencephalogram signal deep learning is characterized by comprising the following steps:
respectively obtaining a source domain EEG signal with a label and a target domain EEG signal without a label, and performing signal preprocessing to obtain a source domain differential entropy and a target domain differential entropy;
inputting the source domain differential entropy and the target domain differential entropy into a joint domain adaptive individual recognition network, and iteratively performing network training;
acquiring an electroencephalogram signal to be identified of a target domain, and performing signal preprocessing to acquire a target domain differential entropy to be identified;
and based on the target domain differential entropy to be recognized, obtaining an individual recognition result of the electroencephalogram signal to be recognized of the target domain through a trained joint domain adaptive individual recognition network.
2. The electroencephalogram signal deep learning-based time-crossing individual recognition method according to claim 1, wherein the signal preprocessing comprises electroencephalogram signal bad section removal, 1-75Hz filtering, frequency band signal extraction, frequency band signal differential entropy calculation and feature smoothing.
3. The method for cross-time individual recognition based on electroencephalogram signal deep learning of claim 2, wherein the frequency band signal extraction comprises 1-3Hz delta signal extraction, 4-7Hz theta signal extraction, 8-13Hz alpha signal extraction, 14-30Hz beta signal extraction, and 31-50Hz gamma signal extraction.
4. The electroencephalogram signal deep learning-based time-crossing individual recognition method of claim 1, wherein the joint domain-adaptive individual recognition network comprises a feature extractor, a classifier and a domain discriminator, and the training process comprises:
respectively extracting the features of the source domain differential entropy and the target domain differential entropy through a feature extractor;
inputting the feature extraction result into a classifier, and calculating the classification loss of a source domain and a target domain; inputting the feature extraction result into a domain discriminator, and calculating the domain discrimination loss of the source domain and the target domain; inputting the feature extraction result into a full connection layer, and calculating the joint loss of a source domain and a target domain;
obtaining a total loss of a joint domain adaptive individual identification network based on the classification loss, the domain discrimination loss and the joint loss;
and performing network iterative training in the direction of decreasing total loss based on the obtained training data set until the total loss value is lower than a set threshold or a set number of training iterations is reached, so as to obtain a trained joint domain adaptive individual recognition network.
5. The method of claim 4, wherein the classification loss of the source domain and the target domain is expressed as:
L_class = (1/M) · Σ_{m=1}^{M} H(ŷ_m, y_m)

wherein L_class represents the classification loss of the source domain and the target domain, H represents the cross-entropy loss, m is the m-th training sample in the current training batch, M is the total number of training samples in the current training batch, ŷ denotes the individual label of a training sample predicted by the classifier, ŷ_m denotes the individual label of the m-th training sample predicted by the classifier, y denotes the actual individual label of a training sample, and y_m denotes the actual individual label of the m-th training sample.
6. The method of claim 4, wherein the domain discrimination loss of the source domain and the target domain is expressed as:
L_domain = (1/M) · Σ_{m=1}^{M} H(p̂_m, p_m)

wherein L_domain represents the domain discrimination loss of the source domain and the target domain, H represents the cross-entropy loss, m is the m-th training sample in the current training batch, M is the total number of training samples in the current training batch, p̂ denotes the domain label of a training sample predicted by the domain discriminator, p̂_m denotes the domain label of the m-th training sample predicted by the domain discriminator, p denotes the actual domain label of a training sample, and p_m denotes the actual domain label of the m-th training sample.
7. The method of claim 4, wherein the joint loss of the source domain and the target domain comprises an access probability loss and a transition probability loss, which are expressed as:
L_association = L_visit + α·L_trans

wherein L_association represents the joint loss of the source domain and the target domain, L_visit represents the access probability loss, L_trans represents the transition probability loss, and α is an adjustable weight that balances the proportions of L_visit and L_trans in L_association.
8. The method of claim 7, wherein the access probability loss is expressed as:
L_visit = H(V, P_ab)

P_ab^{ij} = exp(M_ij) / Σ_{j'} exp(M_{ij'})

M_ij = <A_i, B_j>

V = 1/|B_j|

wherein H represents the cross-entropy loss, P_ab is the probability matrix of vector accesses from the source domain to the target-domain categories, P_ab^{ij} is the (i, j) entry of the matrix P_ab and represents the probability of the source-domain vector A_i accessing the target-domain vector B_j, M_ij represents the similarity between A_i and B_j, <A_i, B_j> represents the dot product of vector A_i and vector B_j, and V represents the weight vector.
9. The method of claim 8, wherein the transition probability loss is expressed as:
L_trans = H(T, P_aba)

P_aba^{ij} = Σ_k P_ab^{ik} · P_ba^{kj}

T_ij = 1/|A_i|, if class(A_i) = class(A_j); T_ij = 0, otherwise

wherein P_ab is the probability matrix of vector accesses from the source domain to the target-domain categories, P_ba is the probability matrix of vector accesses from the target domain to the source-domain categories, P_aba is the probability that a vector, after accessing the target domain from the source domain, returns to the source-domain category, P_aba^{ij} is the (i, j) entry of the matrix P_aba and represents the probability that the source-domain vector A_i accesses the target-domain vector B_j and then returns to the source-domain vector A_j, T represents the weight vector, and class(A_i) = class(A_j) indicates that vectors A_i and A_j belong to the same category.
10. An inter-time individual recognition system based on electroencephalogram signal deep learning, characterized in that the individual recognition system comprises:
the data acquisition module is configured to respectively acquire the electroencephalogram signals with the labels of the source domain and the electroencephalogram signals without the labels of the target domain, and acquire the electroencephalogram signals to be identified of the target domain without the labels;
the preprocessing module is configured to respectively preprocess the acquired source domain signals and the acquired target domain signals to acquire a source domain differential entropy and a target domain differential entropy, and preprocess the acquired electroencephalogram signals to be identified of the target domain to acquire a target domain differential entropy to be identified;
the network training module is configured to input the source domain differential entropy and the target domain differential entropy into a joint domain adaptive individual recognition network, and iteratively perform network training;
the individual recognition module is configured to obtain, based on the target domain differential entropy to be recognized, an individual recognition result of the electroencephalogram signal to be recognized of the target domain through the trained joint domain adaptive individual recognition network;
and the output module is configured to output the acquired individual identification result of the electroencephalogram signal to be identified in the target domain.
CN202210557181.2A 2022-05-20 2022-05-20 Cross-time individual identification method and system based on electroencephalogram signal deep learning Pending CN114912492A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210557181.2A CN114912492A (en) 2022-05-20 2022-05-20 Cross-time individual identification method and system based on electroencephalogram signal deep learning

Publications (1)

Publication Number Publication Date
CN114912492A 2022-08-16

Family

ID=82769077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210557181.2A Pending CN114912492A (en) 2022-05-20 2022-05-20 Cross-time individual identification method and system based on electroencephalogram signal deep learning

Country Status (1)

Country Link
CN (1) CN114912492A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740221A (en) * 2023-08-16 2023-09-12 之江实验室 Method, device, computer equipment and medium for generating real-time brain function activation graph
CN116740221B (en) * 2023-08-16 2023-10-20 之江实验室 Method, device, computer equipment and medium for generating real-time brain function activation graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination