CN114841192A - Electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling - Google Patents

Electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling

Info

Publication number
CN114841192A
Authority
CN
China
Prior art keywords
space
electroencephalogram
denoising
signal
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210289437.6A
Other languages
Chinese (zh)
Inventor
秦翰林
张昱赓
马琳
王诚
卢长浩
刘嘉伟
王欣达
陈嘉欣
于跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202210289437.6A priority Critical patent/CN114841192A/en
Publication of CN114841192A publication Critical patent/CN114841192A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to an electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling, which comprises the following steps: acquiring a multi-channel electroencephalogram signal; applying an interference-removal decision based on multi-agent reinforcement learning to the acquired multi-channel electroencephalogram signal to obtain a clean signal; and performing detail recovery on the clean signal with space-time attention modeling to obtain an enhanced electroencephalogram signal. The denoising process of the electroencephalogram signal is selected and optimized by a reinforcement-learning mechanism, yielding a clean signal free of multiple types of irrelevant interference and improving the overall signal-to-noise ratio and the saliency of the electroencephalogram features. On this basis, a space-time relation modeling method introducing a transformer attention mechanism exploits the inherent temporal continuity and spatial correlation of the electroencephalogram test signal stream to reconstruct a signal with higher spatial resolution and to expand the channel features of the electroencephalogram source data. The spatial resolution of the signal is thereby improved, which in turn improves recognition accuracy.

Description

Electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling
Technical Field
The invention belongs to the technical field of electroencephalogram intelligent recognition, and relates to an electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling.
Background
The electroencephalogram (EEG) signal is an electrical signal collected and recorded on the scalp with non-invasive flexible electrodes. It is formed by the summation of postsynaptic potentials generated synchronously by a large number of neurons during brain activity and is the overall reflection, on the surface of the cerebral cortex or the scalp, of the physiological activity of brain nerve cells. When a subject imagines limb movement without actually moving a limb, the activity between neurons still produces electrical signals, and when the accumulated energy of these signals exceeds a certain threshold an electroencephalogram signal is generated. Electroencephalogram signals produced by motor imagery exhibit event-related synchronization and desynchronization; by analyzing motor-imagery electroencephalogram signals and classifying their features, the motor intention of the subject can be judged, and external devices can thereby be controlled.
Existing research on intelligent electroencephalogram signal processing focuses on removing the various interferences and artifacts in electroencephalogram signals, intelligently selecting efficient channels, and enhancing the electroencephalogram features required for classification, so that subsequent intelligent classification and recognition can be carried out efficiently. Existing artifact-removal algorithms can remove only a single type of clutter, and stacking multiple clutter-suppression algorithms also affects the electroencephalogram source signal. Existing electroencephalogram enhancement methods can strengthen features such as locally related potentials of brain regions, but they lack enhancement of timing information and of the electroencephalogram details of spatial regions beyond the acquisition electrodes.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an electroencephalogram signal feature enhancement method based on reinforcement learning joint denoising and space-time relation modeling. The technical problem to be solved by the invention is realized by the following technical scheme:
the embodiment of the invention provides an electroencephalogram signal characteristic enhancement method based on reinforcement learning combined denoising and space-time relation modeling, which comprises the following steps:
s1, acquiring multi-channel electroencephalogram signals;
s2, carrying out interference elimination decision based on multi-agent reinforcement learning on the obtained multichannel electroencephalogram signals to obtain clean signals;
and S3, performing detail recovery on the clean signal by adopting space-time attention modeling to obtain an enhanced electroencephalogram signal.
In an embodiment of the present invention, step S2 includes:
s21, inputting the clean signal into a cascade network containing all denoising algorithms to perform first denoising;
step S22, evaluating the obtained first denoising result;
step S23, feeding back the score of the first evaluation result to guide a change of the network structure;
step S24, constructing a new denoising strategy with the changed network structure to process the signals, obtaining a second denoising result;
step S25, evaluating the obtained second denoising result;
step S26, feeding back the score of the second evaluation result to guide a further change of the network structure;
and repeating the above steps to obtain the optimal combined denoising method for the multi-channel electroencephalogram signals.
In an embodiment of the present invention, step S22 includes:
and step S221, evaluating the algorithm model by using a cross-validation strategy.
In one embodiment of the present invention, step S221 includes:
s2211, dividing all electroencephalogram signals of each subject into N equal data subsets, using one subset as a test set, and forming a training set by other N-1 subsets;
step S2212, evaluating the algorithm model according to the test set and the training set to obtain a score of the data of the subject;
in step S2213, the average score of all subjects is used as the final score, and the average score of cross-validation is used as the score of one subject.
In an embodiment of the present invention, step S3 includes:
step S31, applying a spatial attention transformer to encode spatial features on the feature channel dimension;
step S32, slicing the data in the time dimension to perform attention conversion, and obtaining attention characteristics containing time sequence association;
step S33, under the guidance of time sequence characteristics, enhancing the space characteristics;
and step S34, acquiring the enhanced electroencephalogram signal according to the space-time two-dimensional characteristics.
In an embodiment of the present invention, step S31 includes:
step S311, independently dividing and encoding the clean signal;
s312, extracting the characteristics of the electroencephalogram signals of the plurality of independent channels after the spatial coding;
step S313, compressing the extracted feature set into feature vectors through a compression module;
and S314, carrying out multi-dimensional feature aggregation on the compressed feature vectors through activation operation of an activation module, wherein different channels correspond to respective activation factors.
In one embodiment of the present invention, the compression module is expressed as:
z_c = F_sq(X_c) = (1/(H×V)) Σ_{i=1}^{H} Σ_{j=1}^{V} X_c(i, j)
where X ∈ R^(H×V×C) is the feature-map output extracted by the layer preceding the module; X_c ∈ R^(H×V), c ∈ [1, 2, ..., C], denotes the feature map of the c-th channel of X, and X_c(i, j) denotes the data point of X_c at position (i, j); F_sq(·) denotes feature compression along the spatial dimension, which compresses each two-dimensional feature map X_c into a real number z_c; the real number z_c has, to a certain extent, a global perception domain, and the output dimension matches the number of input feature channels.
In an embodiment of the present invention, the activation operation of the activation module is expressed as:
S = σ(W_2 δ(W_1 Z))
where S is the activation factor, σ denotes the sigmoid function operation, and Z is the spatial quantity obtained after H and V are compressed; W_1 and W_2 are the two sets of parameters through which the overall correlation between channels is modeled;
the aggregation result of the multi-dimensional features is acquired through the activation factors;
the aggregation of the multi-dimensional features is expressed as:
x̃_c = s_c · x_c
where, after the multi-dimensional features are aggregated, the input vector x_c becomes x̃_c; s_c is the activation factor of the c-th channel, and x_c represents the values of the input matrix in the c-th channel.
In an embodiment of the present invention, step S32 includes:
step S321, carrying out coding marking on the position of the clean signal in a time sequence;
step S322, compressing the clean signal after the coding marking into one-dimensional data;
step S323, dividing the one-dimensional data into a plurality of small slices with overlapping in the time dimension;
step S324, applying a multi-head attention model to the plurality of small slices to obtain attention features containing the time-sequence association.
In an embodiment of the present invention, step S34 includes:
step S341, combining the time-space two dimensional features;
step S342, carrying out fragment division on the combined characteristics to obtain a plurality of time sequence fragments;
and S343, combining the time sequence segments to carry out flattened block mapping reconstruction on the independent channels so as to obtain the enhanced electroencephalogram signal.
Compared with the prior art, the invention has the beneficial effects that:
the invention aims to overcome the defects of the existing method, provides electroencephalogram original signal interference removal based on reinforcement learning and electroencephalogram signal space-time relation extraction and reconstruction enhancement based on transformer, and realizes high-quality electroencephalogram signal recovery so as to improve the intelligent classification capability of electroencephalogram signals.
The denoising process of the electroencephalogram signal is selected and optimized by a reinforcement-learning mechanism, yielding a clean signal free of multiple types of irrelevant interference and improving the overall signal-to-noise ratio and the saliency of the electroencephalogram features. On this basis, to address the high inter-subject specificity of the EEG acquisition process, a space-time relation modeling method introducing a transformer attention mechanism exploits the inherent temporal continuity and spatial correlation of the EEG test signal stream to reconstruct a signal with higher spatial resolution and to expand the channel features of the EEG source data. The spatial resolution of the signal is thereby improved, which in turn improves recognition accuracy.
Other aspects and features of the present invention will become apparent from the following detailed description, which proceeds with reference to the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims. It should be further understood that the drawings are not necessarily drawn to scale and that, unless otherwise indicated, they are merely intended to conceptually illustrate the structures and procedures described herein.
Drawings
FIG. 1 is a schematic flow chart of an electroencephalogram signal feature enhancement method based on reinforcement learning joint denoising and space-time relationship modeling provided by an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an electroencephalogram signal feature enhancement method based on reinforcement learning joint denoising and space-time relationship modeling provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a denoising module according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of brain electrical spatial feature enhancement under spatio-temporal feature modeling according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a channel attention feature extraction process according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a time attention feature extraction process according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a decoder according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example one
Referring to fig. 1 and fig. 2, fig. 1 and fig. 2 are schematic flow diagrams of an electroencephalogram feature enhancement method based on reinforcement learning joint denoising and space-time relationship modeling according to an embodiment of the present invention. The invention provides an electroencephalogram signal characteristic enhancement method based on reinforcement learning combined denoising and space-time relation modeling, which comprises the following steps:
s1, acquiring multi-channel electroencephalogram signals;
as shown in fig. 2, a subject is subjected to acquisition of multi-channel electroencephalogram signals through a motor imagery evoked paradigm and a multi-channel non-invasive acquisition device to acquire the multi-channel electroencephalogram signals.
And S2, carrying out interference elimination decision based on multi-agent reinforcement learning on the obtained multichannel electroencephalogram signals to obtain clean signals.
An interference-removal decision based on multi-agent reinforcement learning is applied to the acquired multi-channel electroencephalogram signals: features such as artifacts are extracted from the whole multi-channel signal, and the corresponding denoising modules are adaptively constructed to perform combined noise removal. This avoids the introduction, when a single type of artifact is removed as in existing methods, of other interference information that affects multi-channel feature classification. Existing methods adapt insufficiently to diverse noise, may severely distort the denoised bioelectrical signal, and cannot adequately balance the local and global characteristics of the signal.
In one embodiment, step S2 includes:
s21, inputting the clean signal into a cascade network containing all denoising algorithms to perform primary denoising;
and the noise removing mechanism is respectively targeted for different forms of environmental motion noise, myoelectricity artifact noise and obvious emotion fluctuation interference of electroencephalogram testers. Meanwhile, under different scenes, the order of removing the three types of interference is inconsistent, so that the denoising effects are different. Therefore, in the step, a reward and punishment mechanism of the whole denoising process is constructed by reinforcement learning, a combined denoising module comprising three denoising mechanisms is subjected to model structure adjustment, and then a structure decision method for denoising the electroencephalogram signal in different scenes of different subjects is trained end to end.
In the training process, a basic denoising network containing all denoising method modules is established to form an initial denoising strategy, and then an optimal denoising network structure aiming at the current input electroencephalogram signal sequence is iteratively searched after a reward and punishment result is formed by combining an evaluation method. After a large-scale data set training process is utilized, a network model which can realize the quick decision of an optimized denoising method under various interference conditions such as environments and the like aiming at different subjects is obtained, and the high-efficiency removal of artifacts such as emotion clutter, noise and the like in the electroencephalogram test process can be realized.
As shown in fig. 3, a complete denoising network is composed of three layers, which respectively represent a plurality of processing processes for interference of three categories, i.e., electroencephalogram emotion clutter, electromyogram artifacts, and environmental noise, and in the first step, an electroencephalogram signal is input into a cascaded network including all denoising algorithms, and after one-time denoising, step S22 is executed.
Step S22, evaluating the obtained first denoising result;
step S23, feeding back the score of the first evaluation result to guide a change of the network structure;
According to the score of the first evaluation result, feedback is given to guide the change of the network structure, and a new denoising strategy is thereby obtained.
step S24, constructing a new denoising strategy with the changed network structure to process the signals, obtaining a second denoising result;
step S25, evaluating the obtained second denoising result;
The evaluation method of this step is similar to that of step S22 above and is not repeated here.
A new denoising strategy is constructed with the changed network structure to process the same batch of electroencephalogram signals; the obtained result is evaluated and fed back to guide optimization of the network sequence structure.
step S26, feeding back the score of the second evaluation result to guide a further change of the network structure;
and repeating the above steps to obtain the optimal combined denoising method for the multi-channel electroencephalogram signals.
Iterating repeatedly in this way finally yields the optimal combined denoising method for this batch of electroencephalogram signals.
Through training on a large number of data sets, adaptive denoising-structure decisions suited to different scenarios and different subjects can be obtained; in practical application, a denoising strategy is generated dynamically and used to efficiently denoise and recover clean electroencephalogram signals with a high signal-to-noise ratio and high saliency.
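For illustration only, and not as the multi-agent reinforcement-learning implementation itself, the following minimal Python sketch mimics the score-driven search over the ordering of the three denoising modules described above; the three denoiser functions, the evaluate() reward and the search loop are hypothetical placeholders.

# Illustrative sketch: a score-driven search over the ordering of three
# hypothetical denoising modules, standing in for the reinforcement-learning
# structure decision described above. All names are assumptions.
import itertools
import numpy as np

def remove_emotion_clutter(x):      # placeholder denoiser for emotion clutter
    return x

def remove_emg_artifacts(x):        # placeholder denoiser for EMG artifacts
    return x

def remove_environment_noise(x):    # placeholder denoiser for environmental noise
    return x

DENOISERS = {
    "emotion": remove_emotion_clutter,
    "emg": remove_emg_artifacts,
    "env": remove_environment_noise,
}

def evaluate(denoised_eeg, labels):
    """Hypothetical reward, e.g. a cross-validated classification score."""
    return float(np.random.rand())   # stand-in for the real evaluation

def search_denoising_strategy(eeg, labels):
    """Return the module ordering with the best evaluation score."""
    best_order, best_score = None, -np.inf
    for order in itertools.permutations(DENOISERS):
        x = eeg.copy()
        for name in order:            # cascade the modules in this candidate order
            x = DENOISERS[name](x)
        score = evaluate(x, labels)   # score fed back as the reward signal
        if score > best_score:
            best_order, best_score = order, score
    return best_order, best_score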
During training, the correction of the denoising strategy depends to a great extent on the evaluation method; in order to make effective use of a large number of data sets and obtain results with strong generality, a cross-validation method is adopted.
In one embodiment, step S22 includes:
and step S221, evaluating the algorithm model by using a cross-validation strategy.
Specifically, step S221 includes:
s2211, dividing all electroencephalogram signals of each subject into N equal data subsets, using one subset as a test set, and forming a training set by other N-1 subsets;
step S2212, evaluating the algorithm model according to the test set and the training set to obtain a score of the data of the subject;
step S2213, taking the average cross-validation score as the score of one subject, and taking the average score of all subjects as the final score.
After the clean EEG signal is obtained by the combined denoising method, a cross-validation strategy is used to evaluate how much the algorithm model improves on the original EEG signal. This strategy trains the model on a data set that covers all of the data, which gives more reliable accuracy. For each subject, all of that subject's electroencephalogram signals are divided into N equal data subsets; one subset is used as the test set and the other N-1 subsets form the training set. This process is repeated N times to obtain the score for that subject's data. The average cross-validation score is taken as the result for one subject, and the average score over all subjects is then taken as the final score.
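For concreteness, a minimal Python sketch of this per-subject N-fold scoring follows; train_and_score is a hypothetical callable that trains the model on the training folds and returns its score on the held-out fold.

# Per-subject N-fold cross-validation scoring as described above; the final
# score is the mean of the per-subject averages. train_and_score is assumed.
import numpy as np
from sklearn.model_selection import KFold

def score_subject(signals, labels, train_and_score, n_folds=5):
    """Average score over the N folds of one subject's EEG trials."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    fold_scores = []
    for train_idx, test_idx in kf.split(signals):
        fold_scores.append(train_and_score(signals[train_idx], labels[train_idx],
                                           signals[test_idx], labels[test_idx]))
    return float(np.mean(fold_scores))

def final_score(subject_data, train_and_score):
    """Mean over all subjects of their per-subject cross-validation scores."""
    return float(np.mean([score_subject(x, y, train_and_score)
                          for x, y in subject_data]))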
And S3, performing detail recovery on the clean signal by adopting space-time attention modeling to obtain an enhanced electroencephalogram signal.
To address the high inter-subject specificity of the EEG acquisition process, a transformer attention mechanism is introduced for space-time relationship modeling, and signals with higher spatial resolution are reconstructed by exploiting the inherent temporal continuity and spatial correlation of the EEG test signal stream.
After reinforcement learning makes the corresponding decision for an individual's multi-channel signals and the corresponding denoising process is invoked, a highly expressive clean signal is obtained; a certain degree of signal attenuation may nevertheless remain, so the feature information of the electroencephalogram signal in the time and space dimensions is further enhanced. A transformer attention mechanism is used to build an electroencephalogram space-time relationship model, combining temporal continuity and spatial correlation to enhance the spatial resolution.
Specifically, as shown in fig. 4, step S3 includes:
step S31, applying a spatial attention transformer to encode spatial features on the feature channel dimension;
specifically, step S31 includes:
step S311, independently dividing and encoding the clean signal;
s312, extracting the characteristics of the electroencephalogram signals of the plurality of independent channels after the spatial coding;
step 313, compressing the extracted feature set into feature vectors through a compression module;
and S314, carrying out multi-dimensional feature aggregation on the compressed feature vectors through activation operation of an activation module, wherein different channels correspond to respective activation factors.
As shown in fig. 5, the transformer spatial encoder is composed of compression (squeeze) and excitation operations and extracts channel-attention features, improving the expressive ability of the framework and increasing the sensitivity of the model to informative features. An intra-block and inter-block attention mechanism is added to the dense fusion module, so that the network can selectively amplify valuable feature channels and suppress useless ones based on global information.
The denoised clean electroencephalogram signal is still multi-channel data. Transformer spatial coding first divides and encodes the multi-channel signal independently; feature extraction is then performed on the spatially encoded electroencephalogram signals of the individual channels; the extracted feature set is input into the compression module and compressed into feature vectors; finally, the compressed features undergo multi-dimensional feature aggregation in the excitation module, which assigns a corresponding activation factor to each channel.
The compression module is expressed as:
z_c = F_sq(X_c) = (1/(H×V)) Σ_{i=1}^{H} Σ_{j=1}^{V} X_c(i, j)
where X ∈ R^(H×V×C) is the feature-map output extracted by the layer preceding the module, H and V denote the two coordinate axes of the two-dimensional space, and C denotes the channel dimension; X_c ∈ R^(H×V), c ∈ [1, 2, ..., C], denotes the feature map of the c-th channel of X, and X_c(i, j) denotes the data point of X_c at position (i, j); F_sq(·) denotes feature compression along the spatial dimension, which compresses each two-dimensional feature map X_c into a real number z_c; the real number z_c has, to a certain extent, a global perception domain, and the output dimension matches the number of input feature channels.
The activation operation of the activation module is expressed as:
S = σ(W_2 δ(W_1 Z))
where S is the activation factor, σ denotes the sigmoid function operation, and Z is the spatial quantity obtained after H and V are compressed; W_1 and W_2 are the two sets of parameters through which the overall correlation between channels is modeled.
The activation module forms a bottleneck whose parameters are controlled by two fully connected layers: a dimension-reduction layer between them first lowers the channel dimension, which is then restored to the input channel dimension; the aggregation result of the multi-dimensional features is obtained by adjusting the activation factors;
the expression for the aggregation of multidimensional features is:
Figure BDA0003561077590000113
wherein after the multi-dimensional features are aggregated, a vector x is input c Is changed into
Figure BDA0003561077590000114
s c Represents c throughActivation factor of the tract, x c Representing the values of the input matrix at c-channel.
This spatial coding effectively extracts the channel-attention features of the multi-dimensional electroencephalogram signal and fully reflects the spatial relationship between its global and local features. Besides the spatial features, the electroencephalogram test signal, as a complete dynamic behavior, also carries temporal variation characteristics; establishing temporal dependencies makes more effective use of the relationships between the parts of the signal sequence and provides parameters for enhancing and expanding the spatial features. To this end, a multi-head temporal attention mechanism is used to perceive the global time dependence of the EEG signal, as shown in fig. 6.
Step S32, slicing the data in the time dimension to perform attention conversion, and obtaining attention characteristics containing time sequence association;
specifically, as shown in fig. 6, step S32 includes:
step S321, coding and marking the position of the clean signal on the time sequence;
step S322, compressing the clean signal subjected to coding marking into one-dimensional data;
step S323, dividing the one-dimensional data into a plurality of small slices with overlapping in the time dimension;
in step S324, a multi-head attention-allowed model is adopted for the plurality of small slices to obtain the attention features including the time-series correlation.
After the positions of the data in the time sequence are coded and marked, the data are compressed into 1 × T one-dimensional data and then divided into a plurality of small overlapping slices in the time dimension (non-overlapping division would lose part of the context information). The attention mechanism is then used to obtain a representation suited to classification by perceiving global temporal features.
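A one-line sketch of the overlapping slicing is given below (Python/PyTorch); the slice length and stride are illustrative values, chosen only so that adjacent slices overlap and context is preserved.

# Cut a 1 x T sequence into overlapping windows; stride < slice_len ensures
# that neighbouring slices share context, as required above.
import torch

def overlapping_slices(x: torch.Tensor, slice_len: int = 64, stride: int = 32):
    # x: (T,) one-dimensional signal after positional encoding and flattening
    return x.unfold(dimension=0, size=slice_len, step=stride)   # (n_slices, slice_len)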
Unlike the spatial transformer, multi-head attention allows the model to learn dependencies from different angles. The input is divided into h smaller parts, attention is computed on them in parallel, and the outputs are concatenated and then linearly transformed back to the original size. This process can be expressed as:
MHA(X_Q, X_K, X_V) = [head_0, ..., head_{h-1}] W_O
head_i = Attention(X_Q W_i^Q, X_K W_i^K, X_V W_i^V)
where Q, K and V denote the vector queries, keys and values, respectively; X_Q W_i^Q, X_K W_i^K and X_V W_i^V are the query, key and value matrices obtained after linear transformation of the input state vector; W_O denotes the linear transformation that produces the final output; head_i is the attention matrix of the i-th part, and MHA is the final vector representation obtained after the parts are aggregated. The feed-forward module contains two fully connected layers, and a GeLU activation function is connected after the multi-head attention to enhance the perception and non-linear learning ability of the model. The input and output sizes of the feed-forward block are the same, while its internal size is expanded. Layer normalization is applied before the multi-head attention and the feed-forward module, and residual connections are also used for better training. The module of multi-head attention and feed-forward layers is repeated 3 times to obtain the overall effect.
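A minimal PyTorch sketch of such a temporal block is shown below: multi-head attention, a two-layer feed-forward with GeLU, layer normalization before each sub-module, residual connections, and three repetitions; the model width, number of heads and the four-fold inner expansion are assumptions.

# Temporal attention block consistent with the description above; all sizes
# are illustrative assumptions.
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(                  # same input/output size, expanded inside
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_slices, d_model) embedded overlapping time slices
        h = self.norm1(x)                                        # pre-layer normalization
        x = x + self.attn(h, h, h, need_weights=False)[0]        # residual connection
        x = x + self.ff(self.norm2(x))                           # residual connection
        return x

temporal_encoder = nn.Sequential(*[TemporalBlock() for _ in range(3)])   # repeated 3 times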
Step S33, under the guidance of time sequence characteristics, enhancing the space characteristics;
and step S34, acquiring the enhanced electroencephalogram signal according to the space-time two-dimensional characteristics.
Specifically, as shown in fig. 7, step S34 includes:
step S341, combining the time-space two dimensional features;
step S342, carrying out fragment division on the combined characteristics to obtain a plurality of time sequence fragments;
and S343, combining the plurality of time sequence segments to carry out flattened block mapping reconstruction on the N independent channels so as to obtain the enhanced electroencephalogram signal.
The spatial coding and the temporal coding respectively perform extraction-based modeling and generative expansion on different feature dimensions of a segment of the input multi-channel electroencephalogram signal, yielding a multi-channel electroencephalogram sequence with enhanced spatial resolution (an expanded number of electrode channels). The decoder first divides the combined features into segments and then, combining the time-sequence segments, performs flattened block-mapping reconstruction over the N independent channels to obtain the recovered electroencephalogram signal.
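A hedged sketch of this decoder step is given below (PyTorch): the fused space-time features are divided into segments, and each flattened segment is linearly mapped back to a short waveform for every one of the N independent channels; all dimensions are illustrative assumptions rather than the patent's actual parameters.

# Flattened block-mapping reconstruction: one linear projection per segment
# produces seg_len samples for each of the N output channels.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, d_model: int = 64, seg_len: int = 16, n_channels: int = 64):
        super().__init__()
        self.project = nn.Linear(d_model, seg_len * n_channels)
        self.seg_len, self.n_channels = seg_len, n_channels

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_segments, d_model) combined space-time features
        b, n, _ = feats.shape
        out = self.project(feats)                              # (b, n, seg_len * N)
        out = out.view(b, n, self.n_channels, self.seg_len)
        return out.permute(0, 2, 1, 3).reshape(b, self.n_channels, -1)   # (b, N, T)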
After signal reconstruction, the discriminator compares the reconstructed features with the features of matched high-quality electroencephalogram signals (acquired from the subject with a larger number of channels and a higher sampling frequency, and preprocessed into clean electroencephalogram signals) in order to guide the optimization of the spatial and temporal coding in the generator.
For a multi-channel, long-sequence electroencephalogram signal, the effectiveness of the positional inductive bias produced by the spatial transformer block sequence is combined with the expressiveness of the time-segment positional attention coding produced by the temporal transformer module; the signal generated by the decoder is compared, through contrastive sampling, with the signal acquired from the same subject under ideal conditions, the cross-entropy loss is calculated, and joint training is carried out to obtain network parameters that can be used for inverse-modeling recovery of electroencephalogram signals.
A spatial attention transformer is applied in the feature-channel dimension to encode the spatial features; the feature-channel attention block assigns weights to the different channels so that the model can selectively attend to the more relevant channels, after which channel compression reduces the computational cost. The data are then sliced in the time dimension for attention conversion, yielding attention features that contain temporal associations, and the spatial features are enhanced under the guidance of the temporal features. After the space-time two-dimensional features are combined, a transformer decoder maps and reconstructs the enhanced independent parts to obtain the enhanced electroencephalogram signal.
The invention aims to overcome the defects of existing methods: it provides interference removal for raw electroencephalogram signals based on reinforcement learning and extraction, reconstruction and enhancement of the electroencephalogram space-time relationship based on the transformer, thereby achieving high-quality electroencephalogram signal recovery and improving the intelligent classification capability of electroencephalogram signals.
The denoising process of the electroencephalogram signal is selected and optimized by a reinforcement-learning mechanism, yielding a clean signal free of multiple types of irrelevant interference and improving the overall signal-to-noise ratio and the saliency of the electroencephalogram features. On this basis, to address the high inter-subject specificity of the EEG acquisition process, a space-time relation modeling method introducing a transformer attention mechanism exploits the inherent temporal continuity and spatial correlation of the EEG test signal stream to reconstruct a signal with higher spatial resolution and to expand the channel features of the EEG source data. The spatial resolution of the signal is thereby improved, which in turn improves recognition accuracy.
In traditional deep-learning processing of bioelectrical data, methods based on recurrent neural networks or long short-term memory networks cannot be parallelized and are inefficient, while methods that use a convolutional neural network as the basic framework may lose part of the timing information. At the same time, electroencephalogram signals are multi-channel and multi-dimensional. Some existing multi-class integration strategies simply stack the feature channels obtained from the multiple results of one-versus-rest processing; they largely ignore the differing importance of the feature channels, so collaborative optimization is not performed well.
Existing deep network models with multi-dimensional feature extraction and classification capability can improve the extraction and classification of target features, but they are easily affected by individual differences between subjects and by clutter interference from emotion-related electroencephalogram components. The present method uses a deep reinforcement mechanism to suppress multiple multi-dimensional interferences and highlight the parts of the electroencephalogram signal that contribute most to interpretation, and adopts an adversarial mechanism together with a space-time feature modeling model. The channel weighting after self-attention spatial coding can represent global features while also taking into account the correlation of adjacent regions, and therefore produces a more flexible and dynamic receptive field than existing methods. On the time scale, because the sequence is sliced with overlap before attention extraction, adaptive cross-domain connections along the time sequence of motor-imagery electroencephalogram signals can be achieved; on the spatial scale, local spatial correlations and the global temporal variation characteristics of brain regions can be integrated, so that the electroencephalogram signal is restored and the spatial global distribution is effectively enhanced, providing high-quality signals for subsequent research.
In the description of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples described in this specification can be combined by those skilled in the art.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. An electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling is characterized by comprising the following steps:
s1, acquiring multi-channel electroencephalogram signals;
s2, carrying out interference elimination decision based on multi-agent reinforcement learning on the obtained multichannel electroencephalogram signals to obtain clean signals;
and S3, performing detail recovery on the clean signal by adopting space-time attention modeling to obtain an enhanced electroencephalogram signal.
2. The electroencephalogram signal feature enhancement method based on reinforcement learning joint denoising and space-time relationship modeling as claimed in claim 1, wherein the step S2 includes:
s21, inputting the clean signal into a cascade network containing all denoising algorithms to perform primary denoising;
step S22, evaluating the obtained first denoising result;
step S23, according to the score of the first evaluation result, the result is fed back and the change of the network structure is guided;
s24, constructing a new denoising strategy by using the changed network structure to process the clean signal to obtain a secondary denoising result;
step S25, evaluating the obtained second denoising result;
step S26, according to the score of the second evaluation result, the result is fed back and the change of the network structure is guided;
and repeatedly executing the steps to obtain the optimal combined denoising method for the multi-channel electroencephalogram signals.
3. The electroencephalogram signal feature enhancement method based on reinforcement learning joint denoising and space-time relationship modeling according to claim 2, wherein the step S22 includes:
and step S221, evaluating the algorithm model by using a cross-validation strategy.
4. The EEG signal feature enhancement method based on reinforcement learning joint denoising and space-time relation modeling according to claim 3, wherein step S221 comprises:
s2211, dividing all electroencephalogram signals of each subject into N equal data subsets, using one subset as a test set, and forming a training set by other N-1 subsets;
step S2212, evaluating the algorithm model according to the test set and the training set to obtain a score of the data of the subject;
step S2213, the average score of all subjects is used as the final score, wherein the average score of cross-validation is used as the score of one subject.
5. The electroencephalogram signal feature enhancement method based on reinforcement learning joint denoising and space-time relationship modeling as claimed in claim 1, wherein the step S3 includes:
step S31, applying a spatial attention transformer to encode spatial features on the feature channel dimension;
step S32, slicing the data in the time dimension to perform attention conversion, and obtaining attention characteristics containing time sequence association;
step S33, under the guidance of time sequence characteristics, enhancing the space characteristics;
and step S34, acquiring the enhanced electroencephalogram signal according to the space-time two-dimensional characteristics.
6. The electroencephalogram feature enhancement method based on reinforcement learning combined denoising and space-time relationship modeling according to claim 5, wherein the step S31 includes:
step S311, independently dividing and encoding the clean signal;
s312, extracting the characteristics of the electroencephalogram signals of the plurality of independent channels after the spatial coding;
step S313, compressing the extracted feature set into feature vectors through a compression module;
and S314, carrying out multi-dimensional feature aggregation on the compressed feature vectors through activation operation of an activation module, wherein different channels correspond to respective activation factors.
7. The EEG signal feature enhancement method based on reinforcement learning joint denoising and space-time relation modeling according to claim 6, wherein the expression of the compression module is:
z_c = F_sq(X_c) = (1/(H×V)) Σ_{i=1}^{H} Σ_{j=1}^{V} X_c(i, j)
wherein X ∈ R^(H×V×C) is the feature-map output extracted by the layer preceding the module, H and V denote the two coordinate axes of the two-dimensional space, and C denotes the channel dimension; X_c ∈ R^(H×V), c ∈ [1, 2, ..., C], denotes the feature map of the c-th channel of X, and X_c(i, j) denotes the data point of X_c at position (i, j); F_sq(·) denotes feature compression along the spatial dimension, which compresses each two-dimensional feature map X_c into a real number.
8. The electroencephalogram signal feature enhancement method based on reinforcement learning joint denoising and space-time relationship modeling as claimed in claim 7, wherein the expression of the activation operation of the activation module is as follows:
S = σ(W_2 δ(W_1 Z));
S is the activation factor, σ denotes the sigmoid function operation, and Z is the spatial quantity obtained after H and V are compressed; W_1 and W_2 respectively represent the overall correlation of two different channels;
acquiring an aggregation result of the multi-dimensional features through the activation factors;
the aggregation of the multi-dimensional features is expressed as:
x̃_c = s_c · x_c
wherein, after the multi-dimensional features are aggregated, the input vector x_c becomes x̃_c; s_c is the activation factor of the c-th channel, and x_c represents the values of the input matrix in the c-th channel.
9. The EEG signal feature enhancement method based on reinforcement learning joint denoising and space-time relation modeling according to claim 5, wherein step S32 comprises:
step S321, carrying out coding marking on the position of the clean signal in a time sequence;
step S322, compressing the clean signal subjected to coding marking into one-dimensional data;
step S323, the one-dimensional data is divided into a plurality of small slices in a time dimension in an overlapping manner;
step S324, applying a multi-head attention model to the plurality of small slices to obtain the attention features containing the time sequence association.
10. The EEG signal feature enhancement method based on reinforcement learning joint denoising and space-time relation modeling according to claim 5, wherein step S34 comprises:
step S341, combining the time-space two dimensional features;
step S342, carrying out fragment division on the combined characteristics to obtain a plurality of time sequence fragments;
and S343, combining the time sequence segments to carry out flattened block mapping reconstruction on the independent channels so as to obtain the enhanced electroencephalogram signal.
CN202210289437.6A 2022-03-23 2022-03-23 Electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling Pending CN114841192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210289437.6A CN114841192A (en) 2022-03-23 2022-03-23 Electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210289437.6A CN114841192A (en) 2022-03-23 2022-03-23 Electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling

Publications (1)

Publication Number Publication Date
CN114841192A true CN114841192A (en) 2022-08-02

Family

ID=82561520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210289437.6A Pending CN114841192A (en) 2022-03-23 2022-03-23 Electroencephalogram signal feature enhancement method based on reinforcement learning combined denoising and space-time relation modeling

Country Status (1)

Country Link
CN (1) CN114841192A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115422983A (en) * 2022-11-04 2022-12-02 智慧眼科技股份有限公司 Emotion classification method and device based on brain wave signals
CN118503686A (en) * 2024-07-19 2024-08-16 徐州医科大学 Multichannel electroencephalogram signal artifact removal method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination