CN112668424B - RBSAGAN-based data augmentation method - Google Patents
- Publication number
- CN112668424B (application CN202011509929.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- convolution
- layer
- resblock
- self
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention discloses an RBSAGAN-based electroencephalogram (EEG) data augmentation method. Up ResBlock and Down ResBlock network structures are designed that extract features under receptive fields of different scales through two 1D convolution layers in the trunk and one 1D convolution layer in the branch, and use a 1D deconvolution layer and an average pooling layer to expand and contract the data dimension, respectively. A 1D Self-Attention network is designed based on the self-attention mechanism. This structure obtains global temporal features directly by computing the similarity between data at discrete time points in parallel, regardless of their distance, and is well suited to EEG signals rich in temporal information. Down ResBlock, 1D Self-Attention and other networks form the discriminator of RBSAGAN, and the output loss value updates the parameters of the generator and the discriminator until a Nash equilibrium is reached. The new data produced by the generator are combined with the original data into an augmented data set, which is input to a 1D CNN for classification to evaluate the quality of the generated data.
Description
Technical Field
The invention relates to the technical field of motor imagery electroencephalogram (Motor Imagery Electroencephalography, MI-EEG) data augmentation, and in particular to a deep learning (DL) method for generating motor imagery EEG signals. Specifically, the method comprises: designing Up ResBlock and Down ResBlock networks to increase and decrease the data dimension respectively; designing a 1D Self-Attention network based on the self-attention mechanism to address the long-range dependence problem of long sequences; constructing RBSAGAN (ResBlock Self-Attention Generative Adversarial Networks) from these networks and generating EEG data; and performing feature extraction and classification on the expanded data set with a convolutional neural network (CNN) to evaluate the quality of the generated data.
Background
A brain-computer interface (BCI) is a system that provides direct communication and control between the human brain and external devices. Electroencephalogram (EEG) signals record the activity of the human cerebral cortex via non-invasive devices, and research on BCI-based EEG signals is attracting increasing attention. In recent years, deep learning methods have achieved good classification results in EEG recognition, but they require large amounts of data. EEG signals, however, place high demands on the acquisition environment and are costly to collect, so the amount of data available for training a network is often insufficient, which makes deep-learning-based EEG recognition difficult. Acquiring a large amount of high-quality EEG data rich in temporal information is therefore key to achieving good recognition results.
Given the great success of generative adversarial networks (GANs) in image augmentation, generating EEG signals with GANs is promising. It is essential that the generated data retain the key characteristics of EEG, but existing EEG augmentation methods cannot capture the relation between the data at each discrete time point and the global information and do not fully exploit the temporal characteristics of the signal, so the long-range characteristics of the generated EEG signals are blurred. Moreover, the traditional approach of stacking multiple convolution layers limits the feature information of the EEG signal and loses features, so the quality of the generated signals is not ideal.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an electroencephalogram signal data augmentation method based on RBSAGAN.
(1) Up ResBlock and Down ResBlock network structures are designed that extract features under receptive fields of different scales through two 1D convolution layers in the trunk and one 1D convolution layer in the branch, and use a 1D deconvolution layer and an average pooling layer to expand and contract the data dimension, respectively.
(2) To capture the relation between the data at each discrete time and the global data, a 1D Self-Attention network is designed based on a Self-Attention mechanism. The network structure can directly obtain global time sequence characteristics by calculating the similarity between the discrete time data in parallel regardless of the distance between the discrete time data, and is suitable for electroencephalogram signals with rich time sequence information.
(3) A generator for RBSAGAN is constructed from Up ResBlock, 1D Self-Attention and other networks to generate new data; Down ResBlock, 1D Self-Attention and other networks form the discriminator of RBSAGAN, and the output loss value updates the parameters of the generator and the discriminator until a Nash equilibrium is reached. The new data produced by the generator are combined with the original data into an augmented data set, which is input to a 1D CNN for classification to evaluate the quality of the generated data.
The specific steps of the invention are as follows:
Step 1, preprocessing the electroencephalogram signal: band-pass filtering the signal of each lead with an 8-30 Hz fourth-order Butterworth band-pass filter and normalizing the data;
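The preprocessing step can be sketched as follows. This is an illustrative reading: the z-score normalization per trial and lead is an assumption, since the text only says the data are normalized.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(raw, fs=250.0, low=8.0, high=30.0, order=4):
    """Band-pass filter each lead with a 4th-order 8-30 Hz Butterworth filter,
    then z-score normalize (the normalization scheme is an assumption).
    `raw` has shape (trials, samples, leads)."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    filtered = filtfilt(b, a, raw, axis=1)            # zero-phase filtering along time
    mean = filtered.mean(axis=1, keepdims=True)
    std = filtered.std(axis=1, keepdims=True) + 1e-8  # avoid division by zero
    return (filtered - mean) / std

# Dummy data shaped like a small batch of the 576 x 1000 x 22 set described later
x = preprocess_eeg(np.random.randn(4, 1000, 22))
```

The 250 Hz sampling rate matches the BCI Competition IV 2a recordings described in the detailed description.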
Step 2 RBSAGAN network
Step 2.1 the Up ResBlock network structure is designed to expand the data dimension. The trunk of the Up ResBlock structure sequentially comprises a batch normalization layer (BatchNormalization, BN), a 1D deconvolution layer, a 1D convolution layer, a BN layer and a 1D convolution layer, and the branch part comprises the 1D deconvolution layer and the 1D convolution layer. The introduction of the BN layer is beneficial to the problem of poor weight initialization of the stability generator in the training process; the convolution kernel sizes of the 1D convolution layers are 3, and the step sizes are 1; the 1D deconvolution layer has the functions of up-sampling, deconvolution kernel sizes are all 7, and step sizes are all n us The method comprises the steps of carrying out a first treatment on the surface of the The number of the characteristic diagrams is n uc The method comprises the steps of carrying out a first treatment on the surface of the PReLU is adopted as the activation function to avoid sparse gradients;
Step 2.2 The Down ResBlock network structure is designed to reduce the data dimension. The trunk of the Down ResBlock structure consists of two 1D convolution layers and one 1D average pooling layer; the branch consists of a 1D convolution layer and a 1D average pooling layer. The 1D convolution layers all have kernel size 3 and stride 1. The average pooling layers down-sample the data, with window sizes of 2 and strides of n_ds; the numbers of feature maps are n_dc. The activation functions are all LeakyReLU;
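A matching PyTorch sketch of the Down ResBlock, under the same caveats: the LeakyReLU slope and activation placement are assumptions.

```python
import torch
import torch.nn as nn

class DownResBlock(nn.Module):
    """Sketch: trunk = conv (k=3) -> conv (k=3) -> avg-pool (window 2),
    branch = conv -> avg-pool; paths summed.
    LeakyReLU slope and activation placement are assumptions."""
    def __init__(self, in_ch, out_ch, pool_stride):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, 3, stride=1, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv1d(out_ch, out_ch, 3, stride=1, padding=1),
            nn.LeakyReLU(0.2),
            nn.AvgPool1d(2, stride=pool_stride),
        )
        self.branch = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, 3, stride=1, padding=1),
            nn.AvgPool1d(2, stride=pool_stride),
        )

    def forward(self, x):            # x: (batch, in_ch, length)
        return self.trunk(x) + self.branch(x)

# 16-channel, length-1000 input, pool stride 5: length = floor((1000-2)/5) + 1 = 200
z = DownResBlock(16, 64, pool_stride=5)(torch.randn(2, 16, 1000))
```

The 64×200 output for a 16×1000 input matches the discriminator dimensions given in the detailed description.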
Step 2.3 A 1D Self-Attention network is designed based on the self-attention mechanism to compute the similarity between discrete time points directly and to weight the features at all time points of the data, acquiring its key information more effectively. The 1D convolution layers all have kernel size 1 and stride 1, with n_c/8, n_c/8 and n_c feature maps from left to right. The output feature map f is transposed and matrix-multiplied with g, and the softmax activation function then yields the similarity between discrete time points, i.e. the attention map. The attention map is matrix-multiplied with h to obtain the self-attention feature map, which is scaled by a factor and added to the data input to the structure, so the final output has the same dimension as the input;
Step 2.4 RBSAGAN consists of two opposing networks that attempt to outdo each other. One network is the discriminator, which learns through training to distinguish real input data from fake. The other is the generator, which takes a noise vector as input and attempts to generate fake data that the discriminator cannot recognize as fake. Through this minimax game the discriminator drives the generator to produce better samples.
The generator of RBSAGAN is mainly composed of Up ResBlock and 1D Self-Attention networks and is denoted US. A random noise vector of dimension 64 is drawn from the uniform distribution (-1, 1) as the input of US and connected to a fully connected layer of dimension 12800; after a reshape operation converts the dimension to 100×128, the data are up-sampled by two Up ResBlock networks, the 1D Self-Attention network lets US build more effective connections between time samples, and a final 1D convolution makes the output data dimension identical to the EEG signal. The strides of the two Up ResBlocks are 5 and 2, with 64 and 32 feature maps respectively; the 1D Self-Attention network has 32 feature maps; the final 1D convolution layer has kernel size 4, stride 1 and a sigmoid activation function, with as many feature maps as the EEG signal has leads.
The discriminator of RBSAGAN is constructed similarly to the generator; it is mainly composed of Down ResBlock and 1D Self-Attention networks and is denoted DS. Real data and generated data are taken as input and passed in turn through a 1D convolution layer, two Down ResBlock networks, a 1D Self-Attention network and two fully connected layers; the authenticity of the data is finally judged from the computed distance between the real data distribution and the generated data, which serves as the basis for optimizing the US and DS parameters. The first 1D convolution layer has kernel size 3, stride 1 and 16 feature maps; the strides of the two Down ResBlocks are 5 and 2, with 64 and 128 feature maps respectively; the 1D Self-Attention network has 128 feature maps; the two fully connected layers have dimensions 128 and 1, and the activation function is LeakyReLU;
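The dimension flow of the generator can be illustrated with a self-contained shape sketch. Plain deconvolutions stand in for the Up ResBlock units and the self-attention stage is omitted, so this only demonstrates the 64 → 12800 → 100×128 → 22×1000 pipeline; all padding values are assumptions chosen to reproduce the stated sizes.

```python
import torch
import torch.nn as nn

class GeneratorShapeSketch(nn.Module):
    """Shape-only stand-in for US: FC to 12800, reshape to (128, 100),
    two up-sampling stages (stride 5 then 2), final k=4 conv with sigmoid.
    Plain deconvs replace the Up ResBlocks; paddings are assumptions."""
    def __init__(self, n_leads=22):
        super().__init__()
        self.fc = nn.Linear(64, 12800)
        self.up1 = nn.ConvTranspose1d(128, 64, 7, stride=5, padding=1)                   # 100 -> 500
        self.up2 = nn.ConvTranspose1d(64, 32, 7, stride=2, padding=3, output_padding=1)  # 500 -> 1000
        self.out = nn.Conv1d(32, n_leads, 4, stride=1, padding=2)                        # even kernel -> 1001

    def forward(self, z):                           # z: (batch, 64) noise
        x = self.fc(z).view(-1, 128, 100)           # (channels, time) ordering assumed
        x = torch.relu(self.up1(x))
        x = torch.relu(self.up2(x))
        return torch.sigmoid(self.out(x))[..., :1000]   # crop the extra sample

fake = GeneratorShapeSketch()(torch.rand(2, 64) * 2 - 1)   # uniform(-1, 1) noise
```

Each output sample has the 22×1000 shape of one preprocessed EEG trial, and the sigmoid keeps values in (0, 1).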
Step 2.5 To enhance the stability of DS during the training of RBSAGAN, the same loss function as WGAN is used, and DS is trained 5 times for each training of US. The optimizer is Adam with an initial learning rate of 0.0001 and momentum parameters β1 and β2 of 0.1 and 0.999 respectively;
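The adversarial update schedule can be sketched with tiny stand-in networks. The WGAN weight-clipping constant 0.01 is an assumption (the text only says the WGAN loss is used), and the stand-in models are placeholders for the real US and DS.

```python
import torch
import torch.nn as nn

# Tiny stand-ins so the sketch runs; swap in the real US / DS models.
US = nn.Sequential(nn.Linear(64, 32), nn.Tanh())   # generator stand-in
DS = nn.Sequential(nn.Linear(32, 1))               # critic (discriminator) stand-in

opt_g = torch.optim.Adam(US.parameters(), lr=1e-4, betas=(0.1, 0.999))
opt_d = torch.optim.Adam(DS.parameters(), lr=1e-4, betas=(0.1, 0.999))

real = torch.randn(10, 32)                         # placeholder real batch

for step in range(2):                              # a couple of outer steps
    for _ in range(5):                             # 5 DS updates per US update
        z = torch.rand(10, 64) * 2 - 1             # uniform(-1, 1) noise
        d_loss = DS(US(z).detach()).mean() - DS(real).mean()  # WGAN critic loss
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        for p in DS.parameters():                  # WGAN weight clipping (assumed 0.01)
            p.data.clamp_(-0.01, 0.01)
    z = torch.rand(10, 64) * 2 - 1
    g_loss = -DS(US(z)).mean()                     # generator maximizes the critic score
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Training alternates until neither network can improve against the other, the Nash equilibrium mentioned in the abstract.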
step 3 evaluation of generated data quality
The trained US generates new data, which are combined with the existing data; temporal and spatial features are extracted by a 1D CNN to realize automatic classification. The designed 1D CNN consists of a 1D convolution layer, a BN layer, a max pooling layer and three fully connected layers. The 1D convolution has kernel size 5, stride 2 and 16 feature maps; the max pooling layer has size 2 and stride 1; the three fully connected layers have dimensions 600, 60 and n_m, where n_m is the number of EEG signal classes. Dropout is added between the fully connected layers to reduce overfitting, and the activation functions are all ReLU. Finally, softmax outputs the probability of each class.
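A sketch of this evaluation CNN follows. The flattened feature size is inferred at construction time, and the Dropout rate of 0.5 is an assumption.

```python
import torch
import torch.nn as nn

class EvalCNN(nn.Module):
    """Sketch of the evaluation 1D CNN: conv (k=5, s=2, 16 maps) + BN + ReLU,
    max-pool (2, stride 1), then FC 600 -> 60 -> n_m with ReLU and Dropout
    (rate 0.5 is an assumption), ending in softmax class probabilities."""
    def __init__(self, n_m=4, n_leads=22, n_samples=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 16, 5, stride=2),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(2, stride=1),
        )
        with torch.no_grad():                      # probe to infer the flatten size
            flat = self.features(torch.zeros(1, n_leads, n_samples)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 600), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(600, 60), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(60, n_m),
        )

    def forward(self, x):                          # x: (batch, leads, samples)
        return torch.softmax(self.classifier(self.features(x)), dim=1)

model = EvalCNN()
model.eval()
probs = model(torch.randn(3, 22, 1000))            # rows are class probabilities
```

With n_m = 4 this matches the four motor imagery classes of the BCI Competition IV 2a data set used in the experiments.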
Compared with the prior art, the invention has the following advantages:
the RBSAGAN designed by the invention extracts rich features contained in the electroencephalogram signals through residual error learning of the Up Resblock and Down Resblock networks, overcomes the problem of incomplete feature information caused by a plurality of convolution layers, and realizes the change of data dimension. The 1D Self-Attention network is more suitable for the electroencephalogram signals with longer time span, so that the electroencephalogram signals learn the characteristic information of the mutual connection between different moments. The generated data is combined with the existing data and is evaluated through 1D CNN, and the RBSAGAN is verified to generate data which is closer to a real signal, so that the problem of small electroencephalogram signal sample size is solved.
Drawings
FIG. 1 is a schematic diagram of RBSAGAN;
FIG. 2 is a RBSAGAN network architecture;
FIG. 3 is an Up ResBlock and Down ResBlock network;
FIG. 4 is a 1D Self-attach network;
FIG. 5 is a 1D CNN network architecture;
fig. 6 is a flow chart of a method of data augmentation based on RBSAGAN.
Detailed Description
The experiments of the present invention were performed in the following hardware environment: an Intel Xeon E5-2683 2.00 GHz CPU and a GeForce 1070 GPU with 8 GB of memory. All neural networks are implemented with the PyTorch framework.
The data used in the present invention are the "BCI Competition IV 2a" public data set. The EEG signals were collected through a 22-lead cap following the 10-20 system at a sampling frequency of 250 Hz. Nine subjects performed four classes of motor imagery tasks: left hand, right hand, foot and tongue. Each subject took part in two days of experiments with 288 trials per day, 576 trials in total. The EEG signal was filtered with a 0.5-100 Hz band-pass filter and a 50 Hz notch filter. In each trial a cue arrow appeared at 2 s, pointing left, right, up or down (corresponding to the left hand, right hand, tongue or foot task) and staying for 1.25 s; the subject performed the corresponding motor imagery task according to the arrow direction shown on the screen and rested at 6 s.
The invention is described in further detail below with reference to the accompanying drawings.
Step 1 electroencephalogram signal preprocessing
The original EEG data have dimensions 576×1000×22: 576 trials, each acquired from 22 leads with 1000 sampling points per lead. The signal of each lead is band-pass filtered with an 8-30 Hz fourth-order Butterworth band-pass filter and the data are normalized;
Step 2 RBSAGAN network
Step 2.1 The schematic of RBSAGAN is shown in fig. 1. The network consists mainly of US and DS; DS is trained to distinguish real input data from generated data, while US takes a noise vector as input and attempts to generate fake data that DS cannot recognize as fake. The parameters of US and DS are updated through continued training.
As shown in FIG. 2 (a), a sample of dimension 64 is randomly drawn from the uniform distribution (-1, 1) as the input of US and connected to a fully connected layer of dimension 12800; after a reshape operation converts the dimension to 100×128, the data are up-sampled by two Up ResBlock networks, the 1D Self-Attention network lets US build more effective connections between time samples, and a final 1D convolution makes the output data dimension identical to the EEG signal. The Up ResBlock structure is shown in fig. 3 (a): its trunk comprises a BN layer, a 1D deconvolution layer, a 1D convolution layer, a BN layer and a 1D convolution layer, and its branch consists of a 1D deconvolution layer and a 1D convolution layer. The 1D convolution layers in the two Up ResBlock networks all have kernel size 3 and stride 1; the 1D deconvolution layers have kernel size 7 and strides 5 and 2 respectively; the activation functions are PReLU; the numbers of feature maps are 64 and 32 respectively. The sizes of the output data are 64×500 and 32×1003 respectively. As shown in fig. 4, the 1D Self-Attention network weights the data at all time points, enabling US to build the connections between time points more effectively.
The convolution kernels all have size 1 and stride 1, with 4, 4 and 32 feature maps from left to right. The output feature map f is transposed and matrix-multiplied with g, and the softmax activation function yields the attention map; the attention map is matrix-multiplied with h to obtain the self-attention feature map, which is multiplied by a scaling factor and added to the input of the structure, giving a final output of dimension 32×1003. A final 1D convolution with kernel size 4, stride 1 and 22 feature maps brings the output to dimension 22×1000, the same as the EEG signal.
The DS structure of RBSAGAN is shown in FIG. 2 (b). Real data and generated data are taken as input and passed in turn through a 1D convolution layer, two Down ResBlock networks, a 1D Self-Attention network and two fully connected layers. The 1D convolution layer has kernel size 3, stride 1 and 16 feature maps, and the output data dimension is 16×1000. The Down ResBlock network is shown in fig. 3 (b): the trunk consists of two 1D convolution layers and a 1D average pooling layer, and the branch consists of a 1D convolution layer and a 1D average pooling layer. The 1D convolution layers of the two Down ResBlock structures all have kernel size 3 and stride 1; the 1D average pooling layers have size 2 and strides 5 and 2 respectively; the numbers of feature maps are 64 and 128 respectively, with output data dimensions 64×200 and 128×100; the activation functions are all LeakyReLU. The two subsequent fully connected layers have dimensions 128 and 1, the activation function is LeakyReLU, and the final output serves as the basis for optimizing the parameters of US and DS;
the Step 2.2 RBSAGAN optimizer adopts Adam, the initial learning rate is 0.0001, and the momentum beta is 1 ,β 2 0.1 and 0.999, respectively. And obtaining a loss value according to the result of DS output, and respectively optimizing the network parameters of the US and the DS through back propagation. The RBSAGAN uses the same loss function as the WGAN in terms of losses. In order to keep DS and US in a balanced state during training, the US trains 5 times every 1 time the arbiter is trained. The number of network training is 100, the batch size is 10, and new data are generated by each category of data through RBSAGAN;
step 3 evaluation of generated data quality
The generated data are combined with the existing data to form the data set of the 1D CNN. As shown in figure 5, the designed 1D CNN consists of a 1D convolution layer, a BN layer, a max pooling layer and three fully connected layers. The 1D convolution layer has kernel size 5, stride 2 and 16 feature maps; the max pooling layer has size 2 and stride 1; the three fully connected layers have dimensions 600, 60 and 4 respectively, and the activation functions are all ReLU. Dropout is added between the fully connected layers to reduce overfitting, and the probability of each class is finally output. The experimental results are shown in the following table.
TABLE 1 results of individual subject classification
Claims (3)
1. An RBSAGAN-based data augmentation method, characterized by comprising the following specific steps:
Step 1, preprocessing the electroencephalogram signal: band-pass filtering the signal of each lead with an 8-30 Hz fourth-order Butterworth band-pass filter and normalizing the data;
Step 2 RBSAGAN network
Step 2.1 designs an Up ResBlock network structure for expanding the dimension of data; the trunk of the Up ResBlock structure sequentially comprises a batch normalization layer BN, a 1D deconvolution layer, a 1D convolution layer, a BN layer and a 1D convolution layer, and the branch part comprises the 1D deconvolution layer and the 1D convolution layer; the introduction of the BN layer is beneficial to the problem of poor weight initialization of the stability generator in the training process; the convolution kernel sizes of the 1D convolution layers are 3, and the step sizes are 1; the 1D deconvolution layer has the functions of up-sampling, deconvolution kernel sizes are all 7, and step sizes are all n us The method comprises the steps of carrying out a first treatment on the surface of the The number of the characteristic diagrams is n uc The method comprises the steps of carrying out a first treatment on the surface of the PReLU is adopted as the activation function to avoid sparse gradients;
Step 2.2 A Down ResBlock network structure is designed to reduce the data dimension; the trunk of the Down ResBlock structure consists of two 1D convolution layers and one 1D average pooling layer, and the branch consists of a 1D convolution layer and a 1D average pooling layer; the 1D convolution layers all have kernel size 3 and stride 1; the average pooling layers down-sample the data, with window sizes of 2 and strides of n_ds; the numbers of feature maps are n_dc; the activation functions are all LeakyReLU;
Step 2.3 A 1D Self-Attention network is designed based on the self-attention mechanism to compute the similarity between discrete time points directly and to weight the features at all time points of the data, acquiring its key information more effectively; the 1D convolution layers all have kernel size 1 and stride 1, with n_c/8, n_c/8 and n_c feature maps from left to right; the output feature map f is transposed and matrix-multiplied with g, and the softmax activation function yields the similarity between discrete time points, i.e. the attention map; the attention map is matrix-multiplied with h to obtain the self-attention feature map, which is scaled by a factor and added to the data input to the structure, so the final output has the same dimension as the input;
Step 2.4 RBSAGAN consists of two opposing networks that attempt to outdo each other; one network is the discriminator, which learns through training to distinguish real input data from fake; the other is the generator, which takes a noise vector as input and attempts to generate fake data that the discriminator cannot recognize as fake; through this minimax game the discriminator drives the generator to produce better samples;
Step 2.5 To enhance the stability of DS during the training of RBSAGAN, the same loss function as WGAN is adopted, and DS is trained 5 times for each training of US; the optimizer is Adam with an initial learning rate of 0.0001 and momentum parameters β1 and β2 of 0.1 and 0.999 respectively;
step 3 evaluation of generated data quality
The trained US generates new data, which are combined with the existing data; temporal and spatial features are extracted by a 1D CNN to realize automatic classification; the designed 1D CNN consists of a 1D convolution layer, a BN layer, a max pooling layer and three fully connected layers; the 1D convolution has kernel size 5, stride 2 and 16 feature maps; the max pooling layer has size 2 and stride 1; the three fully connected layers have dimensions 600, 60 and n_m, where n_m is the number of EEG signal classes; Dropout is added between the fully connected layers to reduce overfitting, and the activation functions are all ReLU; finally, softmax outputs the probability of each class.
2. The RBSAGAN-based data augmentation method of claim 1, wherein: the generator of RBSAGAN is composed of Up ResBlock and 1D Self-Attention networks and is denoted US; a random noise vector of dimension 64 is drawn from the uniform distribution (-1, 1) as the input of US and connected to a fully connected layer of dimension 12800; after a reshape operation converts the dimension to 100×128, the data are up-sampled by two Up ResBlock networks, the 1D Self-Attention network lets US build more effective connections between time samples, and a final 1D convolution makes the output data dimension identical to the EEG signal; the strides of the two Up ResBlocks are 5 and 2, with 64 and 32 feature maps respectively; the 1D Self-Attention network has 32 feature maps; the 1D convolution layer has kernel size 4, stride 1 and a sigmoid activation function, with as many feature maps as the EEG signal has leads.
3. The RBSAGAN-based data augmentation method of claim 1, wherein: the discriminator of RBSAGAN is structured similarly to the generator; it is composed of Down ResBlock and 1D Self-Attention networks and is denoted DS; real data and generated data are taken as input and passed in turn through a 1D convolution layer, two Down ResBlock networks, a 1D Self-Attention network and two fully connected layers, and the authenticity of the data is finally judged from the computed distance between the real data distribution and the generated data, which serves as the basis for optimizing the US and DS parameters; the first 1D convolution layer has kernel size 3, stride 1 and 16 feature maps; the strides of the two Down ResBlocks are 5 and 2, with 64 and 128 feature maps respectively; the 1D Self-Attention network has 128 feature maps; the two fully connected layers have dimensions 128 and 1, and the activation function is LeakyReLU.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011509929.9A CN112668424B (en) | 2020-12-19 | 2020-12-19 | RBSAGAN-based data augmentation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112668424A (en) | 2021-04-16 |
CN112668424B (en) | 2024-02-06 |
Family
ID=75407208
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011509929.9A Active CN112668424B (en) | 2020-12-19 | 2020-12-19 | RBSAGAN-based data augmentation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112668424B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109934282A (en) * | 2019-03-08 | 2019-06-25 | Harbin Engineering University | SAR target classification method based on SAGAN sample expansion with auxiliary information
CN111833359A (en) * | 2020-07-13 | 2020-10-27 | Ocean University of China | Brain tumor segmentation data enhancement method based on a generative adversarial network
Also Published As
Publication number | Publication date |
---|---|
CN112668424A (en) | 2021-04-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112308158B (en) | Multi-source domain adaptation model and method based on partial feature alignment | |
CN110069958B (en) | Rapid electroencephalogram signal recognition method using a dense deep convolutional neural network | |
CN108615010B (en) | Facial expression recognition method based on parallel convolutional neural network feature map fusion | |
CN109726751B (en) | Method for electroencephalogram recognition based on a deep convolutional neural network | |
CN110163180A (en) | Motor imagery electroencephalogram data classification method and system | |
CN111950455B (en) | Motor imagery electroencephalogram feature recognition method based on an LFFCNN-GRU model | |
CN110353675A (en) | EEG signal emotion recognition method and device based on image generation | |
CN107194426A (en) | Image recognition method based on spiking neural networks | |
CN113627401A (en) | Myoelectric gesture recognition method using a feature pyramid network fused with a dual-attention mechanism | |
CN110717423B (en) | Training method and device for an emotion recognition model of elderly facial expressions | |
CN110399846A (en) | Gesture recognition method based on multichannel electromyography signal correlation | |
CN113934302B (en) | Myoelectric gesture recognition method based on SeNet and a gated temporal convolutional network | |
CN110097029B (en) | Identity authentication method based on highway network multi-view gait recognition | |
CN110503082A (en) | Deep-learning-based model training method and related apparatus | |
CN111694977A (en) | Vehicle image retrieval method based on data enhancement | |
CN113116361A (en) | Sleep staging method based on single-lead electroencephalogram | |
CN112668486A (en) | Method, device and carrier for facial expression recognition with a pre-activated residual depthwise separable convolutional network | |
CN113180692A (en) | Electroencephalogram signal classification and recognition method based on feature fusion and an attention mechanism | |
CN115049814A (en) | Intelligent eye-protection lamp adjustment method using a neural network model | |
CN112800882A (en) | Masked face pose classification method based on a weighted dual-stream residual network | |
CN111428601B (en) | P300 signal recognition method, device and storage medium based on MS-CNN | |
CN113128384A (en) | Key brain-computer interface software techniques for a stroke rehabilitation system based on deep learning | |
CN113076878A (en) | Physique identification method based on an attention-mechanism convolutional network structure | |
CN114863572B (en) | Myoelectric gesture recognition method with multichannel heterogeneous sensors | |
CN112668424B (en) | RBSAGAN-based data augmentation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||