CN112668424A - Data augmentation method based on RBSAGAN

Data augmentation method based on RBSAGAN

Info

Publication number
CN112668424A
Authority
CN
China
Prior art keywords
data
layer
convolution
self
resblock
Prior art date
Legal status
Granted
Application number
CN202011509929.9A
Other languages
Chinese (zh)
Other versions
CN112668424B (en)
Inventor
李明爱
彭伟民
刘有军
孙炎珺
杨金福
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202011509929.9A
Publication of CN112668424A
Application granted
Publication of CN112668424B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses an electroencephalogram (EEG) data augmentation method based on RBSAGAN. Up ResBlock and Down ResBlock network structures are designed that extract features under receptive fields of different scales through two 1D convolutional layers in the trunk and one 1D convolutional layer in the branch, and that use a 1D deconvolution layer and an average pooling layer to enlarge and reduce the data dimension, respectively. A 1D Self-Attention network is designed based on the Self-Attention mechanism. This structure is independent of the distance between data at discrete time points: by computing the similarity between discrete time points in parallel, it directly obtains global temporal features, making it well suited to EEG signals rich in temporal information. The Up ResBlock and 1D Self-Attention networks form the generator of the RBSAGAN, while the Down ResBlock and 1D Self-Attention networks form its discriminator; the output loss value updates the parameters of the generator and discriminator until a Nash equilibrium is reached. The new data produced by the generator are combined with the original data into an augmented data set, which is fed to a 1D CNN for classification to evaluate the quality of the generated data.

Description

Data augmentation method based on RBSAGAN
Technical Field
The invention relates to the technical field of data augmentation for motor imagery electroencephalogram (MI-EEG) signals, and in particular to a method for generating motor imagery electroencephalogram signals using deep learning (DL). Specifically: Up ResBlock and Down ResBlock networks are designed to increase and reduce the data dimension, respectively; a 1D Self-Attention network is designed, based on the Self-Attention mechanism, to address the long-range dependency problem in long data sequences; an RBSAGAN (ResBlock Self-Attention Generative Adversarial Network) is constructed from these networks and used to generate electroencephalogram signal data; and a convolutional neural network (CNN) performs feature extraction and classification on the augmented data set to evaluate the quality of the generated data.
Background
A brain-computer interface (BCI) is a system that provides direct communication and control between the human brain and external devices. Electroencephalogram (EEG) signals record human cerebral cortical activity through non-invasive devices, and research on EEG signals for brain-computer interfaces is attracting growing attention. In recent years, deep learning methods have achieved good classification results in EEG signal recognition, but they require large amounts of data. EEG signals, however, place high demands on the acquisition environment and are costly to acquire, so the amount of data available to train a network is often insufficient, which poses difficulties for deep-learning-based EEG recognition methods. Obtaining a large amount of high-quality EEG data with rich temporal information is key to good recognition results.
Given the tremendous success of generative adversarial networks (GANs) in the field of image augmentation, GAN-based generation of EEG signals holds great promise. It is essential that the generated data carry the key characteristics contained in the EEG, but existing EEG signal augmentation methods cannot capture the relationship between the data at each discrete time point and the global information, and the temporal characteristics of the signals are not fully exploited, so the features of the generated EEG signals are blurred over long ranges. Moreover, the traditional approach of stacking multiple convolutional layers extracts only limited EEG feature information and suffers from feature loss, so the quality of the generated EEG signals is not ideal.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an RBSAGAN-based electroencephalogram data augmentation method.
(1) Up ResBlock and Down ResBlock network structures are designed that extract features under receptive fields of different scales through two 1D convolutional layers in the trunk and one 1D convolutional layer in the branch, and that use a 1D deconvolution layer and an average pooling layer to expand and reduce the data dimension, respectively.
(2) To capture the relationship between the data at each discrete time point and the global data, a 1D Self-Attention network is designed based on the Self-Attention mechanism. This structure is independent of the distance between data at discrete time points: by computing the similarity between discrete time points in parallel, it directly obtains global temporal features, making it well suited to EEG signals rich in temporal information.
(3) The Up ResBlock and 1D Self-Attention networks are used to construct the generator of the RBSAGAN, which generates new data; the Down ResBlock and 1D Self-Attention networks form the discriminator of the RBSAGAN, and the output loss value updates the parameters of the generator and discriminator until a Nash equilibrium is reached. The new data produced by the generator are combined with the original data into an augmented data set, which is fed to a 1D CNN for classification to evaluate the quality of the generated data.
The method comprises the following specific steps:
Step 1: Electroencephalogram signal preprocessing. Band-pass filter the EEG signal of each lead with a fourth-order 8-30 Hz Butterworth band-pass filter and normalize the data;
Step 2: RBSAGAN network
Step 2.1 designs the Up ResBlock network structure, used to expand the data dimension. The trunk of the Up ResBlock consists, in order, of a batch normalization (BN) layer, a 1D deconvolution layer, a 1D convolutional layer, a BN layer, and a 1D convolutional layer; the branch consists of a 1D deconvolution layer and a 1D convolutional layer. The BN layers help stabilize training against poor weight initialization of the generator; the 1D convolutional layers all have kernel size 3 and stride 1; the 1D deconvolution layers perform up-sampling, with kernel size 7 and stride n_us; the number of feature maps is n_uc; the activation functions all use PReLU to avoid sparse gradients;
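To make the block concrete, here is a minimal PyTorch sketch of the Up ResBlock. The kernel sizes, strides, feature-map counts, and PReLU activations follow the text; the padding values (chosen so that the two blocks of the embodiment below map a length-100 input to lengths 500 and 1003), the placement of the activations, and the branch convolution's parameters are not specified in the patent and are assumptions.

```python
import torch
import torch.nn as nn

class UpResBlock1d(nn.Module):
    """Up ResBlock: trunk = BN -> deconv -> conv -> BN -> conv; branch = deconv -> conv."""
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.BatchNorm1d(in_ch),
            nn.ConvTranspose1d(in_ch, out_ch, kernel_size=7, stride=stride, padding=1),
            nn.PReLU(),  # activation placement is an assumption
            nn.Conv1d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.PReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
        )
        # The branch mirrors the trunk's up-sampling so the two outputs can be added.
        self.branch = nn.Sequential(
            nn.ConvTranspose1d(in_ch, out_ch, kernel_size=7, stride=stride, padding=1),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
        )

    def forward(self, x):  # x: (batch, in_ch, length)
        return self.trunk(x) + self.branch(x)
```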
step 2.2 designs a Down ResBlock network structure for reducing the dimensionality of data. The trunk of the Down ResBlock structure consists of two 1D convolutional layers and one 1D average pooling layer, and the branch consists of a 1D convolutional layer and a 1D average pooling layer. The convolution kernel size of the 1D convolution layer is 3, and the step length is 1; the average pooling layer carries out down-sampling processing on the data, the size of a pooling window is 2, the step length is nds(ii) a The number of the characteristic graphs is n respectivelydc(ii) a The activation functions are both Leaky ReLU;
step 2.3, a 1D Self-orientation network is designed based on a Self-orientation mechanism and is used for directly calculating the similarity between discrete moments and weighting the characteristics of all moments of data, so that the key information of the data is more effectively acquired. The sizes of convolution kernels of the 1D convolution layers are all 1, the step lengths are all 1, and the number of characteristic graphs is n from left to rightc/8、ncA combination of/8 and ncAfter the output eigenvector f is converted, matrix multiplication is carried out on the eigenvector f and the eigenvector g, the similarity between discrete moments is obtained through an activation function softmax, namely an attention diagram is obtained, matrix multiplication is carried out on the attention diagram and the attention diagram h to obtain a self-attention characteristic diagram, the self-attention characteristic diagram is added with data input into the structure after being processed by a scaling factor, and finally the output dimensionality is the same as the input dimensionality;
step 2.4RBSAGAN consists of two opposing networks trying to go beyond each other. One network is a discriminator, trained to distinguish between real and spurious input data. Another network is a generator that takes a noise vector as input and attempts to generate spurious data that is not recognized as false by the discriminator. The arbiter drives the generator to generate better samples through the mingma game.
The generator of RBSAGAN consists mainly of Up ResBlock and 1D Self-Attention networks and is denoted US. A random noise vector of dimension 64 is drawn from the uniform distribution (-1, 1) as the input of the US and passed through a fully connected layer of dimension 12800; a reshape operation converts this to 100 × 128. The data are then up-sampled by two Up ResBlock networks; the 1D Self-Attention network allows the US to model the relationships between time samples more effectively; and a final 1D convolution makes the output data dimension the same as the EEG signal. The strides of the two Up ResBlocks are 5 and 2 and their numbers of feature maps are 64 and 32, respectively; the 1D Self-Attention network has 32 feature maps; the 1D convolutional layer has kernel size 4, stride 1, and a sigmoid activation function, and its number of feature maps equals the number of EEG leads.
The discriminator of RBSAGAN is similar in structure to the generator; it consists mainly of Down ResBlock and 1D Self-Attention networks and is denoted DS. The real data and generated data serve as input and pass, in order, through a 1D convolutional layer, two Down ResBlock networks, a 1D Self-Attention network, and two fully connected layers; finally, the authenticity of the data is judged according to the distance between the real and generated data distributions, which serves as the basis for optimizing the US and DS parameters. The first 1D convolutional layer has kernel size 3, stride 1, and 16 feature maps; the two Down ResBlocks have strides 5 and 2 and 64 and 128 feature maps, respectively; the 1D Self-Attention network has 128 feature maps; the two fully connected layers have dimensions 128 and 1, and the activation function is Leaky ReLU;
step 2.5 to enhance the stability of the DS during the training process of RBSAGAN, the same loss function as WGAN was used, and the US was trained 5 times per 1 training. The optimizer uses Adam with initial learning rate set to 0.0001 and momentum beta1And beta2The values are 0.1 and 0.999 respectively;
step 3 evaluation of generated data quality
The trained US generates new data, which are combined with the existing data; a 1D CNN then extracts temporal and spatial features and performs automatic classification. The designed 1D CNN consists of a 1D convolutional layer, a BN layer, a max pooling layer, and three fully connected layers. The 1D convolution has kernel size 5, stride 2, and 16 feature maps; the max pooling layer has size 2 and stride 1; the three fully connected layers have dimensions 600, 60, and n_m, where n_m is the number of EEG signal classes. Dropout is added between the fully connected layers to reduce overfitting, and the activation functions are all ReLU. Finally, softmax outputs the probability of each class.
Compared with the prior art, the invention has the following advantages:
according to the RBSAGAN designed by the invention, rich characteristics contained in the electroencephalogram signals are extracted through residual learning of the Up ResBlock and the Down ResBlock networks, the problem of incomplete characteristic information caused by a plurality of convolutional layers is solved, and the change of data dimensions is realized. The 1D Self-Attention network is more suitable for electroencephalogram signals with longer time span, so that the electroencephalogram signals can learn the characteristic information of mutual connection between different moments. The generated data and the existing data are merged and evaluated through 1D CNN, the data which are closer to the real signals generated by RBSAGAN are verified, and the problem of small electroencephalogram signal sample size is solved.
Drawings
FIG. 1 is a schematic diagram of RBSAGAN;
FIG. 2 is the RBSAGAN network architecture;
FIG. 3 is a diagram of the Up ResBlock and Down ResBlock networks;
FIG. 4 is the 1D Self-Attention network;
FIG. 5 is the 1D CNN network structure;
FIG. 6 is a flow chart of the data augmentation method based on RBSAGAN.
Detailed Description
The experiments of the present invention were performed in the following hardware environment: an Intel Xeon E5-2683 2.00 GHz CPU with 14 cores and a GeForce GTX 1070 GPU with 8 GB memory. All neural networks are implemented using the PyTorch framework.
The data used in the present invention is the public data set "BCI Competition IV 2a". The EEG signals were collected with a 22-electrode cap following the international 10-20 system at a sampling frequency of 250 Hz. Nine subjects performed four types of motor imagery tasks: left hand, right hand, foot, and tongue. Each subject took part in two days of experiments, each day comprising 288 trials, for a total of 576 trials. The EEG signals were filtered with a 0.5-100 Hz band-pass filter and a 50 Hz notch filter. In each trial, an arrow pointing left, right, up, or down (corresponding to the four tasks: left hand, right hand, tongue, or foot) appeared on the screen at 2 s and remained for 1.25 s; the subject performed the corresponding motor imagery task according to the arrow direction shown on the screen and rested from 6 s.
The following describes the present invention in further detail with reference to the accompanying drawings.
Step 1 electroencephalogram signal preprocessing
The dimensionality of the original EEG data is 576 × 1000 × 22: 576 trials in total, each collected from 22 leads with 1000 sampling points per lead. The EEG signal of each lead is band-pass filtered with a fourth-order 8-30 Hz Butterworth band-pass filter, and the data are normalized;
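A sketch of this preprocessing step using SciPy appears below. The filter order and 8-30 Hz pass band are from the text; the zero-phase `filtfilt` filtering and the per-trial, per-lead z-score normalization are assumptions, since the patent does not specify the normalization scheme.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_eeg(eeg, fs=250.0, low=8.0, high=30.0):
    """Band-pass filter each lead (4th-order Butterworth) and normalize.

    eeg: array of shape (n_trials, n_leads, n_samples), e.g. (576, 22, 1000).
    """
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=-1)      # zero-phase filtering (assumption)
    mean = filtered.mean(axis=-1, keepdims=True)
    std = filtered.std(axis=-1, keepdims=True)
    return (filtered - mean) / (std + 1e-8)      # z-score per trial and lead (assumption)
```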
step 2RBSAGAN network
Step 2.1 The schematic diagram of RBSAGAN is shown in FIG. 1. The network consists mainly of the US and the DS; the DS is trained to distinguish real input data from generated data. The US takes a noise vector as input and attempts to generate fake data that the DS does not recognize as fake; the parameters of the US and DS are updated through continuous training.
The structure of the US of RBSAGAN is shown in FIG. 2(a). A sample of dimension 64 is randomly drawn from the uniform distribution (-1, 1) as the input of the US and passed through a fully connected layer of dimension 12800; a reshape operation converts this to 100 × 128. The data are up-sampled through two Up ResBlock networks; the 1D Self-Attention network allows the US to model the relationships between time samples more effectively; and a final 1D convolution makes the output dimension the same as the EEG signal. As shown in FIG. 3(a), the trunk of the Up ResBlock consists of a BN layer, a 1D deconvolution layer, a 1D convolutional layer, a BN layer, and a 1D convolutional layer, and the branch consists of a 1D deconvolution layer and a 1D convolutional layer. The 1D convolutional layers in both Up ResBlock networks have kernel size 3 and stride 1; the 1D deconvolution layers have kernel size 7 and strides 5 and 2, respectively; the activation functions are all PReLU; the numbers of feature maps are 64 and 32, respectively. The sizes of the output data are 64 × 500 and 32 × 1003, respectively. The 1D Self-Attention network, shown in FIG. 4, weights the data at all time points so that the US can model the relationships between time points more efficiently. Its convolution kernels all have size 1 and stride 1, and the numbers of feature maps are, from left to right, 4, 4, and 32. The output feature vector f is transposed and matrix-multiplied with g; the softmax activation function yields the attention map; the attention map is matrix-multiplied with h to obtain the self-attention feature map, which is multiplied by a scaling factor and added to the input of the structure; the output dimension is 32 × 1003. Finally, the 1D convolution produces an output of dimension 22 × 1000, the same as the EEG signal, with kernel size 4, stride 1, and 22 feature maps.
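Assembling the pieces, a sketch of the US using the `UpResBlock1d` and `SelfAttention1d` modules sketched in Steps 2.1-2.3 above. The stated 100 × 128 reshape is read here as 128 channels × 100 time points, so that the subsequent shapes (64 × 500, 32 × 1003, 22 × 1000) are consistently channels × time; with the paddings assumed earlier, these shapes are reproduced exactly.

```python
class Generator(nn.Module):
    """US: fully connected -> reshape -> two Up ResBlocks -> 1D self-attention -> 1D conv."""
    def __init__(self, n_leads=22):
        super().__init__()
        self.fc = nn.Linear(64, 12800)
        self.up1 = UpResBlock1d(128, 64, stride=5)  # (128, 100) -> (64, 500)
        self.up2 = UpResBlock1d(64, 32, stride=2)   # (64, 500)  -> (32, 1003)
        self.attn = SelfAttention1d(32)             # 32 // 8 = 4 maps for f and g
        self.out = nn.Conv1d(32, n_leads, kernel_size=4, stride=1)  # -> (22, 1000)

    def forward(self, z):                  # z: (batch, 64), drawn from U(-1, 1)
        x = self.fc(z).view(-1, 128, 100)  # reshape to (channels, time)
        x = self.up2(self.up1(x))
        x = self.attn(x)
        return torch.sigmoid(self.out(x))  # sigmoid output activation, per the text
```

For example, `Generator()(torch.rand(10, 64) * 2 - 1)` yields a batch of shape (10, 22, 1000).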
The structure of the DS of RBSAGAN is shown in FIG. 2(b). The real data and generated data serve as input and pass, in order, through a 1D convolutional layer, two Down ResBlock networks, a 1D Self-Attention network, and two fully connected layers. The 1D convolutional layer has kernel size 3, stride 1, and 16 feature maps, and the output data dimension is 16 × 1000. The Down ResBlock network is shown in FIG. 3(b): the trunk consists of two 1D convolutional layers and a 1D average pooling layer, and the branch consists of a 1D convolutional layer and a 1D average pooling layer. The 1D convolutional layers of both Down ResBlock structures have kernel size 3 and stride 1; the 1D average pooling layers all have size 2, with strides 5 and 2, respectively; the numbers of feature maps are 64 and 128, and the output data dimensions are 64 × 200 and 128 × 100, respectively; the activation functions are all Leaky ReLU. The two subsequent fully connected layers have dimensions 128 and 1, with Leaky ReLU activation, and the final output serves as the basis for optimizing the US and DS parameters;
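A corresponding sketch of the DS. The input size of the first fully connected layer (128 × 100 = 12800) follows from the stated output shape of the self-attention stage; the Leaky ReLU slope is an assumption.

```python
class Discriminator(nn.Module):
    """DS: 1D conv -> two Down ResBlocks -> 1D self-attention -> two fully connected layers."""
    def __init__(self, n_leads=22):
        super().__init__()
        self.conv = nn.Conv1d(n_leads, 16, kernel_size=3, stride=1, padding=1)
        self.down1 = DownResBlock1d(16, 64, pool_stride=5)   # (16, 1000) -> (64, 200)
        self.down2 = DownResBlock1d(64, 128, pool_stride=2)  # (64, 200)  -> (128, 100)
        self.attn = SelfAttention1d(128)
        self.act = nn.LeakyReLU(0.2)
        self.fc1 = nn.Linear(128 * 100, 128)
        self.fc2 = nn.Linear(128, 1)  # unbounded critic score, as in WGAN

    def forward(self, x):             # x: (batch, 22, 1000)
        x = self.conv(x)
        x = self.down2(self.down1(x))
        x = self.attn(x).flatten(1)
        return self.fc2(self.act(self.fc1(x)))
```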
the optimizer of Step 2.2 RBSAGAN adopts Adam, the initial learning rate is 0.0001, and the momentum is beta1,β20.1 and 0.999 respectively. And obtaining a loss value according to the result output by the DS, and respectively optimizing the network parameters of the US and the DS through back propagation. In terms of loss, RBSAGAN uses the same loss function as WGAN. To keep the DS and US in a more balanced state during training, the US is trained 5 times for each 1 training of the arbiter. The number of network training times is 100, the batch size is 10, and the data of each category is used for generating new data through RBSAGAN;
step 3 evaluation of generated data quality
The generated data are merged with the existing data as the data set for the 1D CNN. The designed 1D CNN, shown in FIG. 5, consists of a 1D convolutional layer, a BN layer, a max pooling layer, and three fully connected layers. The 1D convolutional layer has kernel size 5, stride 2, and 16 feature maps; the max pooling layer has size 2 and stride 1; the three fully connected layers have dimensions 600, 60, and 4, and the activation functions are all ReLU. Dropout is added between the fully connected layers to reduce overfitting, and the probability of each class is output at the end. The results of the experiments are shown in the table below.
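A sketch of this evaluation classifier. The layer parameters come from the text; the flattened size (16 × 497 for a 1000-sample input) follows from assuming no padding in the convolution and pooling, and the dropout rate of 0.5 is an assumption.

```python
class EEGClassifier1d(nn.Module):
    """1D CNN evaluator: conv -> BN -> max-pool -> three fully connected layers (600, 60, 4)."""
    def __init__(self, n_leads=22, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 16, kernel_size=5, stride=2),  # (22, 1000) -> (16, 498)
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=1),            # -> (16, 497)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 497, 600), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(600, 60), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(60, n_classes),  # class probabilities via softmax at inference
        )

    def forward(self, x):  # x: (batch, 22, 1000)
        return self.classifier(self.features(x))
```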
TABLE 1 Classification results of the individual subjects
(Table 1 appears only as an image in the original publication; the per-subject results are not reproduced in this text.)

Claims (3)

1. A data augmentation method based on RBSAGAN, characterized by comprising the following specific steps:
Step 1: Electroencephalogram signal preprocessing. Band-pass filter the EEG signal of each lead with a fourth-order 8-30 Hz Butterworth band-pass filter and normalize the data;
step 2RBSAGAN network
Step 2.1: design the Up ResBlock network structure for expanding the data dimension; the trunk of the Up ResBlock consists, in order, of a batch normalization (BN) layer, a 1D deconvolution layer, a 1D convolutional layer, a BN layer, and a 1D convolutional layer, and the branch consists of a 1D deconvolution layer and a 1D convolutional layer; the BN layers help stabilize training against poor weight initialization of the generator; the 1D convolutional layers all have kernel size 3 and stride 1; the 1D deconvolution layers perform up-sampling, with kernel size 7 and stride n_us; the number of feature maps is n_uc; the activation functions all use PReLU to avoid sparse gradients;
step 2.2, designing a Down ResBlock network structure for reducing the dimensionality of data; the trunk of the Down ResBlock structure consists of two 1D convolution layers and a 1D average pooling layer, and the branch consists of a 1D convolution layer and a 1D average pooling layerForming; the convolution kernel size of the 1D convolution layer is 3, and the step length is 1; the average pooling layer carries out down-sampling processing on the data, the size of a pooling window is 2, the step length is nds(ii) a The number of the characteristic graphs is n respectivelydc(ii) a The activation functions are both Leaky ReLU;
step 2.3, designing a 1D Self-orientation network based on a Self-orientation mechanism for directly calculating the similarity between discrete moments and weighting the characteristics of all the moments of the data, thereby more effectively acquiring the key information of the data; the sizes of convolution kernels of the 1D convolution layers are all 1, the step lengths are all 1, and the number of characteristic graphs is n from left to rightc/8、ncA combination of/8 and ncAfter the output eigenvector f is converted, matrix multiplication is carried out on the eigenvector f and the eigenvector g, the similarity between discrete moments is obtained through an activation function softmax, namely an attention diagram is obtained, matrix multiplication is carried out on the attention diagram and the attention diagram h to obtain a self-attention characteristic diagram, the self-attention characteristic diagram is added with data input into the structure after being processed by a scaling factor, and finally the output dimensionality is the same as the input dimensionality;
step 2.4RBSAGAN consists of two opposing networks trying to go beyond each other; one network is a discriminator, which distinguishes true and false input data by training; another network is a generator that takes a noise vector as input and tries to generate spurious data that is not recognized as false by the discriminator; enabling the discriminator to drive the generator to generate better samples through the infinitesimal maximum game;
step 2.5, in order to enhance the stability of the DS of the RBSAGAN in the training process, the same loss function as the loss function of the WGAN is adopted, and 5 times of DS training are carried out every 1 time of US training; the optimizer uses Adam with initial learning rate set to 0.0001 and momentum beta1And beta2The values are 0.1 and 0.999 respectively;
step 3 evaluation of generated data quality
the trained US generates new data, which are combined with the existing data; a 1D CNN then extracts temporal and spatial features and performs automatic classification; the designed 1D CNN consists of a 1D convolutional layer, a BN layer, a max pooling layer, and three fully connected layers; the 1D convolution has kernel size 5, stride 2, and 16 feature maps; the max pooling layer has size 2 and stride 1; the three fully connected layers have dimensions 600, 60, and n_m, where n_m is the number of EEG signal classes; Dropout is added between the fully connected layers to reduce overfitting, and the activation functions are all ReLU; finally, softmax outputs the probability of each class.
2. The RBSAGAN-based data augmentation method of claim 1, wherein: the generator of the RBSAGAN consists of Up ResBlock and 1D Self-Attention networks and is denoted US; a random noise vector of dimension 64 is drawn from the uniform distribution (-1, 1) as the input of the US and passed through a fully connected layer of dimension 12800, and a reshape operation converts this to 100 × 128; the data are then up-sampled through two Up ResBlock networks, the 1D Self-Attention network allows the US to model the relationships between time samples more effectively, and a final 1D convolution makes the output data dimension the same as the EEG signal; the strides of the two Up ResBlocks are 5 and 2 and their numbers of feature maps are 64 and 32, respectively; the 1D Self-Attention network has 32 feature maps; the 1D convolutional layer has kernel size 4, stride 1, and a sigmoid activation function, and its number of feature maps equals the number of EEG leads.
3. The RBSAGAN-based data augmentation method of claim 1, wherein: the discriminator of the RBSAGAN is similar in structure to the generator, consists of Down ResBlock and 1D Self-Attention networks, and is denoted DS; the real data and generated data serve as input and pass, in order, through a 1D convolutional layer, two Down ResBlock networks, a 1D Self-Attention network, and two fully connected layers; finally, the authenticity of the data is judged according to the distance between the real and generated data distributions, which serves as the basis for optimizing the US and DS parameters; the first 1D convolutional layer has kernel size 3, stride 1, and 16 feature maps; the two Down ResBlocks have strides 5 and 2 and 64 and 128 feature maps, respectively; the 1D Self-Attention network has 128 feature maps; the two fully connected layers have dimensions 128 and 1, and the activation function is Leaky ReLU.
CN202011509929.9A 2020-12-19 2020-12-19 RBSAGAN-based data augmentation method Active CN112668424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011509929.9A CN112668424B (en) 2020-12-19 2020-12-19 RBSAGAN-based data augmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011509929.9A CN112668424B (en) 2020-12-19 2020-12-19 RBSAGAN-based data augmentation method

Publications (2)

Publication Number Publication Date
CN112668424A (en) 2021-04-16
CN112668424B (en) 2024-02-06

Family

ID=75407208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011509929.9A Active CN112668424B (en) 2020-12-19 2020-12-19 RBSAGAN-based data augmentation method

Country Status (1)

Country Link
CN (1) CN112668424B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934282A (en) * 2019-03-08 2019-06-25 哈尔滨工程大学 A kind of SAR objective classification method expanded based on SAGAN sample with auxiliary information
CN111833359A (en) * 2020-07-13 2020-10-27 中国海洋大学 Brain tumor segmentation data enhancement method based on generation of confrontation network

Also Published As

Publication number Publication date
CN112668424B (en) 2024-02-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant