CN115310585A - High-dimensional neural signal dimension reduction method based on self-encoder and application - Google Patents
- Publication number: CN115310585A
- Application number: CN202210785759.XA
- Authority: CN (China)
- Prior art keywords: neural, signals, dimensional, encoder, data
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention provides a high-dimensional neural signal dimension reduction method based on a self-encoder (autoencoder) and an application thereof. The method combines the powerful feature extraction capability of a convolutional neural network with the temporal-sequence processing capability of long short-term memory (LSTM); it reduces the dimension of neural data through the self-encoder and extracts behavior-related information for subsequent analysis and decoding. Compared with existing neural signal dimension reduction algorithms, the method retains more behavior-related neural information in a lower-dimensional feature space and shows clear advantages in decoding motion trajectories and analyzing behavioral features. In addition, visualization of the dimension-reduced neural features shows that the method achieves good dimension reduction performance and can be applied to the analysis, decoding, visualization, and other processing of various neural signals.
Description
Technical Field
The invention belongs to the field of signal processing in neuroscience and neural engineering, and relates to a high-dimensional neural signal dimension reduction method based on a self-encoder (autoencoder) and an application thereof.
Background
As the scale of neural recordings grows, the performance of implantable brain-computer interfaces is constrained by high-dimensional neural features. For continuous signals in particular (such as local field potential (LFP) or electrocorticography (ECoG) signals), the feature dimension grows rapidly as frequency bands are added, posing a major challenge for subsequent neural feature decoding. Dimension reduction, which retains and highlights the key information of the original features in a low-dimensional representation, can remove redundant information from neural features and is therefore a key processing step before decoding.
At present, various algorithms are applied to dimensionality reduction of neural signals, including classical principal component analysis (PCA), the more recent preferential subspace identification (PSID), and latent factor analysis via dynamical systems (LFADS). However, each has drawbacks. PCA captures all sources of variance in the raw data, including recording noise and neural information irrelevant to the behavioral task, which degrades the interpretability of its principal components. PSID is less flexible: the target dimension of the reduction is limited by boundary parameters in the algorithm, and training time grows exponentially as those parameters increase. LFADS applies only to spike signals and is therefore unsuitable for continuous-signal brain-computer interfaces.
Disclosure of Invention
The invention aims to provide a high-dimensional neural signal dimension reduction method based on a self-encoder. The method combines the powerful feature extraction capability of a convolutional neural network with the temporal-sequence processing capability of long short-term memory (LSTM); it reduces the dimension of neural data through the self-encoder, extracts behavior-related neural information, and maps it into a low-dimensional subspace, thereby enabling visual characterization of neural population activity patterns and supporting subsequent neural decoding.
The invention is realized by the following steps:
(1) Performing time-frequency analysis on the original continuous neural signals to extract time-frequency characteristics;
(2) Aligning the neural features and the corresponding behavior signals to obtain a complete training data set;
(3) Constructing a self-encoder neural network that contains a condition module and reduces the dimension of the neural features, and training the network with a large number of samples to obtain a neural signal dimension reduction model;
(4) Preprocessing the neural signal under test and feeding it into the dimension reduction model to obtain a low-dimensional neural representation of the signal;
(5) The resulting low-dimensional neural characterization is used for subsequent analysis or decoding work.
Further, the continuous neural signals in step (1) are local field potential (LFP) or electrocorticography (ECoG) signals recorded under a specific task paradigm; the specific task refers to a motion-related brain-computer interface task paradigm, such as a two-dimensional cursor task or a three-dimensional food tracking task, used to elicit specific neural activity. The neural signals are preprocessed as follows: the data are first filtered to remove baseline drift and power-line interference, and time-frequency band features are then extracted for several frequency bands (e.g., 0.3-5, 5-8, 8-13, 13-30, 30-70, 70-200, and 200-400 Hz) using multi-taper (multi-window) power spectrum estimation.
Further, in step (2), the neural signals and behavior signals under the task paradigm are acquired synchronously and time-stamped; after the time stamps are aligned, both are down-sampled to a specific frequency (e.g., 10 Hz). The behavior signals vary with the task paradigm and may be continuous time-series signals or discrete label signals.
Further, the self-encoder neural network in step (3) comprises a dimension reduction module consisting of an encoder and a decoder, and a condition module consisting of a fully connected layer. The encoder compresses the input data into low-dimensional latent features and supplies them to the decoder and the condition module; the decoder reconstructs the original neural features from the low-dimensional features; and the condition module maps the low-dimensional features to the behavior data, acting as a regularizer during network training.
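For concreteness, the following is a minimal PyTorch sketch of this three-module layout. The four-convolution-plus-one-LSTM stacking follows the embodiment described below; all layer widths, kernel sizes, and the default dimensions (672 input features, i.e., 96 channels × 7 bands) are illustrative assumptions, not the patented parameterization.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses (batch, features, time) neural data to a low-dim latent sequence."""
    def __init__(self, in_feats=672, latent_dim=10, hidden=64):
        super().__init__()
        # Four 1-D convolution layers over the time axis (per the embodiment below).
        self.cnn = nn.Sequential(
            nn.Conv1d(in_feats, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        # One LSTM layer captures temporal structure; a linear head maps to the latent.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, x):                    # x: (batch, in_feats, time)
        h = self.cnn(x).permute(0, 2, 1)     # -> (batch, time, hidden)
        h, _ = self.lstm(h)
        return self.head(h)                  # z: (batch, time, latent_dim)

class Decoder(nn.Module):
    """Reconstructs the original neural features from the latent sequence."""
    def __init__(self, out_feats=672, latent_dim=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(hidden, 64, 3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 256, 3, padding=1), nn.ReLU(),
            nn.Conv1d(256, out_feats, 3, padding=1),
        )

    def forward(self, z):                    # z: (batch, time, latent_dim)
        h, _ = self.lstm(z)
        return self.cnn(h.permute(0, 2, 1))  # x_hat: (batch, out_feats, time)

class ConditionModule(nn.Module):
    """Fully connected head that decodes behavior from the latent (the regularizer)."""
    def __init__(self, latent_dim=10, behavior_dim=2):
        super().__init__()
        self.fc = nn.Linear(latent_dim, behavior_dim)

    def forward(self, z):
        return self.fc(z)                    # y_hat: (batch, time, behavior_dim)
```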
Further, the specific process of training the self-encoder in step (3) is as follows: the training samples are fed into the network one by one to obtain reconstructed neural data and decoded behavior data; the reconstruction loss between the reconstructed and real neural data and the decoding loss between the decoded and real behavior data are computed respectively; the network parameters are updated by back-propagation with gradient descent using the sum of the two losses; and this process is iterated until the losses converge.
Further, the loss functions are expressed as follows:

$$\mathcal{L}_{rec}(\theta_{AE}) = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \psi(\phi(X))_i\right)^2$$

$$\mathcal{L}_{dec}(\theta_{CE}) = \frac{1}{m}\sum_{i=1}^{m}\left(Y_i - \omega(\phi(X))_i\right)^2$$

$$\mathcal{L} = \mathcal{L}_{rec}(\theta_{AE}) + \lambda\,\mathcal{L}_{dec}(\theta_{CE})$$

where $\theta_{AE}$ are the trainable parameters of the self-encoder, $\mathcal{L}_{rec}$ is the reconstruction-term loss function, $\theta_{CE}$ are the trainable parameters of the encoder and the condition module, $\mathcal{L}_{dec}$ is the decoding-term loss function, $\lambda$ is the weight of the decoding-term loss function, $n$ is the dimension of the original neural features, $X_i$ is the $i$-th dimension of the input raw neural features, $m$ is the feature dimension of the behavior data, $Y_i$ is the $i$-th dimension of the behavior data, and $\psi$, $\phi$, and $\omega$ denote the decoder, the encoder, and the condition module, respectively.
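As a sketch, the combined objective can be computed as below, assuming (as in the formulas above) mean-squared-error forms for both terms; `encoder`, `decoder`, and `condition` are module instances as in the architecture sketch above.

```python
import torch.nn.functional as F

def total_loss(x, y, encoder, decoder, condition, lam=1.0):
    """L = L_rec + lambda * L_dec, mirroring the formulas above (MSE forms assumed)."""
    z = encoder(x)                # phi(x): low-dimensional latent features
    x_hat = decoder(z)            # psi(z): reconstructed neural features
    y_hat = condition(z)          # omega(z): behavior decoded from the latent
    l_rec = F.mse_loss(x_hat, x)  # reconstruction term (trains theta_AE)
    l_dec = F.mse_loss(y_hat, y)  # decoding/condition term (trains theta_CE)
    return l_rec + lam * l_dec
```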
Another aim of the invention is to provide applications of the method in the analysis, decoding, visualization, and other processing of various types of neural signals (including local field potential signals, cortical electroencephalogram (ECoG) signals, and the like).
The invention develops a self-encoder model to reduce the dimension of high-dimensional continuous neural signals obtained in neuroscience experiments. The model is built on convolutional neural networks (CNN) and long short-term memory (LSTM); their respective strengths in feature extraction and temporal-sequence processing allow the model to reduce the dimension of high-dimensional continuous neural signals while retaining the behavior-related information they contain, providing an effective dimension reduction method for the dynamic characterization of neural data from neurophysiological experiments.
The invention provides a self-encoder-based dimension reduction method for multi-channel continuous neural signals that efficiently extracts the components related to a behavioral task from high-dimensional neural features and maps them into a low-dimensional latent subspace. The method can serve as a front-end signal processing step for high-precision decoding, or as a new analysis tool for studying the distribution and dynamic characteristics of neural signals. Compared with methods such as PCA and PSID, it handles large-scale multi-channel continuous neural signals and effectively extracts behavior-related neural features, and it outperforms PCA and similar dimension reduction methods in clustering accuracy and decoding precision. Considering the number of effective dimensions after reduction, it also has a clear computational-complexity advantage over methods such as PSID and LFADS. Compared with existing neural signal dimension reduction algorithms, it retains more behavior-related neural information in a lower-dimensional feature space and shows more pronounced advantages in decoding motion trajectories and analyzing behavioral features. In addition, quantitative evaluation of the clustering accuracy of the dimension-reduced features shows that this high-dimensional neural signal dimension reduction method, based on a self-encoder built from convolutional and long short-term memory networks, is superior to other dimension reduction methods.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a network structure diagram of the high-dimensional neural signal dimension reduction method based on the long short-term memory self-encoder.
FIG. 3 shows neural signals from a monkey two-dimensional center-out cursor experiment visualized with the dimension-reduced features of the invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
Example 1
The invention relates to a high-dimensional neural signal dimension reduction method based on a self-encoder built from convolutional and long short-term memory networks, comprising the following steps:
1. Design a corresponding experimental paradigm according to the experimental goal; both neural and behavioral signals are recorded under this paradigm. The recorded neural signals are preprocessed as follows: first, a high-pass filter with a 0.3 Hz cutoff removes the DC component; next, notch filters centered at 50 Hz and its harmonics remove power-line interference; then a common average reference removes noise common to all channels; finally, multi-taper power spectrum estimation extracts time-frequency band features for the bands 0.3-5 Hz, 5-8 Hz, 8-13 Hz, 13-30 Hz, 30-70 Hz, 70-200 Hz, and 200-400 Hz.
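A sketch of this preprocessing chain using SciPy, under stated assumptions: filter orders and the notch Q are illustrative, only the first few harmonics are notched, and `multitaper_bandpower` is a hypothetical helper computing band power for a single analysis window (sliding it over successive windows yields the time-frequency features).

```python
import numpy as np
from scipy import signal

FS = 2000                                   # raw sampling rate (Hz), per the text
BANDS = [(0.3, 5), (5, 8), (8, 13), (13, 30), (30, 70), (70, 200), (200, 400)]

def preprocess(raw):                        # raw: (channels, samples)
    # 1) 0.3 Hz high-pass to remove the DC component / baseline drift
    sos = signal.butter(4, 0.3, btype='highpass', fs=FS, output='sos')
    x = signal.sosfiltfilt(sos, raw, axis=-1)
    # 2) 50 Hz notch plus harmonics to remove power-line interference
    for f0 in (50, 100, 150):               # first few harmonics shown; extend as needed
        b, a = signal.iirnotch(f0, Q=30, fs=FS)
        x = signal.filtfilt(b, a, x, axis=-1)
    # 3) common average reference: subtract the mean across channels
    return x - x.mean(axis=0, keepdims=True)

def multitaper_bandpower(x, nw=3):
    """Band power per channel for one window via DPSS (multitaper) estimation."""
    n = x.shape[-1]
    tapers = signal.windows.dpss(n, nw, Kmax=2 * nw - 1)   # (K, n) DPSS tapers
    freqs = np.fft.rfftfreq(n, 1 / FS)
    # Average the power of the tapered FFTs over tapers
    spec = np.mean(np.abs(np.fft.rfft(x[:, None, :] * tapers, axis=-1)) ** 2, axis=1)
    return np.stack([spec[:, (freqs >= lo) & (freqs < hi)].sum(axis=-1)
                     for lo, hi in BANDS], axis=-1)        # (channels, 7)
```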
2. Align the neural data and the behavioral data according to their timestamps; then set the interval between sampling points to 100 ms. The last 5 minutes of data serve as the test set, and the remaining data as the training and validation sets. For example, with a 96-channel Utah array electrode at an original sampling rate of 2 kHz, the neural feature dimension per second is 2000 × 96 × 7; after this step the time dimension of the data is reduced from 2000 to 100, giving 100 × 96 × 7 of neural features per second.
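The alignment and hold-out split of this step might look like the following sketch; the function name, the next-sample alignment, and the 100 ms default bin are illustrative choices, not the patented procedure.

```python
import numpy as np

def align_and_split(neural_t, neural_x, behav_t, behav_y, bin_s=0.1, test_s=300):
    """Align neural features and behavior onto a shared time grid, then hold out
    the final `test_s` seconds as the test set."""
    t0 = max(neural_t[0], behav_t[0])       # overlap of the two recordings
    t1 = min(neural_t[-1], behav_t[-1])
    grid = np.arange(t0, t1, bin_s)
    # Align each grid point to the next recorded sample (a simple approximation)
    xi = neural_x[np.searchsorted(neural_t, grid)]
    yi = behav_y[np.searchsorted(behav_t, grid)]
    n_test = int(test_s / bin_s)
    return (xi[:-n_test], yi[:-n_test]), (xi[-n_test:], yi[-n_test:])
```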
3. Reduce the dimension of the data obtained in step 2 with the self-encoder; the structure of the network is shown in FIG. 2. The optimal reduced dimension is determined by five-fold cross-validation over the search set A = {2, 3, 4, 6, 8, 10, 20} (a sketch of this search is given after the module equations below). The set A is sampled densely below 20 dimensions because the motion-related dimensions of neural features typically lie within 20. After dimension reduction, the neural features per second change from the input 100 × 96 × 7 to 100 × N, where N is the dimension of the optimal low-dimensional space. The compression of the neural features by the encoder can be expressed by the following formula:
$$z = \phi(x)$$
where φ denotes the encoder, composed of four CNN layers and one LSTM layer, x denotes the input neural features, and z denotes the neural features after dimension reduction. During training, the decoder reconstructs the original neural data from the dimension-reduced latent variables:
$$\hat{x} = \psi(z)$$

where ψ is the decoder, likewise composed of four CNN layers and one LSTM layer, and $\hat{x}$ represents the reconstructed neural data.
In addition to the decoder, during training the dimension-reduced neural feature z is also decoded into behavior data by the condition module; this mapping from the low-dimensional neural features to the behavior data is

$$\hat{y} = \omega(z)$$

where ω denotes the condition module and $\hat{y}$ the decoded behavior data.
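Returning to the dimension search of step 3, a sketch of the five-fold cross-validation over the candidate set A is given below; `build_and_eval` is an assumed callable that trains the self-encoder at a given latent dimension and returns a validation loss.

```python
from sklearn.model_selection import KFold

CANDIDATE_DIMS = [2, 3, 4, 6, 8, 10, 20]     # the search set A from the text

def select_latent_dim(X, Y, build_and_eval):
    """Five-fold CV over candidate latent dimensions; returns the best dimension."""
    scores = {}
    for d in CANDIDATE_DIMS:
        fold_losses = []
        for train_idx, val_idx in KFold(n_splits=5).split(X):
            fold_losses.append(build_and_eval(d, X[train_idx], Y[train_idx],
                                              X[val_idx], Y[val_idx]))
        scores[d] = sum(fold_losses) / len(fold_losses)
    return min(scores, key=scores.get)       # dimension with the lowest mean loss
```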
During training, the invention uses the following loss function:

$$\mathcal{L} = \mathcal{L}_{rec} + \lambda\,\mathcal{L}_{dec}$$

where $\mathcal{L}$ is the global loss function; $\mathcal{L}_{rec}$ is the loss for reconstructing the neural data, ensuring that the low-dimensional representation obtained by the model can reconstruct the original data; $\mathcal{L}_{dec}$ is the loss for decoding the behavior data, ensuring that the low-dimensional representation contains sufficient behavior-related information; and $\lambda$ is the weight of the decoding loss. The two loss terms are computed as follows:

$$\mathcal{L}_{rec}(\theta_{AE}) = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \psi(\phi(X))_i\right)^2$$

$$\mathcal{L}_{dec}(\theta_{CE}) = \frac{1}{m}\sum_{i=1}^{m}\left(Y_i - \omega(\phi(X))_i\right)^2$$

where $\theta_{AE}$ are the trainable parameters of the self-encoder, $\theta_{CE}$ are the trainable parameters of the encoder and the condition module, $n$ is the dimension of the original neural features, $X_i$ is the $i$-th dimension of the input raw neural data, $m$ is the feature dimension of the behavior data, and $Y_i$ is the $i$-th dimension of the behavior data.
4. The algorithm optimizes the loss function with an Adam optimizer to obtain the optimal model, achieving dimension reduction of the high-dimensional neural data. During training, a learning rate of 0.01 was determined by cross-validation, and λ = 1 was used. The experiment was evaluated on 12 days of data in total, with roughly 30 minutes of recording per day.
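Putting the pieces together, a minimal training loop might look like the sketch below, reusing the modules and `total_loss` from the earlier sketches. The learning rate of 0.01 and λ = 1 come from the text; the epoch count and `loader` (an assumed DataLoader yielding aligned neural/behavior mini-batches) are illustrative.

```python
import torch

encoder, decoder, condition = Encoder(), Decoder(), ConditionModule()
params = (list(encoder.parameters()) + list(decoder.parameters())
          + list(condition.parameters()))
opt = torch.optim.Adam(params, lr=0.01)   # lr from cross-validation, per the text

for epoch in range(100):                  # epoch count is illustrative
    for x, y in loader:                   # aligned (neural, behavior) mini-batches
        opt.zero_grad()
        loss = total_loss(x, y, encoder, decoder, condition, lam=1.0)
        loss.backward()                   # backpropagate the summed loss
        opt.step()                        # Adam parameter update
```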
5. After training is complete, the model is used to compute the low-dimensional latent representation of the neural signals, namely:

$$z = \phi(X, \theta_{enc})$$

where φ is the trained encoder with parameters $\theta_{enc}$.
FIG. 3 shows a visualization of the dimension-reduced features produced by the algorithm on a single piece of test data. The dimension-reduced neural features are projected into a two-dimensional jPCA space using jPCA; the figure shows the neural trajectories for opposite movement directions in that space. Circular and rectangular points mark the starting points of the two opposite movement directions in the experimental task, solid and dashed curves show the corresponding dynamic neural trajectories, and the thicker lines are the trajectory averages. The direction and arrangement of the trajectories show that the low-dimensional neural features obtained by the method are separable across movement directions, demonstrating that the algorithm effectively reduces the dimension of the neural data while extracting behavior-related neural information.
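jPCA itself is a specialized projection; as a stand-in for a quick look at such trajectories, the sketch below projects the latent sequence to two dimensions with ordinary PCA. This is explicitly not the jPCA projection used for FIG. 3.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_latent_trajectories(z, direction_labels):
    """Project (time, latent_dim) latents to 2-D and draw one trajectory per
    movement direction; plain PCA stands in for jPCA here."""
    z2 = PCA(n_components=2).fit_transform(z)
    for lab in np.unique(direction_labels):
        seg = z2[direction_labels == lab]
        plt.plot(seg[:, 0], seg[:, 1], label=f"direction {lab}")
    plt.legend()
    plt.xlabel("PC1")
    plt.ylabel("PC2")
    plt.show()
```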
The high-dimensional neural signal dimension reduction method based on the long short-term memory self-encoder constructed by the invention provides a new analysis and processing method for the population activity of hundreds of neurons recorded in experiments such as neural electrophysiology and calcium imaging. In FIG. 3, the neural trajectories of the monkey performing the two-dimensional cursor task are well separated, and the trajectories for opposite movement directions are fully opposed, showing good spatial organization and dynamics. On data collected in the monkey two-dimensional cursor task, once the model is trained, the method takes only 0.1 s to process 672-dimensional neural data, a marked speed advantage over existing methods.
Claims (9)
1. A high-dimensional neural signal dimension reduction method based on an autoencoder is characterized by comprising the following steps:
(1) Performing time-frequency analysis on the original continuous neural signals to extract time-frequency characteristics;
(2) Aligning the neural features and the corresponding behavior signals to obtain a complete training data set;
(3) Constructing a self-encoder neural network that contains a condition module and reduces the dimension of the neural features, and training the network with a large number of samples to obtain a neural signal dimension reduction model;
(4) Preprocessing the neural signal under test and feeding it into the dimension reduction model to obtain a low-dimensional neural representation of the signal;
(5) The resulting low-dimensional neural characterization is used for subsequent analysis or decoding work.
2. The method according to claim 1, characterized in that: the continuous neural signals in step (1) comprise local field potential or electrocorticography signals acquired under a paradigm executing a specific task; the specific task refers to a motion-related brain-computer interface task paradigm, comprising a two-dimensional cursor task and a three-dimensional food tracking task, and is used to elicit specific neural activity; the neural signals are preprocessed as follows: the data are first filtered to remove baseline drift and power-line interference, and time-frequency band features of a plurality of frequency bands are then extracted using multi-taper power spectrum estimation.
3. The method according to claim 2, characterized in that: the plurality of frequency bands comprises 0.3-5 Hz, 5-8 Hz, 8-13 Hz, 13-30 Hz, 30-70 Hz, 70-200 Hz, and 200-400 Hz.
4. The method according to claim 1, characterized in that: in step (2), the neural signals and the behavior signals under the task paradigm are acquired synchronously and time-stamped; after the time stamps are aligned, the neural and behavior signals are down-sampled to a specific frequency; the behavior signals vary with the task paradigm and are selected as continuous time-series signals or discrete label signals.
5. The method according to claim 1, characterized in that: the self-encoder neural network in step (3) comprises a dimension reduction module consisting of an encoder and a decoder, and a condition module consisting of a fully connected layer; the encoder compresses the input data of the neural network into low-dimensional latent features and supplies them to the decoder and the condition module; the decoder reconstructs the original neural features from the low-dimensional neural features; and the condition module maps the low-dimensional neural features to the behavior data, acting as a regularizer for the dimension reduction.
6. The method according to claim 5, characterized in that the specific process of training the self-encoder is as follows: the training samples are fed into the network one by one to obtain reconstructed neural data and decoded behavior data; the reconstruction loss between the reconstructed and real neural data and the decoding loss between the decoded and real behavior data are computed respectively; the network parameters are updated by back-propagation with gradient descent using the sum of the two losses; and this process is iterated until the losses converge.
7. The method according to claim 6, characterized in that the loss functions are expressed as follows:

$$\mathcal{L}_{rec}(\theta_{AE}) = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \psi(\phi(X))_i\right)^2$$

$$\mathcal{L}_{dec}(\theta_{CE}) = \frac{1}{m}\sum_{i=1}^{m}\left(Y_i - \omega(\phi(X))_i\right)^2$$

$$\mathcal{L} = \mathcal{L}_{rec}(\theta_{AE}) + \lambda\,\mathcal{L}_{dec}(\theta_{CE})$$

where $\theta_{AE}$ are the trainable parameters of the self-encoder, $\mathcal{L}_{rec}$ is the reconstruction-term loss function, $\theta_{CE}$ are the trainable parameters of the encoder and the condition module, $\mathcal{L}_{dec}$ is the decoding-term loss function, $\lambda$ is the weight of the decoding-term loss function, $n$ is the dimension of the original neural features, $X_i$ is the $i$-th dimension of the input raw neural features, $m$ is the feature dimension of the behavior data, $Y_i$ is the $i$-th dimension of the behavior data, and $\psi$, $\phi$, and $\omega$ denote the decoder, the encoder, and the condition module, respectively.
8. Use of the high-dimensional neural signal dimension reduction method according to claim 1 in the analysis, decoding, and visualization processing of neural signals.
9. The use according to claim 8, wherein the neural signals comprise local field potential signals and cortical electroencephalogram (ECoG) signals.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210785759.XA CN115310585B (en) | 2022-07-04 | 2022-07-04 | High-dimensional neural signal dimension reduction method based on self-encoder and application thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210785759.XA CN115310585B (en) | 2022-07-04 | 2022-07-04 | High-dimensional neural signal dimension reduction method based on self-encoder and application thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115310585A true CN115310585A (en) | 2022-11-08 |
CN115310585B CN115310585B (en) | 2024-08-09 |
Family
ID=83857739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210785759.XA Active CN115310585B (en) | 2022-07-04 | 2022-07-04 | High-dimensional neural signal dimension reduction method based on self-encoder and application thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115310585B (en) |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180101957A1 (en) * | 2016-10-06 | 2018-04-12 | Qualcomm Incorporated | Neural network for image processing |
CN109086805A (en) * | 2018-07-12 | 2018-12-25 | 华南理工大学 | A kind of clustering method constrained based on deep neural network and in pairs |
CN109033415A (en) * | 2018-08-06 | 2018-12-18 | 浙江大学 | A kind of dimensionality reduction and method for visualizing of the multidimensional nerve signal based on laplacian eigenmaps |
CN109443382A (en) * | 2018-10-22 | 2019-03-08 | 北京工业大学 | Vision SLAM closed loop detection method based on feature extraction Yu dimensionality reduction neural network |
CN110393525A (en) * | 2019-06-18 | 2019-11-01 | 浙江大学 | A kind of brain activity detection method based on deep-cycle self-encoding encoder |
CN110276148A (en) * | 2019-06-27 | 2019-09-24 | 上海交通大学 | The feature extraction of micro-structure dimensionality reduction and reconstruct implementation method based on self-encoding encoder |
WO2021051598A1 (en) * | 2019-09-19 | 2021-03-25 | 平安科技(深圳)有限公司 | Text sentiment analysis model training method, apparatus and device, and readable storage medium |
US20210266875A1 (en) * | 2020-02-24 | 2021-08-26 | Qualcomm Incorporated | MACHINE LEARNING FOR ADDRESSING TRANSMIT (Tx) NON-LINEARITY |
CN111513717A (en) * | 2020-04-03 | 2020-08-11 | 常州大学 | Method for extracting brain functional state |
CN111967502A (en) * | 2020-07-23 | 2020-11-20 | 电子科技大学 | Network intrusion detection method based on conditional variation self-encoder |
US20220129071A1 (en) * | 2020-10-27 | 2022-04-28 | Emory University | Systems and Methods for Nonlinear Latent Spatiotemporal Representation Alignment Decoding for Brain-Computer Interfaces |
CN112241478A (en) * | 2020-11-12 | 2021-01-19 | 广东工业大学 | Large-scale data visualization dimension reduction method based on graph neural network |
CN112861625A (en) * | 2021-01-05 | 2021-05-28 | 深圳技术大学 | Method for determining stacking denoising autoencoder model |
CN112869754A (en) * | 2021-01-08 | 2021-06-01 | 浙江大学 | Brain-machine fusion neural signal lie detection method |
Non-Patent Citations (6)
Title |
---|
MIN-KI KIM: "Finding Kinematics-Driven Latent Neural States From Neuronal Population Activity for Motor Decoding", IEEE Transactions on Neural Systems and Rehabilitation Engineering, 22 September 2021 (2021-09-22) *
RAN Xingchen: "Dimensionality Reduction of Local Field Potential Features with Convolution Neural Network in Neural Decoding: A Pilot Study", 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), 9 December 2021 (2021-12-09) *
ZHANG Chenggang; JIANG Jingqing: "Research on a Sparse Denoising Autoencoder Neural Network", Journal of Inner Mongolia University for Nationalities (Natural Science Edition), no. 01, 15 January 2016 (2016-01-15) *
YANG Yunkai; FAN Wenbing; PENG Dongxu: "Driving Behavior Recognition Based on One-Dimensional Convolutional Neural Network and Denoising Autoencoder", Computer Applications and Software, no. 08, 12 August 2020 (2020-08-12) *
YANG Lei: "Structure-Preserving Unsupervised Feature Selection Based on Autoencoder and Manifold Regularization", Computer Science, 21 April 2021 (2021-04-21) *
ZHONG Xinzi; LIAO Wenjian: "Research on Speech Emotion Recognition Method Based on Autoencoder", Electronic Design Engineering, no. 06, 20 March 2020 (2020-03-20) *
Also Published As
Publication number | Publication date |
---|---|
CN115310585B (en) | 2024-08-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||