CN117708546A - Decoding method and device for high-throughput neural signals based on an invasive brain-computer interface - Google Patents


Info

Publication number
CN117708546A
CN117708546A (application CN202410160200.7A)
Authority
CN
China
Prior art keywords
layer
time
gru
motion vector
decoding
Prior art date
Legal status
Granted
Application number
CN202410160200.7A
Other languages
Chinese (zh)
Other versions
CN117708546B (en)
Inventor
吴格非
宋麒
龚业勇
周涛
Current Assignee
Beijing Zhiran Medical Technology Co ltd
Original Assignee
Beijing Zhiran Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhiran Medical Technology Co ltd filed Critical Beijing Zhiran Medical Technology Co ltd
Priority to CN202410160200.7A (granted as CN117708546B)
Publication of CN117708546A
Application granted; publication of CN117708546B
Legal status: Active

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application provides a decoding method and device for high-throughput neural signals based on an invasive brain-computer interface, and a computer-readable storage medium. The decoding method comprises the steps of: predicting motion vector parameters at times 1 through n in sequence, using a trained learning network formed by a linear layer and a dual-layer GRU, based on high-throughput neural signal data of motor imagery at times 1 through n in sequence, where n is a natural number; and deriving a predicted motion trajectory based at least on the predicted motion vector parameter at time n. When processing time-series data, the GRU effectively captures long-term dependencies in the sequence, meets the real-time requirement of high-throughput data processing, and ensures efficient execution of the decoding process.

Description

Decoding method and device for high-throughput neural signals based on an invasive brain-computer interface
Technical Field
The present disclosure relates to the field of brain-computer interfaces, and in particular to a method and apparatus for decoding high-throughput neural signals based on an invasive brain-computer interface, and a computer-readable storage medium.
Background
Neural signal decoding technology is widely applied in neuroscience, brain-computer interfaces, medical research, and other fields. With the continued advancement of neural signal acquisition technology, high-throughput, high-resolution neural signal data are becoming increasingly common, and decoding such data to extract useful information in real time is an important challenge. Existing neural signal decoding techniques typically adopt a Kalman filter or an MLP model as the decoding model. However, the traditional Kalman algorithm generally requires more than 200 ms of computation time on high-throughput data, which falls far short of the real-time requirement for decoding spike data acquired with a 50 ms time window; the MLP model, in turn, is generally poor at processing time-series data and has no structural mechanism for capturing temporal dependencies. Current neural signal decoding techniques therefore find it difficult to satisfy both high-throughput and real-time requirements, and often must sacrifice resolution or sampling rate to be feasible.
Disclosure of Invention
In view of the technical problems in the prior art, the application provides a decoding method and device for high-throughput neural signals based on an invasive brain-computer interface, and a computer-readable storage medium, which solve the problem of decoding high-throughput neural signals in real time and can satisfy the requirements of high throughput and high real-time performance simultaneously.
In a first aspect, an embodiment of the present application provides a method for decoding high-throughput neural signals based on an invasive brain-computer interface, including step S101 and step S102. Step S101: based on high-throughput neural signal data of motor imagery at times 1 through n in sequence, predict the motion vector parameters at times 1 through n in sequence using a trained learning network formed by a linear layer and a dual-layer GRU, where n is a natural number. Step S102: derive a predicted motion trajectory based at least on the predicted motion vector parameter at time n.
In a second aspect, embodiments of the present application provide a decoding apparatus for high-throughput neural signals based on an invasive brain-computer interface, comprising a processor configured to: predict motion vector parameters at times 1 through n in sequence, using a trained learning network formed by a linear layer and a dual-layer GRU, based on high-throughput neural signal data of motor imagery at times 1 through n in sequence, where n is a natural number; and derive a predicted motion trajectory based at least on the predicted motion vector parameter at time n.
In a third aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for decoding high-throughput neural signals based on an invasive brain-computer interface described above.
Compared with the prior art, the beneficial effects of the embodiments of the application are as follows: the trained learning network based on a linear layer and a dual-layer GRU predicts the motion vector parameters at times 1 through n in sequence; when processing time-series data, the GRU effectively captures long-term dependencies in the sequence, meets the real-time requirement of high-throughput data processing, and ensures efficient execution of the decoding process; moreover, adopting the GRU improves the decoding resolution, solving the resolution-sacrifice problem of the prior art.
Drawings
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The accompanying drawings illustrate various embodiments by way of example in general and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Such embodiments are illustrative and not intended to be exhaustive or exclusive of the present apparatus or method.
FIG. 1 is a first flowchart of a method for decoding high-throughput neural signals based on an invasive brain-computer interface according to an embodiment of the present application;
FIG. 2 is a diagram of a model architecture used in a detection phase in the method for decoding high-throughput neural signals based on an invasive brain-computer interface according to an embodiment of the present application;
FIG. 3 is a diagram of a model architecture used in a training phase in the method for decoding high-throughput neural signals based on an invasive brain-computer interface according to an embodiment of the present application;
FIG. 4 is a second flowchart of a method of decoding high-throughput neural signals based on an invasive brain-computer interface according to an embodiment of the present application;
FIG. 5 is a block diagram of a decoding apparatus for high-throughput neural signals based on an invasive brain-computer interface according to an embodiment of the present application.
The reference numerals in the drawings denote components:
100. encoding section; 101. first linear layer; 102. first dual-layer GRU; 200. decoding section; 201. second dual-layer GRU; 202. second linear layer; 203. third linear layer; 300. decoding apparatus for high-throughput neural signals based on an invasive brain-computer interface; 301. processor.
Detailed Description
It will be appreciated that various modifications may be made to the embodiments herein. Therefore, the above description should not be taken as limiting, but merely as exemplification of the embodiments. Other modifications within the scope and spirit of this application will occur to those skilled in the art.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It is also to be understood that, although the present application has been described with reference to some specific examples, those skilled in the art can certainly realize many other equivalent forms of the present application.
The foregoing and other aspects, features, and advantages of the present application will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application will be described hereinafter with reference to the accompanying drawings; however, it is to be understood that the embodiments are merely examples of the application, which may be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary or excessive detail. Therefore, specific structural and functional details herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," each of which may refer to one or more of the same or different embodiments in accordance with the application.
The embodiment of the application provides a decoding method for high-throughput neural signals based on an invasive brain-computer interface. The method may be performed by a decoding apparatus for high-throughput neural signals of the invasive brain-computer interface, which decodes the high-throughput neural signal data so as to predict a person's actions or intentions from the neural signals.
A high-throughput neural signal can be understood as a signal containing the activity of a large number of neurons or brain regions; it is obtained by high-speed, high-density acquisition and processing of large-scale neural signals, yielding more comprehensive and detailed information on neural activity.
The decoding method can be applied in many fields, such as neuroscience research, brain-computer interface control, and medical diagnosis; the application imposes no particular limitation in this regard.
As shown in fig. 1, the decoding method for high-throughput neural signals based on an invasive brain-computer interface includes step S101 and step S102.
Step S101: based on the high-flux neural signal data of the motor imagery from the 1 st time to the n th time in sequence, predicting the motion vector parameters from the 1 st time to the n th time in sequence by utilizing a trained learning network formed by a linear layer and a double-layer GRU, wherein n is a natural number.
Step S102: a predicted motion trajectory is derived based at least on the predicted motion vector parameter at the nth time.
It is understood that step S101 and step S102 are performed in the detection phase.
Optionally, the learning network may predict the motion vector parameters point by point: at each time T, the motion vector parameter is decoded from the acquired spike signal, producing the parameters at times 1 through n in sequence. The motion vector parameters may include motion speed and motion direction.
Optionally, the high-throughput neural signal data may be preprocessed before being fed to the learning network; depending on the actual situation, preprocessing may include operations such as data-format unification and normalization.
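The normalization step mentioned above might look as follows. This is a hypothetical sketch: the patent does not fix a concrete preprocessing scheme, so the per-channel z-score shown here, the array shapes, and the function name are assumptions for illustration only.

```python
import numpy as np

def preprocess(data: np.ndarray) -> np.ndarray:
    """Z-score normalize spike-count data of shape (T, C, F) per channel.

    Hypothetical example of the 'normalization' preprocessing the text
    mentions; the patent does not specify a particular scheme.
    """
    data = data.astype(np.float64)                # unify data format
    mean = data.mean(axis=(0, 2), keepdims=True)  # per-channel mean
    std = data.std(axis=(0, 2), keepdims=True)    # per-channel std
    return (data - mean) / (std + 1e-8)           # avoid divide-by-zero

rng = np.random.default_rng(0)
raw = rng.poisson(lam=3.0, size=(10, 4, 5))       # (T=10, C=4, F=5) spike counts
x = preprocess(raw)                               # per-channel mean ~0, std ~1
```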
Alternatively, the high-throughput neural signal data described above may be acquired by a high-throughput acquisition device.
A GRU (Gated Recurrent Unit) can be understood as employing two gating mechanisms: an update gate and a reset gate. These gates control the flow and updating of information within the recurrent unit, allowing long sequences to be handled better. The update gate and reset gate of the GRU are described in detail below and are not elaborated here.
Optionally, the predicted motion trajectory may take the form of a series of instructions, motion sequences, or control signals for interacting with or controlling an external device, enabling human-computer interaction, rehabilitation, motion assistance, research on brain function, and the like.
It can be appreciated that the learning network may include multiple linear layers and multiple dual-layer GRUs. A dual-layer GRU handles relatively abrupt motion trajectories well and better addresses the severe bias problem of a single-layer GRU. Moreover, compared with a single-layer GRU, a dual-layer GRU models long-term dependencies and temporal relations better, so the motion vector parameters can be predicted more accurately. In addition, the GRU processes data and trains faster than a traditional LSTM and has a simpler structure; on the basis of guaranteed detection accuracy, the processing of the high-throughput neural signal data can even reach the millisecond level.
The GRU improves the decoding resolution, solving the resolution-sacrifice problem of traditional methods; its strong feature-extraction and temporal-modeling capabilities reduce complex data-preprocessing steps, improve resistance to noise and interference, and effectively capture the temporal dependencies in neural signals, addressing the insufficient temporal modeling of traditional methods.
In this method, the trained learning network based on a linear layer and a dual-layer GRU predicts the motion vector parameters at times 1 through n in sequence; when processing time-series data, the GRU effectively captures long-term dependencies in the sequence, meets the real-time requirement of high-throughput data processing, and ensures efficient execution of the decoding process; moreover, adopting the GRU improves the decoding resolution, solving the resolution-sacrifice problem of the prior art.
In some embodiments, deriving the predicted motion trajectory based at least on the predicted motion vector parameter at time n in step S102 specifically includes, iterating over n, for the time n of each iteration: obtaining the predicted motion position at time n+1 based on the motion position at time n and the predicted motion vector parameter at time n; and connecting the motion positions at time n+1 across the iterations to obtain the predicted motion trajectory.
In this way, the motion position at the next time is obtained step by step over many iterations, and connecting the motion positions at time n+1 of each iteration yields a predicted motion trajectory that is robust, i.e., more stable and reliable.
Illustratively, the predicted motion position at time n+2 is obtained based on the motion position at time n+1 and the predicted motion vector parameter at time n+1; by iterating n in this way, each motion position in the time series can be obtained, and the predicted motion trajectory derived from them.
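The iteration described above can be sketched as follows. The 2-D positions, the 50 ms step `dt`, and the function name are illustrative assumptions; the patent only specifies that each new position is obtained from the previous position and the predicted motion vector parameter.

```python
import numpy as np

def integrate_trajectory(p0, velocities, dt=0.05):
    """Chain predicted motion vectors into a trajectory.

    p0         -- starting 2-D motion position
    velocities -- predicted motion vector parameter at each time, shape (n, 2)
    dt         -- time-window length in seconds (50 ms here; illustrative)

    Each step computes position(n+1) = position(n) + v(n) * dt, mirroring
    the per-iteration update described in the text.
    """
    positions = [np.asarray(p0, dtype=float)]
    for v in np.asarray(velocities, dtype=float):
        positions.append(positions[-1] + v * dt)
    return np.stack(positions)   # connected motion positions = trajectory

traj = integrate_trajectory([0.0, 0.0], [[1.0, 0.0], [1.0, 2.0], [0.0, 2.0]])
# traj contains the start position plus one new position per predicted vector.
```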
In some embodiments, the method further comprises: based on the high-throughput neural signal data of motor imagery at times 1 through n in sequence, extracting n pieces of input data whose dimensions comprise time, channel, and feature, and feeding them to the learning network.
Alternatively, the high-throughput neural signal data may be spike signals (neuron discharge pulse signals) with a time-window length of T, and the acquired data may be in the format (T, C, F), where T denotes time, C the number of channels, and F the number of features.
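One hypothetical way to obtain data in the (T, C, F) format from raw spike timestamps, assuming a single spike-count feature per channel per 50 ms window; the feature set, window count, and function name are not specified by the text and are assumptions here.

```python
import numpy as np

def bin_spikes(spike_times, spike_channels, n_channels, win_ms=50.0, n_windows=4):
    """Bin spike timestamps (in ms) into an input tensor of shape (T, C, F).

    Here the single feature (F = 1) is the spike count per channel per
    50 ms window; a real system might derive more features per channel.
    """
    counts = np.zeros((n_windows, n_channels, 1))
    for t, c in zip(spike_times, spike_channels):
        w = int(t // win_ms)              # which time window the spike falls in
        if 0 <= w < n_windows:
            counts[w, c, 0] += 1
    return counts

# Spikes at 10 ms and 60 ms on channel 0, and at 120 ms on channel 1.
x = bin_spikes([10.0, 60.0, 120.0], [0, 0, 1], n_channels=2)
```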
In some embodiments, the linear layer is configured to enrich the input data and feed the enriched feature information to the dual-layer GRU.
In this way, the enriched feature information serves as the input of the dual-layer GRU, so that the dual-layer GRU generates its output from richer feature information.
It will be appreciated that enrichment of input data adds, correlates, or computes additional information to increase the richness and value of the data, providing a more comprehensive and insightful view of it to support deeper analysis, decision-making, and applications.
In some embodiments, the number of channels of the high-throughput neural signal is 500 or more, and the predicted motion trajectory is produced with a real-time latency below 10 ms. The GRU thus effectively captures long-term dependencies in the sequence when processing time-series data and meets the real-time requirement of high-throughput data processing.
Preferably, the predicted motion trajectory can be produced with a real-time latency below 5 ms, meeting high real-time requirements and ensuring efficient execution of the decoding process.
In some embodiments, as shown in fig. 2, the learning network includes an encoding section 100 and a decoding section 200. The encoding section 100 includes a first linear layer 101 and a first dual-layer GRU 102; the decoding section 200 includes a second dual-layer GRU 201 and a second linear layer 202. The n pieces of input data are processed by the first linear layer 101 and then fed to the first dual-layer GRU 102; the hidden states output by the first dual-layer GRU 102 are fed to the second dual-layer GRU 201; and the second linear layer 202 outputs the n predicted motion vector parameters.
This learning network architecture is simple: the encoding section 100 and the decoding section 200 each comprise a dual-layer GRU, which further improves the decoding resolution, strengthens feature extraction and temporal modeling, reduces complex data-preprocessing steps, improves resistance to noise and interference, and effectively captures the temporal dependencies in neural signals, addressing the insufficient temporal modeling of traditional methods.
The input data of the encoding section 100 may be X1-Xn (fig. 2 takes n = 4 as an example); after processing by the first linear layer 101, data L1-Ln are obtained, and after the first dual-layer GRU 102, hidden states H1-Hn. The hidden states H1-Hn are fed to the second dual-layer GRU 201 of the decoding section 200 to obtain data Hout1-Houtn, which then pass through the second linear layer 202 to obtain the motion vector parameters V1'-Vn'.
Alternatively, a sigmoid activation may be employed to determine the update gate zt of the GRU, calculated using the following equation 1.1:

zt = σ(Wz · [ht-1, xt])    (equation 1.1)

wherein zt is the update gate; σ is the sigmoid activation function; Wz is a weight matrix; ht-1 is the hidden state output by the previous cell; and xt is the input state.
Alternatively, a sigmoid activation function may be employed to determine the reset gate rt of the GRU, calculated using the following equation 1.2:

rt = σ(Wr · [ht-1, xt])    (equation 1.2)

wherein rt is the reset gate and Wr is a weight matrix.
Alternatively, the GRU may calculate the candidate hidden state h̃t using a hyperbolic tangent activation, specifically using the following equation 1.3:

h̃t = tanh(W · [rt ⊙ ht-1, xt])    (equation 1.3)

wherein h̃t is the candidate hidden state; W is a weight matrix; tanh is the hyperbolic tangent activation function; and ⊙ denotes element-wise multiplication.
The encoding section 100 and the decoding section 200 may both use the above formulas to calculate the update gate, the reset gate, and the candidate hidden state; this application imposes no particular limitation in this regard.
Alternatively, the encoding section 100 may use the update gate to update the hidden state ht, which serves as the hidden state information, specifically calculated using the following equation 1.4:

ht = (1 − zt) ⊙ ht-1 + zt ⊙ h̃t    (equation 1.4)
The final hidden state ht of the encoding section 100 serves as the initial hidden state of the decoding section 200.
Alternatively, the decoding section 200 may use the update gate to update its hidden state st, specifically calculated using the following equation 1.5:

st = (1 − zt) ⊙ st-1 + zt ⊙ s̃t    (equation 1.5)

wherein s̃t is the candidate hidden state of the decoding section.
Optionally, with the second linear layer 202 as the output layer, the hidden state st output by the second dual-layer GRU 201 passes through the second linear layer 202 to obtain the motion vector parameter Vpre, calculated using the following equation 1.6:

Vpre = Wout · st    (equation 1.6)

wherein Wout is the output weight matrix.
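As a concrete illustration of equations 1.1 through 1.6 wired into the encoder-decoder data flow of fig. 2, the following is a minimal NumPy sketch, not the patent's implementation: bias terms are omitted as in the formulas above, the dimensions and random initialization are illustrative assumptions, and the first and third linear layers are left out for brevity.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_cell(x, h_prev, W_z, W_r, W):
    """One GRU step implementing equations 1.1-1.4 (biases omitted)."""
    hx = np.concatenate([h_prev, x])
    z = sigmoid(W_z @ hx)                                   # eq. 1.1: update gate
    r = sigmoid(W_r @ hx)                                   # eq. 1.2: reset gate
    h_cand = np.tanh(W @ np.concatenate([r * h_prev, x]))   # eq. 1.3: candidate
    return (1.0 - z) * h_prev + z * h_cand                  # eq. 1.4 / 1.5: update

def run_dual_gru(xs, params, h0):
    """Run a two-layer GRU over a sequence; layer 2 consumes layer 1's states."""
    h1, h2 = h0
    out = []
    for x in xs:
        h1 = gru_cell(x, h1, *params[0])
        h2 = gru_cell(h1, h2, *params[1])
        out.append(h2)
    return np.stack(out), (h1, h2)

rng = np.random.default_rng(0)
D, H = 3, 5                                 # feature and hidden sizes (illustrative)

def make_layer(in_dim):
    s = 0.3                                 # small random weights for the demo
    return (s * rng.standard_normal((H, H + in_dim)),
            s * rng.standard_normal((H, H + in_dim)),
            s * rng.standard_normal((H, H + in_dim)))

enc = [make_layer(D), make_layer(H)]        # first dual-layer GRU 102
dec = [make_layer(H), make_layer(H)]        # second dual-layer GRU 201
W_out = rng.standard_normal((2, H))         # second linear layer 202, eq. 1.6

xs = rng.standard_normal((4, D))            # stands in for L1..Ln (n = 4)
H_enc, h_final = run_dual_gru(xs, enc, (np.zeros(H), np.zeros(H)))
# The final encoder hidden states initialise the decoder, as in the text.
H_dec, _ = run_dual_gru(H_enc, dec, h_final)
V_pred = H_dec @ W_out.T                    # eq. 1.6: Vpre = Wout · st
```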
In some embodiments, as shown in fig. 3 and 4, the learning network is trained by the following steps S201 to S205.
Step S201: a third linear layer 203 is added at the input of the decoding section 200.
Step S202: the n pieces of input data are input to the encoding unit 100, and the labeling data of n motion vector parameters corresponding to the n pieces of input data are input to the third linear layer 203.
Step S203: n predicted motion vector parameters from the second linear layer 202 are determined.
Step S204: parameters of the learning network comprising the third linear layer 203 are adjusted using mean square error of the n predicted motion vector parameters and labeling data of the n motion vector parameters as a loss function.
Step S205: the third linear layer 203 is removed from the trained learning network, and the trained learning network after removing the third linear layer 203 is used for predicting the sequence of motion vector parameters.
Training the learning network formed by the linear layers and the dual-layer GRUs with the above steps improves the performance and generalization capability of the network, which is continuously optimized through iteration.
During the training stage, the input of the decoding section 200 includes not only the hidden states output by the first dual-layer GRU 102 but also the labeling data of the n motion vector parameters corresponding to the n pieces of input data, thereby enabling training of the learning network.
During the training stage, the input data of the encoding section 100 may be X1-Xn (fig. 3 takes n = 4 as an example); after processing by the first linear layer 101, data L1-Ln are obtained, and after the first dual-layer GRU 102, hidden states H1-Hn. The hidden states H1-Hn are fed to the second dual-layer GRU 201 of the decoding section 200. The other part of the decoder input is V1-Vn, i.e., the labeling data of the n motion vector parameters corresponding to the n pieces of input data, which passes through the third linear layer 203 to yield data U1-Un. The hidden states H1-Hn and the data U1-Un are jointly fed into the second dual-layer GRU 201 to obtain output data Hout1-Houtn, which pass through the second linear layer 202 to obtain V1'-Vn'. The mean square error between the n predicted motion vector parameters V1'-Vn' and the labeling data V1-Vn is then used as the loss function to adjust the parameters of the learning network, including the third linear layer 203.
Alternatively, the loss function may compute the mean square error between the n predicted motion vector parameters and the labeling data of the n motion vector parameters.
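A minimal sketch of the teacher-forcing data flow and the loss of step S204. The shapes, names, and random stand-in values are illustrative assumptions, and the gradient-based weight updates (handled by an autograd framework in practice) are omitted.

```python
import numpy as np

def mse_loss(v_pred, v_label):
    """Mean square error between predicted and labeled motion vector
    parameters -- the loss function of step S204."""
    v_pred = np.asarray(v_pred, dtype=float)
    v_label = np.asarray(v_label, dtype=float)
    return float(np.mean((v_pred - v_label) ** 2))

# Teacher forcing at training time: alongside the hidden states H1..Hn,
# the decoder also receives U1..Un, the labels passed through the third
# linear layer (all shapes illustrative).
n, H, d_v = 4, 5, 2
rng = np.random.default_rng(1)
H_enc = rng.standard_normal((n, H))        # H1..Hn from the encoder
V_label = rng.standard_normal((n, d_v))    # labeled motion vectors V1..Vn
W3 = rng.standard_normal((H, d_v))         # third linear layer 203
U = V_label @ W3.T                         # U1..Un, fed to the decoder GRU

V_pred = rng.standard_normal((n, d_v))     # stand-in for predictions V1'..Vn'
loss = mse_loss(V_pred, V_label)           # minimized w.r.t. the network weights
```

At inference time the third linear layer is removed, so only H1..Hn drive the decoder, matching step S205.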
In some embodiments, as shown in fig. 2, a downstream layer in the first dual-layer GRU 102 is connected in series to a downstream layer in the second dual-layer GRU 201, and an upstream layer in the first dual-layer GRU 102 is connected in series to an upstream layer in the second dual-layer GRU 201.
In this way, the dual-layer GRUs better model long-term dependencies and temporal relations, so the motion vector parameters can be predicted more accurately.
Optionally, for the detection phase, adjacent upstream and downstream layers of the second dual-layer GRU 201 are connected in series.
The embodiment of the application further provides a decoding apparatus 300 for high-throughput neural signals based on an invasive brain-computer interface. As shown in fig. 5, the decoding apparatus 300 includes a processor 301 configured to: predict motion vector parameters at times 1 through n in sequence, using a trained learning network formed by a linear layer and a dual-layer GRU, based on high-throughput neural signal data of motor imagery at times 1 through n in sequence, where n is a natural number; and derive a predicted motion trajectory based at least on the predicted motion vector parameter at time n.
In this apparatus, the trained learning network based on a linear layer and a dual-layer GRU predicts the motion vector parameters at times 1 through n in sequence; when processing time-series data, the GRU effectively captures long-term dependencies in the sequence, meets the real-time requirement of high-throughput data processing, and ensures efficient execution of the decoding process; moreover, adopting the GRU improves the decoding resolution, solving the resolution-sacrifice problem of the prior art.
In some embodiments, the processor 301 is further configured to: through iteration n, for each iteration's nth time: obtaining a predicted (n+1) -th motion position based on the n-th motion position and the predicted n-th motion vector parameter; the motion positions of the (n+1) th time of each iteration are connected to obtain a predicted motion trail.
In some embodiments, the processor 301 is further configured to: based on the high-flux neural signal data of the motor imagery from time 1 to time n in sequence, n pieces of input data of dimensions including time, channel and feature are extracted and fed to the learning network.
In some embodiments, the linear layer is configured to enrich the input data and feed the enriched feature information to the dual-layer GRU.
In some embodiments, the number of channels of the high-throughput neural signal is 500 or more, and the predicted motion trajectory is produced with a real-time latency below 10 ms.
In some embodiments, the learning network includes an encoding section 100 and a decoding section 200. The encoding section 100 includes a first linear layer 101 and a first dual-layer GRU 102; the decoding section 200 includes a second dual-layer GRU 201 and a second linear layer 202. The n pieces of input data are processed by the first linear layer 101 and then fed to the first dual-layer GRU 102; the first dual-layer GRU 102 outputs hidden states to the second dual-layer GRU 201; and the second linear layer 202 outputs the n predicted motion vector parameters.
In some embodiments, the processor 301 is further configured to train the learning network by: adding a third linear layer 203 at the input of the decoding section 200; inputting the n pieces of input data into the encoding section 100, and inputting the labeling data of the n motion vector parameters corresponding to the n pieces of input data into the third linear layer 203; determining n predicted motion vector parameters from the second linear layer 202; adjusting the parameters of the learning network, including the third linear layer 203, using as the loss function the mean square error between the n predicted motion vector parameters and the labeling data of the n motion vector parameters; and removing the third linear layer 203 from the trained learning network, the trained learning network without the third linear layer 203 being used to predict the sequence of motion vector parameters.
In some embodiments, a downstream layer in the first dual-layer GRU 102 is connected in series to a downstream layer in the second dual-layer GRU 201, and an upstream layer in the first dual-layer GRU 102 is connected in series to an upstream layer in the second dual-layer GRU 201.
Embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for decoding high-throughput neural signals based on an invasive brain-computer interface described above.
Note that the various units in the embodiments of the present application may be implemented as computer-executable instructions stored in a memory which, when executed by a processor, implement the corresponding steps; as hardware with the corresponding logic computing capability; or as a combination of software and hardware (firmware). In some embodiments, the processor may be implemented as an FPGA, ASIC, DSP chip, SoC (system on a chip), MPU (e.g., without limitation, Cortex), etc. The processor may be communicatively coupled to the memory and configured to execute the computer-executable instructions stored therein. The memory may include read-only memory (ROM), flash memory, random access memory (RAM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM, static memory (e.g., flash memory, static random access memory), etc., on which the computer-executable instructions are stored in any format. The computer-executable instructions may be accessed by the processor, read from the ROM or any other suitable memory location, and loaded into the RAM for execution by the processor to implement the methods of the various embodiments of the present application.
It should be noted that the components of the system of the present application are divided logically according to the functions to be implemented, but the present application is not limited thereto; the components may be re-divided or combined as needed. For example, several components may be combined into a single component, or some components may be further decomposed into more sub-components.
Various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in a system according to the embodiments of the present application may in practice be implemented using a microprocessor or a digital signal processor (DSP). The present application may also be embodied as an apparatus or device program (e.g., a computer program or a computer program product) for performing part or all of the methods described herein. Such a program embodying the present application may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form. In addition, the application may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
Furthermore, although exemplary embodiments have been described herein, the scope of the present application includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across the various embodiments), adaptations, or alterations. The elements in the claims are to be construed broadly based on the language employed in the claims and are not limited to the examples described in the present specification or during the prosecution of the present application, which examples are to be construed as non-exclusive.
The above description is intended to be illustrative, not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other, and other embodiments may be devised by those of ordinary skill in the art upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the application; this is not to be interpreted as an intention that an unclaimed disclosed feature is essential to any claim. Rather, the subject matter of the present application may lie in less than all of the features of a particular disclosed embodiment. Thus, the claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with one another in various combinations or permutations. The scope of the application should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit it; the scope of the present application is defined by the claims. Those skilled in the art may make various modifications and equivalent arrangements to the present application, and such modifications and equivalents are also considered to be within the scope of the present application.
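For concreteness, the final step of the decoding method — deriving the predicted motion trajectory by connecting successive predicted motion positions, each obtained from the previous position and the predicted motion vector parameter at that time — can be sketched as follows. Reading the motion vector parameter as a 2-D velocity and using a 10 ms time step are assumptions made for illustration only.

```python
def integrate_trajectory(p0, velocities, dt=0.01):
    # Connect successive predicted motion positions: each predicted motion
    # vector parameter (read here as a 2-D velocity, an assumption) advances
    # the previous position by velocity * dt. A dt of 0.01 s corresponds to
    # the 10 ms real-time level described in the embodiments.
    trajectory = [tuple(p0)]
    x, y = p0
    for vx, vy in velocities:
        x, y = x + vx * dt, y + vy * dt
        trajectory.append((x, y))
    return trajectory

# Three predicted motion vector parameters yield a four-point trajectory.
path = integrate_trajectory((0.0, 0.0), [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
```

The trajectory is simply the polyline through the accumulated positions; any smoothing or coordinate mapping to an output device would happen downstream of this step.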

Claims (10)

1. A method for decoding high-throughput neural signals based on an invasive brain-computer interface, comprising the steps of:
predicting motion vector parameters from the 1st time to the nth time in sequence, by utilizing a trained learning network formed of a linear layer and a dual-layer GRU, based on high-throughput neural signal data of motor imagery from the 1st time to the nth time in sequence, where n is a natural number; and
deriving a predicted motion trajectory based at least on the predicted motion vector parameter at the nth time.
2. The decoding method of claim 1, wherein deriving the predicted motion trajectory based at least on the predicted motion vector parameter at the nth time comprises, iterating over each time n:
obtaining the predicted motion position at the (n+1)-th time based on the motion position at the nth time and the predicted motion vector parameter at the nth time; and
connecting the motion positions at the (n+1)-th time of each iteration to obtain the predicted motion trajectory.
3. The decoding method of claim 1, further comprising:
based on the high-throughput neural signal data of motor imagery from the 1st time to the nth time in sequence, extracting n pieces of input data with dimensions including time, channel, and feature, and feeding them to the learning network.
4. The decoding method according to claim 3, wherein the linear layer is configured to enrich the feature information of the input data and feed the enriched feature information to the dual-layer GRU.
5. The decoding method according to any one of claims 1-3, wherein the number of channels of the high-throughput neural signal is 500 or more, and the predicted motion trajectory is obtained in real time with a latency of 10 ms or less.
6. The decoding method according to claim 3, wherein the learning network comprises an encoding part and a decoding part, the encoding part comprising a first linear layer and a first dual-layer GRU, and the decoding part comprising a second dual-layer GRU and a second linear layer, wherein the n pieces of input data are fed to the first dual-layer GRU after being processed by the first linear layer, hidden state information output by the first dual-layer GRU is fed to the second dual-layer GRU, and the second linear layer is configured to output the n predicted motion vector parameters.
7. The decoding method of claim 6, wherein the learning network is trained by:
adding a third linear layer at the input of the decoding part;
inputting the n pieces of input data into the encoding part, and inputting labeling data of n motion vector parameters corresponding to the n pieces of input data into the third linear layer;
determining n predicted motion vector parameters from the second linear layer;
adjusting parameters of the learning network, including the third linear layer, using the mean square error between the n predicted motion vector parameters and the labeling data of the n motion vector parameters as a loss function; and
removing the third linear layer from the trained learning network, wherein the trained learning network with the third linear layer removed is used for predicting the sequence of motion vector parameters.
8. The decoding method of claim 6, wherein the downstream layer of the first dual-layer GRU is connected in series to the downstream layer of the second dual-layer GRU, and the upstream layer of the first dual-layer GRU is connected in series to the upstream layer of the second dual-layer GRU.
9. A device for decoding high-throughput neural signals based on an invasive brain-computer interface, comprising a processor configured to: predict motion vector parameters from the 1st time to the nth time in sequence, by utilizing a trained learning network formed of a linear layer and a dual-layer GRU, based on high-throughput neural signal data of motor imagery from the 1st time to the nth time in sequence, where n is a natural number; and
derive a predicted motion trajectory based at least on the predicted motion vector parameter at the nth time.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the method for decoding high-throughput neural signals based on an invasive brain-computer interface as claimed in any one of claims 1 to 8.
CN202410160200.7A 2024-02-05 Decoding method and device of high-flux nerve signals based on invasive brain-computer interface Active CN117708546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410160200.7A CN117708546B (en) 2024-02-05 Decoding method and device of high-flux nerve signals based on invasive brain-computer interface


Publications (2)

Publication Number Publication Date
CN117708546A true CN117708546A (en) 2024-03-15
CN117708546B CN117708546B (en) 2024-05-10


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022251472A1 (en) * 2021-05-26 2022-12-01 The Regents Of The University Of California Methods and devices for real-time word and speech decoding from neural activity
CN115462803A (en) * 2022-07-29 2022-12-13 上海电机学院 BG-Attention-based electroencephalogram signal denoising method, device and storage medium
CN115982617A (en) * 2022-11-29 2023-04-18 西北工业大学 EEG signal classification method based on multi-segment signal random recombination and interactive bidirectional RNN
CN117251778A (en) * 2023-09-07 2023-12-19 北京师范大学珠海校区 Electroencephalogram signal classification method and device, electronic equipment and medium


Non-Patent Citations (5)

Title
GOKHAN YALINIZ et al., "Using independently recurrent networks for reinforcement learning based unsupervised video summarization", Multimedia Tools and Applications, 12 February 2021 *
LIU Zheng et al., "Research Progress in Decoding Scalp EEG Information Based on Deep Learning", Chinese Journal of Biomedical Engineering, vol. 39, no. 2, 30 April 2020 *
ZHOU Tao, "Deep Learning Methods for Inertial Sensor Data Oriented to Gesture Recognition", China Master's Theses Full-text Database, Information Science and Technology, 15 July 2020 *
ZHENG Yongkang, "Research on Motion Intention Prediction Technology for Lower-Limb Exoskeleton Robot Control", China Master's Theses Full-text Database, Information Science and Technology, 15 February 2021 *
GAO Shangkai et al., "Frontiers of Brain-Computer Interaction Research", Shanghai: Shanghai Jiao Tong University Press, 31 December 2019, page 388 *

Similar Documents

Publication Publication Date Title
Choi et al. EmbraceNet: A robust deep learning architecture for multimodal classification
Ariav et al. An end-to-end multimodal voice activity detection using wavenet encoder and residual networks
WO2016145850A1 (en) Construction method for deep long short-term memory recurrent neural network acoustic model based on selective attention principle
Praveen et al. Cross attentional audio-visual fusion for dimensional emotion recognition
Lin et al. A novel multichannel dilated convolution neural network for human activity recognition
Latif et al. Deep architecture enhancing robustness to noise, adversarial attacks, and cross-corpus setting for speech emotion recognition
Senthilkumar et al. Speech emotion recognition based on Bi-directional LSTM architecture and deep belief networks
CN107704924B (en) Construction method of synchronous self-adaptive space-time feature expression learning model and related method
Guo et al. A hybrid deep representation learning model for time series classification and prediction
US11386288B2 (en) Movement state recognition model training device, movement state recognition device, methods and programs therefor
Haque et al. Gru-based attention mechanism for human activity recognition
Gajurel et al. A fine-grained visual attention approach for fingerspelling recognition in the wild
Mohammad et al. Primitive activity recognition from short sequences of sensory data
CN115630653A Network popular language emotion analysis method based on BERT and BiLSTM
Setyono et al. Recognizing word gesture in sign system for Indonesian language (SIBI) Sentences using DeepCNN and BiLSTM
CN117708546B (en) Decoding method and device of high-flux nerve signals based on invasive brain-computer interface
Bakhshi et al. End-to-end speech emotion recognition based on time and frequency information using deep neural networks
Shekofteh et al. MLP-based isolated phoneme classification using likelihood features extracted from reconstructed phase space
CN117708546A (en) Decoding method and device of high-flux nerve signals based on invasive brain-computer interface
CN115994221A (en) Memristor-based text emotion detection system and method
Sommer et al. Simultaneous and spatiotemporal detection of different levels of activity in multidimensional data
Guan et al. Research on human behavior recognition based on deep neural network
Granger et al. Cross attentional audio-visual fusion for dimensional emotion recognition
Zhang et al. Multi-modal Data Transfer Learning-based LSTM Method for Speech Emotion Recognition
Yang et al. Visual oriented encoder: integrating multimodal and multi-scale contexts for video captioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant