CN116187152B - Adaptive decoding method for invasive brain-computer interface based on dynamic evolution calculation - Google Patents


Info

Publication number: CN116187152B (other version: CN116187152A, Chinese)
Application number: CN202211301943.9A
Authority: CN (China)
Inventors: 祁玉 (Qi Yu), 祝歆韵 (Zhu Xinyun), 王跃明 (Wang Yueming)
Assignee: Zhejiang University (ZJU)
Legal status: Active (application granted)

Classifications

    • G06F30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G06F2111/06 — Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]
    • G06F2119/02 — Reliability analysis or reliability optimisation; failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an adaptive decoding method for invasive brain-computer interfaces based on dynamic evolutionary computation. A set of linear models is initialized during training; during online decoding in the brain-computer interface system, the method switches automatically among these linear models according to the data, and at appropriate moments the model pool evolves itself on recent neural signals so that its weights and parameters adapt, thereby accomplishing dynamic decoding of brain signals. Through autonomously evolving multiple models and a time-varying integration strategy, the method fuses linear decoders that capture different characteristics, improves the accuracy and stability of motor brain-computer interface systems, and alleviates, to a certain extent, the unstable decoding performance caused by the nonstationarity of neural signals.

Description

Adaptive decoding method for invasive brain-computer interface based on dynamic evolution calculation
Technical Field
The invention relates to the fields of brain-computer interfaces and neural signal decoding, and in particular to an adaptive decoding method for invasive brain-computer interfaces based on dynamic evolutionary computation.
Background
Brain-computer interfaces span neuroscience, cognitive science, computer science, control and information science, medicine, and other disciplines, and have gradually become a frontier interdisciplinary research direction with potentially revolutionary impact, of great significance to individuals and society.
An invasive motor brain-computer interface collects neural signals via microelectrode arrays implanted in the motor cortex and decodes them into movement signals, aiming to establish a direct channel for neural information communication and control between the brain and external devices. The technology promises to restore part of the motor function of paralyzed patients by letting neural signals control devices such as exoskeletons and computer cursors.
In an invasive motor brain-computer interface system, the neural decoding algorithm is critical. Researchers have proposed many algorithms for decoding movement information from neural signals, including population vector methods, optimal linear estimation, recursive Bayesian decoders, deep neural networks, and others.
Among these methods, the Kalman filter incorporates the trajectory's temporal dynamics as prior knowledge to obtain more accurate predictions, and has therefore been widely applied to online cursor decoding and exoskeleton control, achieving the best online control performance.
Most decoders used by current invasive motor brain-computer interfaces assume a stable functional relationship between neural signals and movement and therefore use a fixed decoding model. During online decoding, however, the acquired neural signals occasionally pick up noise or drop out; moreover, because neurons are plastic, brain activity patterns may change over time or across behavioral states. This noise and these changes make the mapping from neural signals to movement signals unstable and continuously time-varying. A fixed decoding function can thus produce unstable, inaccurate decoding results and must be retrained at intervals to maintain performance.
Decoders proposed for the nonstationarity of neural signals fall into two classes. The first still uses a fixed model and relies on periodic retraining or gradual online updating of model parameters to maintain performance. The second uses dynamic models to track changes in the neural signals, which avoids the cost of retraining and is better suited to long-term decoding tasks. Few studies in this second class, however, model the nonstationarity of the neural signal directly. Multi-model dynamic integration methods do attempt to model it directly, but their candidate model pool is fixed once generated, and the models themselves cannot change with the neural signal during the test phase.
Therefore, how to construct an effective, evolvable candidate model pool and model the dynamics of neural signals through multi-model integration, so as to obtain stable and robust decoding performance, is an important open problem in motor neural decoding.
Disclosure of Invention
The invention provides an adaptive decoding method for invasive brain-computer interfaces based on dynamic evolutionary computation, which can greatly reduce the influence of neural signal nonstationarity on decoding performance and improve the robustness of the decoder.
An adaptive decoding method for an invasive brain-computer interface based on dynamic evolutionary computation comprises the following steps:
(1) Acquire raw motor neural signals, preprocess them, and divide them proportionally into a training set, a validation set, and a test set;
(2) Initialize the model pool, specifically: set the number of models in the pool and the proportion of all data allocated to each model; cut the training set sequentially according to the set proportion and assign the segments to the models in order; for each model, fit a linear model on its assigned training samples, evaluate it on the validation set, and take the trained model as a candidate model in the pool;
(3) Use the constructed candidate models to dynamically characterize the relationship between the state variable and the observation variable, where the state variable is the movement signal and the observation variable is the neural signal;
(4) Dynamically combine the candidate models in the pool according to the Bayesian model averaging rule to serve as the observation function of the state-space model;
(5) At decoding time, perform state estimation on the neural signals to be decoded using the constructed state-space model, estimating and integrating the states corresponding to the different candidate models to obtain the decoded movement signal;
during decoding, when the model-pool update condition is reached at some moment, the model pool is updated by evolutionary computation, specifically:
(5-1) Determine the fitness value of each model in the evolutionary computation from the model likelihood in the Bayesian multi-model integration framework;
(5-2) Select the best model according to the fitness values; apply a differential evolution algorithm to each model in the pool, evolving it toward the best model through iterated mutation, crossover, and selection until the change in fitness falls below a threshold, at which point the model-pool update is complete.
In a data-driven manner, the invention uses a differential evolution algorithm to autonomously update the parameters of the models in the pool and uses these models in place of the observation function of a Kalman filter, realizing an online dynamically integrated decoder that better adapts to changes in the neural signal. During decoding, models are automatically selected and combined according to a Bayesian update mechanism, which greatly reduces the influence of neural signal nonstationarity on decoding performance and improves the decoder's robustness.
In step (1), preprocessing proceeds as follows: acquire the raw motor neural signal from the hardware, choose an appropriate window size to compute the neural firing rate, extract the valid data segments according to the state labels, and normalize and smooth the data to obtain the preprocessed motor neural signal.
Preferably, the data from the paradigm's preparation and return phases may be removed, and the actual operation phase selected for analysis.
Preferably, normalization and smoothing can use Matlab's z-score and movmean functions: each dimension of the movement signal is normalized and smoothed, each neuron's firing rate is smoothed, and the smoothing window size is chosen according to actual requirements.
Preferably, the data in step (1) may be divided into a training set, a validation set, and a test set in a 6:2:2 ratio.
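As a sketch of this preprocessing and split (the z-score and movmean behavior mirrors the Matlab functions named above; zero-padded edges in `movmean` and the window size are illustrative assumptions, since Matlab shrinks the window at boundaries):

```python
import numpy as np

def zscore(x):
    # Normalize each row (feature dimension) to zero mean, unit variance.
    mu = x.mean(axis=1, keepdims=True)
    sd = x.std(axis=1, keepdims=True)
    return (x - mu) / np.where(sd == 0, 1.0, sd)

def movmean(x, w):
    # Centered moving average along time (edges zero-padded, unlike
    # Matlab's movmean, which shrinks the window at the boundaries).
    k = np.ones(w) / w
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)

def split_622(n_steps):
    # 6:2:2 train / validation / test split over contiguous time steps.
    n_tr, n_va = int(0.6 * n_steps), int(0.2 * n_steps)
    return slice(0, n_tr), slice(n_tr, n_tr + n_va), slice(n_tr + n_va, n_steps)
```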
In step (2), the model pool is initialized as follows:
Let the number of models in the pool be $M$, and denote an arbitrary model in the pool by $H_m$, $m = 1, \dots, M$.
Let the proportion of all training data assigned to each model be $r_{seg}$, and the time length of the training set be $l_{all}$. The training set is cut sequentially into subsets and assigned to the models in order; the time length of each model's training subset is $l_{seg} = l_{all} \cdot r_{seg}$. The cutting windows are uniformly distributed over the training time points, with stride $l_{stride} = \lfloor (l_{all} - l_{seg}) / (M - 1) \rfloor$ between consecutive windows, so the indices of the $i$-th training subset range from $(i-1) \cdot l_{stride} + 1$ to $\min\{l_{all},\ (i-1) \cdot l_{stride} + l_{seg}\}$.
Cutting the training data by these indices yields the movement and neural signals for each model, $\{X_m \in \mathbb{R}^{d_x \times l_m},\ Y_m \in \mathbb{R}^{d_y \times l_m}\}$, where $X_m$ is the movement data assigned to model $m$; $Y_m$ is the neural data assigned to model $m$; $d_x$ and $d_y$ are the feature dimensions of the movement and neural data, respectively; and $l_m$ is the time length of the data assigned to model $m$.
From the assigned data, a linear model is fitted by least squares:
$$Y_m = H_m \tilde{X}_m + \epsilon_m$$
where $\tilde{X}_m \in \mathbb{R}^{(d_x+1) \times l_m}$ is the movement data matrix $X_m$ with an extra all-ones row representing the bias term; $H_m \in \mathbb{R}^{d_y \times (d_x+1)}$ is the linear mapping matrix (the $m$-th linear model); and $\epsilon_m$ is i.i.d. observation noise.
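A minimal sketch of this initialization, assuming the uniform stride $l_{stride} = (l_{all} - l_{seg})/(M-1)$ given above (the floor and the `rcond` choice are illustrative):

```python
import numpy as np

def init_model_pool(X, Y, M, r_seg):
    # X: movement data (d_x, l_all); Y: neural data (d_y, l_all).
    l_all = X.shape[1]
    l_seg = int(l_all * r_seg)
    l_stride = max(1, (l_all - l_seg) // max(1, M - 1))  # uniform window spacing
    pool = []
    for i in range(M):
        lo = i * l_stride
        hi = min(l_all, lo + l_seg)
        Xb = np.vstack([X[:, lo:hi], np.ones(hi - lo)])  # append all-ones bias row
        # Least-squares fit of Y_m = H_m @ Xb  =>  H_m has shape (d_y, d_x + 1)
        H = np.linalg.lstsq(Xb.T, Y[:, lo:hi].T, rcond=None)[0].T
        pool.append(H)
    return pool
```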
In step (3), the constructed candidate models dynamically characterize the relationship between the state variable and the observation variable as follows:
$$x_k = f(x_{k-1}) + v_{k-1}$$
$$y_k = h_k(x_k) + n_k$$
where $k$ is the discrete time step; $x_k \in \mathbb{R}^{d_x}$ is the movement data, i.e., the state variable; $y_k \in \mathbb{R}^{d_y}$ is the neural data, i.e., the observation variable; $v_k$ and $n_k$ are i.i.d. state-transition and observation noise; and the observation function at time $k$ is obtained by integrating the models in the pool, $h_k(x_k) = \sum_{m=1}^{M} \alpha_{k,m}\, h_m(x_k)$, where $\alpha_{k,m}$ is the integration weight of each model.
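The integrated observation function can be sketched as follows (the bias-augmented linear form of each candidate model follows the fitting in step (2)):

```python
import numpy as np

def h_k(pool, alpha, x):
    # h_k(x) = sum_m alpha_{k,m} * H_m @ [x; 1] -- a weighted combination of
    # the linear candidate models serves as the current observation function.
    xb = np.append(x, 1.0)
    return sum(a * (H @ xb) for a, H in zip(alpha, pool))
```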
In step (4), the state estimate under the Bayesian model averaging rule is:
$$p(x_k \mid y_{0:k}) = \sum_{m=1}^{M} p_m(x_k \mid y_{0:k})\, \alpha_{k,m}$$
where $p_m(x_k \mid y_{0:k})$ is the posterior probability of the state when the $m$-th model is selected at time $k$, and $\alpha_{k,m}$ is the posterior probability that the $m$-th model is selected at time $k$, computed as:
$$\alpha_{k,m} = \frac{\alpha_{k|k-1,m}\, p_m(y_k \mid y_{0:k-1})}{\sum_{j=1}^{M} \alpha_{k|k-1,j}\, p_j(y_k \mid y_{0:k-1})}$$
where $\alpha_{k|k-1,m}$ is the prior probability of selecting the $m$-th model at time $k$, and $p_m(y_k \mid y_{0:k-1})$ is the marginal likelihood of the $m$-th model at time $k$.
The prior probability of selecting the $m$-th model at time $k$ is:
$$\alpha_{k|k-1,m} = \frac{\alpha_{k-1,m}^{\,\alpha}}{\sum_{j=1}^{M} \alpha_{k-1,j}^{\,\alpha}}$$
where $\alpha_{k-1,m}$ is the probability of the $m$-th model at time $k-1$, and $\alpha$ is the forgetting factor, $0 < \alpha < 1$.
The marginal likelihood of the $m$-th model at time $k$ is:
$$p_m(y_k \mid y_{0:k-1}) = \int p_m(y_k \mid x_k)\, p(x_k \mid y_{0:k-1})\, dx_k$$
where $p_m(y_k \mid x_k)$ is the likelihood function of the $m$-th model with respect to the movement state $x_k$.
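These two updates — the forgetting-factor prior and the Bayes posterior over models — can be sketched as (the exponent form of the forgetting prior follows the formula above):

```python
import numpy as np

def update_model_weights(alpha_prev, marg_lik, forget=0.5):
    # Prior: exponential forgetting flattens the previous model posterior,
    # keeping recently poor models from being ruled out forever.
    prior = alpha_prev ** forget
    prior /= prior.sum()
    # Posterior: reweight by each model's marginal likelihood and normalize.
    post = prior * marg_lik
    return post / post.sum()
```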
In step (5), state estimation uses a particle filter, computing $p_m(x_k \mid y_{0:k})$ and $p_m(y_k \mid y_{0:k-1})$ from particles $\{x_k^i, w_k^i\}_{i=1}^{N_s}$.
For the particle-based $p_m(x_k \mid y_{0:k})$: particles are first propagated through the state-transition prior, $x_k^i \sim p(x_k \mid x_{k-1}^i)$; then, by the importance sampling principle:
$$p_m(x_k \mid y_{0:k}) \approx \sum_{i=1}^{N_s} \tilde{w}_{k,m}^i\, \delta(x_k - x_k^i), \qquad \tilde{w}_{k,m}^i \propto w_{k-1}^i\, p_m(y_k \mid x_k^i)$$
where $\tilde{w}_{k,m}^i$ is the normalized importance weight of the $i$-th particle when the $m$-th model is selected.
For the particle-based $p_m(y_k \mid y_{0:k-1})$: given the particles, the prior $p(x_k \mid y_{0:k-1})$ is computed first; with the likelihoods $p_m(y_k \mid x_k)$, $m = 1, \dots, M$, known, the posterior can then be computed. As in step (3), the state-transition prior is used as the importance function, i.e., $q(x_k \mid x_{k-1}, y_{0:k}) = p(x_k \mid x_{k-1})$. The distribution of $x_k$ is thus approximated by the particles, $p(x_k \mid y_{0:k-1}) \approx \sum_{i=1}^{N_s} w_{k-1}^i\, \delta(x_k - x_k^i)$, which gives:
$$p_m(y_k \mid y_{0:k-1}) \approx \sum_{i=1}^{N_s} w_{k-1}^i\, p_m(y_k \mid x_k^i)$$
Note that particle filters typically suffer from particle degeneracy: after several iterations only a few particles retain high weight. A resampling step is therefore used, discarding particles with negligible weight and replicating those with large weight, to alleviate the degeneracy problem.
The specific process of step (5-1) is as follows:
The likelihood of each model is obtained by summing the per-particle likelihoods in the particle filter:
$$p_m(y_k \mid y_{0:k-1}) \approx \sum_{i=1}^{N_s} w_{k-1}^i\, p_m(y_k \mid x_k^i)$$
where $p_m(y_k \mid y_{0:k-1})$ is the likelihood of each model, $x_k^i$ are the particles, $N_s$ is the number of particles, and $p_m(y_k \mid x_k^i)$ is the likelihood of the $i$-th particle under the $m$-th model.
In the model-pool evolution, the model pool serves as the population, and each model (i.e., linear mapping matrix) $H_m$, flattened into a one-dimensional vector $\theta_m \in \mathbb{R}^D$, serves as an individual. The fitness of the $m$-th individual in the population at time $k$ is:
$$f_k(\theta_m) = \sum_{t=k-l_{pre}+1}^{k} \log p_m(y_t \mid y_{0:t-1})$$
where $l_{pre}$ is the length of the most recent time window. Note that neural decoding is a sequential problem, so the model pool is not updated at every time step; instead, some amount of recent data is typically accumulated before the evolutionary update.
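The fitness computation above can be sketched as (the sum of recent log marginal likelihoods is the quantity described; the array layout is illustrative):

```python
import numpy as np

def fitness(log_marg_lik, l_pre):
    # log_marg_lik: (T, M) log marginal likelihoods, one column per model.
    # Fitness of each individual = sum over the most recent l_pre time steps.
    recent = np.asarray(log_marg_lik)[-l_pre:]
    return recent.sum(axis=0)
```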
The specific process of step (5-2) is as follows:
Given the initialized population $\{\theta_m^G\}_{m=1}^{M}$, where $G$ denotes the differential evolution iteration round:
The mutated population of round $G$, $\{v_m^G\}_{m=1}^{M}$, is generated from the parent population $\{\theta_m^G\}_{m=1}^{M}$ according to the mutation strategy:
$$v_m^G = \theta_m^G + F_m\,(\theta_{best,p}^G - \theta_m^G) + F_m\,(\theta_{r_1}^G - \tilde{\theta}_{r_2}^G)$$
where $\theta_{best,p}^G$ is randomly selected from the top $100p\%$ of individuals in the population, $p \in [0.05, 0.2]$; $r_1$ and $r_2$ are model indices randomly selected from $[1, M]$, mutually unequal and not equal to the current index $m$; $F_m$ is the mutation factor; and $\tilde{\theta}_{r_2}^G$ is randomly selected from the external archive model pool.
Preferably, during selection in each differential evolution round $G$, a mutated individual whose fitness is worse than the original's is not carried into the next iteration but is added to the external archive model pool $A$ as a candidate for subsequent mutations; if the current population is $P$, then $\tilde{\theta}_{r_2}^G$ is randomly selected from $A \cup P$.
Preferably, in each round $G$, the mutation factors are generated from a Cauchy distribution: $F_m = \mathrm{randc}(\mu_F, 0.1)$, where $\mu_F$ is the current mean mutation factor and $0.1$ is the scale parameter. Generated values greater than 1 are set to 1, and values less than 0 are set to 0. When the round completes, the mean is updated from the set $S_F$ of successful mutation factors: $\mu_F = (1 - c)\,\mu_F + c\,\mathrm{mean}_L(S_F)$, where $c \in (0, 1]$ is a proportion parameter controlling the influence of the current round and $\mathrm{mean}_L(S_F)$ is the Lehmer mean:
$$\mathrm{mean}_L(S_F) = \frac{\sum_{F \in S_F} F^2}{\sum_{F \in S_F} F}$$
The crossover operation generates the post-crossover population by selecting each component from either the parent or the mutated individual. The crossed individual is a $D$-dimensional vector $u_m^G$ whose components are:
$$u_{m,j}^G = \begin{cases} v_{m,j}^G, & \text{if } \mathrm{rand}_j \le CR_m \text{ or } j = j_{rand} \\ \theta_{m,j}^G, & \text{otherwise} \end{cases}$$
where $j_{rand}$ is an index randomly selected from $[1, D]$ and $CR_m$ is the crossover factor.
Preferably, in each round $G$, the crossover factors are generated from a Gaussian distribution: $CR_m = \mathrm{randn}(\mu_{CR}, 0.1)$, where $\mu_{CR}$ is the current mean crossover factor and $0.1$ is the standard deviation. Generated values greater than 1 are set to 1, and values less than 0 are set to 0. When the round completes, the mean is updated from the set $S_{CR}$ of successful crossover factors: $\mu_{CR} = (1 - c)\,\mu_{CR} + c\,\mathrm{mean}_A(S_{CR})$, where $\mathrm{mean}_A$ is the arithmetic mean.
In the selection stage, the fitness function $f(\cdot)$ determines whether an individual remains in the new population:
$$\theta_m^{G+1} = \begin{cases} u_m^G, & \text{if } f(u_m^G) \ge f(\theta_m^G) \\ \theta_m^G, & \text{otherwise} \end{cases}$$
Each evolution of the model pool iterates the mutation, crossover, and selection operations on every model — typically for tens to hundreds of rounds — until the maximum number of iterations $G_{max}$ is reached or the change in fitness falls below the threshold, at which point the model-pool update is complete.
During model-pool updating, strategies of variant evolution (adaptive update timing) and a history model pool determine when and how to update the pool, as follows:
The default is to update the model pool at a fixed time interval $t_{up}$; however, too large an interval makes updates untimely, while too small an interval causes many unnecessary updates. The variant-evolution strategy therefore selects update times adaptively based on model likelihood. It maintains a log-likelihood list $L_{max\text{-}ever}$ storing, for each time step, the maximum log-likelihood over all candidate models. Each time a new neural signal enters the decoder, the last three log-likelihoods in the list are averaged as the current log-likelihood $l_{cur}$, and the three log-likelihoods before them are averaged as the previous log-likelihood $l_{pre}$. The model pool is then updated when $l_{cur} < r_{up} \cdot l_{pre}$, where $r_{up} \in [0, 1]$ is an update coefficient.
The default is to update the model pool with all individuals after differential evolution. However, differential evolution tends to drive all models toward the best one, leaving the pool short of the diversity needed to characterize the time-variability of neural signals. The history-model-pool strategy therefore builds a history model pool $P^{his}$ that stores the best model at each moment, with the same size as the main pool. When a differential evolution run finishes, models randomly selected from the history pool replace part of the evolved pool. A retention proportion parameter $r_{pre}$ controls how many history models are kept in the main pool: $M_{pre} = r_{pre} \cdot M$. The final updated pool is thus the evolved pool with $M_{pre}$ of its members replaced by $\{\theta^{his}_i\}_{i \in I_{pre}}$, where the superscript $his$ indicates models drawn from the history pool and $I_{pre}$ is a set of $M_{pre}$ random indices selecting which history models are retained.
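The likelihood-triggered update condition and the history-pool mixing can be sketched as (the choice of replacing the tail of the evolved pool, and the minimum list length, are illustrative assumptions):

```python
import numpy as np

def should_update(l_max_ever, r_up=0.9):
    # Average the last three max-log-likelihoods (l_cur) and the three
    # before them (l_pre); update when performance drops below r_up * l_pre.
    if len(l_max_ever) < 6:
        return False
    l_cur = sum(l_max_ever[-3:]) / 3.0
    l_pre = sum(l_max_ever[-6:-3]) / 3.0
    return l_cur < r_up * l_pre

def mix_history(pool_evolved, pool_history, r_pre=0.3, rng=None):
    # Replace M_pre = r_pre * M evolved models with randomly chosen history
    # models to preserve diversity after differential evolution.
    rng = rng or np.random.default_rng(0)
    M = len(pool_evolved)
    m_pre = int(r_pre * M)
    idx = rng.choice(len(pool_history), size=m_pre, replace=False)
    return pool_evolved[:M - m_pre] + [pool_history[i] for i in idx]
```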
The invention provides a dynamically adaptive brain-computer interface decoding method based on multi-model evolutionary integration, applied to neural signal decoding built on the classical state-space model and particle filtering, and reduces to a certain extent the impact of neural signal nonstationarity. The method outperformed the Kalman filter algorithm in testing, demonstrating its effectiveness.
Drawings
FIG. 1 compares function traces on simulation data between the method of the present invention and a multi-model integration method without evolution;
FIG. 2 is a graph of the validity of a "variant update" strategy in the method employed in the present invention;
FIG. 3 is a graph comparing decoding performance of the method employed by the present invention on monkey datasets and clinical datasets with other methods.
Detailed Description
The invention will be described in further detail with reference to the drawings and examples; the examples below are intended to aid understanding of the invention and in no way limit it.
The study employed multiple data sets, including a simulation data set, a monkey data set, and a clinical data set. All clinical and experimental procedures in this study were approved by the Medical Ethics Committee of the Second Affiliated Hospital, Zhejiang University School of Medicine (ethical review No. 2019-158). Informed consent was obtained verbally from the participant and his immediate relatives and signed by his legal representative. The volunteer was a 72-year-old male who had suffered a car accident three years earlier, leaving him quadriplegic after a C4-level cervical spinal cord injury. The volunteer could move only his neck, with normal language and comprehension abilities for all tasks. For limb movement, his skeletal muscle strength score was 0/5, a complete loss of limb motor control.
Two 96-channel Utah intracortical microelectrode arrays (4 mm long, 1.4 mm Utah array, Blackrock Microsystems, Salt Lake City, Utah, USA) were implanted in the volunteer's left primary motor cortex, one in the hand-knob area and the other 2 mm lateral to it. Implantation was guided by computed tomography and functional magnetic resonance imaging. Before implantation, the participant was asked to perform elbow flexion and extension movements, and the active areas of the motor cortex were confirmed by functional MRI scanning. A mechanical arm assisted electrode implantation during surgery. The participant had one week of recovery before neural signal recording and BCI training tasks began.
The participant performed BCI training tasks each weekday, resting on weekends. Daily training lasted about 3 hours, including signal-recording preparation, impedance testing, spike sorting, and paradigm tasks. The experiment was stopped whenever the participant felt tired or developed a physical abnormality (e.g., fever or urinary tract infection). The entire experiment lasted half a year, with neural signals recorded for 12 weeks.
The 2D cursor control task was performed on a computer display 1.5 meters in front of the volunteer. The task contains eight targets at the top, bottom, left, right, and four corners of the screen. The targets may be displayed sequentially in clockwise order or in a pseudo-random manner. The relevant parameters of the task may be configured in the task settings, including the distance of the target from the center of the screen, the diameter of the target sphere, the maximum trial time, the arrival threshold, the minimum dwell time, and the number of repetitions after failure. In each trial, the volunteer was asked to control a blue cursor sphere from the center of the screen to a red target sphere. In a successful trial, the distance between the centers of the blue and red spheres must be less than the preset arrival threshold and the dwell time no shorter than the preset dwell time. The maximum duration of each trial is set to 2 to 5 seconds. An audible cue may be given at the end of each trial to indicate success or failure. A maximum of 4 attempts were allowed after each failed trial. The default setting displays the target sphere 15 cm from the center of the screen, with a diameter of 5 cm.
The invention provides an adaptive decoding method for invasive brain-computer interfaces based on dynamic evolutionary computation, which, in a data-driven manner, uses a differential evolution algorithm to autonomously update the parameters of the models in the pool and uses these models in place of the observation function of a Kalman filter, realizing an online dynamically integrated decoder that better adapts to changes in the neural signal. During decoding, models are automatically selected and combined according to a Bayesian update mechanism, which greatly reduces the influence of neural signal nonstationarity on decoding performance and improves the decoder's robustness.
An adaptive decoding method for an invasive brain-computer interface based on dynamic evolutionary computation comprises the following steps:
(1) Motor neural signal preprocessing: acquire the raw motor neural signal from the hardware, choose an appropriate window size to compute the firing rate, extract the valid data segments according to the state labels, and normalize and smooth the data to obtain the preprocessed motor neural signal; the data are divided into training, validation, and test sets in a reasonable proportion.
Specifically, the experiments recorded neural signals with the NeuroPort system (NSP, Blackrock Microsystems). Neural activity was amplified, digitized, and recorded at 30 kHz. Using the Central software suite (Blackrock Microsystems), the action potential detection threshold of each high-pass-filtered (250 Hz cutoff) electrode was set between −6.5 and −5.5 times RMS. At the start of each day's tasks, researchers manually sorted the spikes, which took about 25 to 35 minutes. Spiking activity was converted to firing rates in 20 ms bins and low-pass filtered with an exponential smoothing function with a 450 ms window.
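The binning and exponential smoothing described here can be sketched as (the kernel's decay constant `tau_bins` is an illustrative assumption; the text specifies only the 20 ms bins and the 450 ms window):

```python
import numpy as np

def firing_rates(counts, bin_ms=20, window_ms=450, tau_bins=7.5):
    # counts: (channels, bins) spike counts per bin -> smoothed rates in Hz.
    rates = counts / (bin_ms / 1000.0)
    n = window_ms // bin_ms                      # 22 taps for 450 ms / 20 ms
    k = np.exp(-np.arange(n) / tau_bins)         # causal exponential kernel
    k /= k.sum()                                 # unit gain
    return np.apply_along_axis(lambda r: np.convolve(r, k)[:r.size], 1, rates)
```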
(2) Model pool initialization, specifically including:
setting the number of models in the model pool as M, and expressing any model in the model pool as
Setting the proportion of each model distribution data to all training data as r seg The length of time of the training dataset is l all . The training set is cut into training subsets in sequence and sequentially distributed to the models, and then the time length of the training data subset distributed to each model is l seg =l all ·r seg . The cutting windows of each training subset are uniformly distributed at training time points, and the intervals of the front window and the rear window are as followsThus, the subscript of the ith training subset ranges from (i-1). L stride +1 to min { l all ,(i-1)·l stride +l seg }。
Cutting the training data subset according to the subscript to obtain a neural signal and a motion signal corresponding to each model wherein ,/>Representation model->The assigned motion data is used to determine,representation model->Assigned neural data d x and dy Characteristic dimensions of the motion data and the neural data are represented, respectively.
From the assigned data, a linear model is fitted using a least squares method:
wherein ,for a motion data matrix, ratio->The one dimension is added to represent the bias term. />Is a linear mapping matrix (here representing the mth linear model), a +.>Representing independent co-distributed observation noise.
In the simulation data of this example, the number of models $M$ is set to 50, the proportion $r_{seg}$ of all training data assigned to each model is set to 0.1, the movement data dimension $d_x$ is 1, the neural data dimension $d_y$ is 2, and the training set length $l_{all}$ is 650.
In the monkey data of this example, $M$ is set to 20, $r_{seg}$ to 0.5, $d_x$ is 2, $d_y$ is 20, and $l_{all}$ is 650.
In the clinical data of this example, $M$ is set to 20, $r_{seg}$ to 0.5, $d_x$ is 2, $d_y$ is 20, and $l_{all}$ is 600.
(3) State space model based on dynamic integration:
(3-1) dynamically characterizing the relationship between the state variables and the observed variables for the plurality of candidate models constructed as follows:
x k =f(x k-1 )+v k-1
y k =h k (x k )+n k
where k denotes a discrete time step; x_k ∈ R^{d_x} represents the motion data, i.e., the state variable; y_k ∈ R^{d_y} represents the neural data, i.e., the observed variable; v_k and n_k denote independent and identically distributed state-transition noise and observation noise; h_k(x_k) = Σ_{m=1}^{M} α_{k,m} h_m(x_k) is the observation function at time k, obtained by integrating the models in the model pool, where α_{k,m} is the integration weight of each model.
(3-2) State estimation under the Bayesian model averaging rule is as follows:
p(x_k | y_{0:k}) = Σ_{m=1}^{M} p(m | y_{0:k}) p_m(x_k | y_{0:k})
where p_m(x_k | y_{0:k}) is the posterior probability of the state when the m-th model is selected at time k; p(m | y_{0:k}) is the posterior probability of selecting the m-th model at time k, computed as:
p(m | y_{0:k}) = p(m | y_{0:k-1}) p_m(y_k | y_{0:k-1}) / Σ_{j=1}^{M} p(j | y_{0:k-1}) p_j(y_k | y_{0:k-1})
where p(m | y_{0:k-1}) is the prior probability of selecting the m-th model at time k; p_m(y_k | y_{0:k-1}) is the marginal likelihood of selecting the m-th model at time k.
The prior probability of selecting the m-th model at time k is computed with a forgetting factor:
p(m | y_{0:k-1}) = π_{k-1,m}^α / Σ_{j=1}^{M} π_{k-1,j}^α
where π_{k-1,m} is the probability of selecting the m-th model at time k−1; α is the forgetting factor, 0 < α < 1.
In this example, the forgetting factor setting varies from 0.1 to 0.9. When the forgetting factor is set to 0.9, the influence of the motion state at the forward moment is considered to a great extent, and thus the decoded motion state is smoother.
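The model-probability update described above (forgetting-factor prior followed by a likelihood-weighted posterior) can be sketched as a single step. The function name `model_posterior` is illustrative; the formula follows the forgetting-factor prior reconstructed above.

```python
import numpy as np

def model_posterior(prev_post, likelihood, alpha):
    """One Bayesian-model-averaging step: raise the previous model
    posterior to the forgetting factor alpha to obtain the prior,
    multiply by each model's marginal likelihood, and renormalize.

    prev_post, likelihood : (M,) arrays;  0 < alpha < 1.
    """
    prior = prev_post ** alpha
    prior /= prior.sum()          # forgetting-factor prior
    post = prior * likelihood     # Bayes update
    return post / post.sum()
```

A smaller alpha flattens the prior more strongly, so the weights react faster to changes in the likelihoods, which matches the smoothing behavior described in the example.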
The marginal likelihood of selecting the m-th model at time k is computed as:
p_m(y_k | y_{0:k-1}) = ∫ p_m(y_k | x_k) p(x_k | y_{0:k-1}) dx_k
where p_m(y_k | x_k) is the likelihood function of the m-th model with respect to the motion parameter x_k.
(3-3) The state estimation employs a particle filtering algorithm, computing p_m(x_k | y_{0:k}) and p(m | y_{0:k}) from particles.
(3-3-1) Particle-based computation of p_m(x_k | y_{0:k}): first, particles x_k^i are obtained from the state-transition prior p(x_k | x_{k-1}); then, by the importance sampling principle:
p_m(x_k | y_{0:k}) ≈ Σ_{i=1}^{N_s} w_{k,m}^i δ(x_k − x_k^i)
where w_{k,m}^i denotes the normalized importance weight of the i-th particle when the m-th model is selected, and δ(·) is the Dirac delta function.
In this example, the training set is used as the particle set, so that instability caused by random scattering of particles is avoided.
(3-3-2) Particle-based computation of p(m | y_{0:k}): given p(m | y_{0:k-1}), the prior probability of each model is first computed using the forgetting factor of step (3-2). With the likelihoods p_m(y_k | y_{0:k-1}), m = 1, ..., M known, the posterior probability p(m | y_{0:k}) can then be computed. In step (3-1), the state-transition prior is used as the importance function, i.e., q(x_k | x_{k-1}, y_{0:k}) = p(x_k | x_{k-1}). Thus the distribution of x_k can be approximated by particles, and the marginal likelihood is obtained as:
p_m(y_k | y_{0:k-1}) ≈ Σ_{i=1}^{N_s} w_{k-1}^i p_m(y_k | x_k^i)
Note that particle filtering typically suffers from particle degeneracy: after several iterations, only a small number of particles retain high weights. A resampling method is therefore employed to discard particles with negligible weight and duplicate high-weight particles, alleviating the degeneracy problem.
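The weight update and resampling steps above can be sketched as follows. This is a generic sketch, not the patented decoder: the importance-weight update assumes the state-transition prior as importance function (as stated in step (3-3-2)), and systematic resampling is one common choice among several resampling schemes.

```python
import numpy as np

def update_weights(weights, likelihoods):
    """Importance-weight update when the state-transition prior is the
    importance function: w_k ∝ w_{k-1} * p(y_k | x_k)."""
    w = weights * likelihoods
    return w / w.sum()

def systematic_resample(particles, weights, rng):
    """Systematic resampling: draw N evenly spaced positions through the
    cumulative weights, dropping low-weight particles and duplicating
    high-weight ones, then reset all weights to 1/N."""
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(N, 1.0 / N)
```

After resampling, the particle set concentrates on high-likelihood states while the particle count stays fixed, which is exactly the degeneracy mitigation described above.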
(4-1) The fitness value of each model in the evolutionary computation is determined from the model likelihood in the Bayesian multi-model integration framework, as follows:
The likelihood of each model is obtained by summing the likelihoods of the individual particles in the particle filter:
p_m(y_k | y_{0:k-1}) ≈ Σ_{i=1}^{N_s} w^i p_m(y_k | x_k^i)
where p_m(y_k | y_{0:k-1}) is the likelihood of each model, x_k^i are the particles, N_s is the number of particles, and p_m(y_k | x_k^i) is the likelihood of the i-th particle under the m-th model.
In model pool evolution, the model pool serves as the population, and each model (i.e., its linear mapping matrix H_m) is flattened into a one-dimensional vector θ_m ∈ R^D as an individual. The fitness value of the m-th individual in the population at time k is:
f(θ_m) = Σ_{j=k-l_pre+1}^{k} log p_m(y_j | y_{0:j-1})
where l_pre is the length of the most recent time window. Note that neural decoding is a sequential problem, and the model pool is not updated at every time step; instead, some data from the most recent time steps are typically accumulated before the evolutionary models are updated.
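The fitness computation above reduces to summing each model's recent log-likelihoods. A minimal sketch, assuming a history array of per-step log-likelihoods is maintained (the name `fitness` and the (T, M) layout are illustrative):

```python
import numpy as np

def fitness(loglik_history, l_pre):
    """Fitness of each candidate model: the sum of its log-likelihoods
    over the most recent l_pre decoding steps.

    loglik_history : (T, M) array of per-step model log-likelihoods.
    Returns an (M,) array of fitness values.
    """
    return loglik_history[-l_pre:].sum(axis=0)
```

The model with the largest fitness is the current best individual used by the mutation strategy below.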
(4-2) selecting an optimal model according to the fitness value, and updating each model in the model pool by using a differential evolution algorithm, wherein the method comprises the following steps:
Let the initialized population be {θ_m^G}, m = 1, ..., M, where G denotes the iteration round of differential evolution.
The mutated population of round G, {v_m^G}, is obtained from the parent population {θ_m^G} according to a specific mutation strategy. To avoid premature convergence of the population in differential evolution and to keep the population diverse, the "DE/current-to-pbest/1" mutation strategy with an external archive is used here:
v_m^G = θ_m^G + F · (θ_pbest^G − θ_m^G) + F · (θ_r1^G − θ̃_r2^G)
where θ_pbest^G is randomly selected from the top 100p% individuals of the population, usually with p ∈ [0.05, 0.2]; r1 and r2 are model indices randomly selected from [1, M] that are mutually unequal and not equal to the current model index m; F is the mutation factor; and θ̃_r2^G is randomly selected from the union of the population and the external archive model pool.
In the selection step of each differential evolution iteration round G, a mutated individual whose fitness value is smaller than that of the original individual is not selected into the next iteration; instead, it is added to the external archive model pool A as a candidate donor for subsequent mutations. If the current population is P, then θ̃_r2^G is randomly selected from the set A ∪ P.
In each differential evolution iteration round G, the mutation factor F is generated from a Cauchy distribution: F = randc(μ_F, 0.1), where μ_F is the current mean mutation factor and 0.1 is the scale parameter. Generated values greater than 1 are set to 1, and values less than 0 are set to 0. When the round finishes, μ_F is updated from the set of successful mutation factors S_F: μ_F = (1 − c) · μ_F + c · mean_L(S_F), where c ∈ (0, 1] is a proportional parameter controlling the influence of the current iteration round, and mean_L(S_F) denotes the Lehmer mean:
mean_L(S_F) = Σ_{F ∈ S_F} F² / Σ_{F ∈ S_F} F
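The Cauchy sampling and Lehmer-mean adaptation of μ_F described above can be sketched directly. The function names are illustrative; the truncation to [0, 1] follows the rule stated in the text (note that some adaptive-DE variants instead regenerate non-positive F values).

```python
import numpy as np

def sample_F(mu_F, size, rng):
    """Draw mutation factors from Cauchy(mu_F, 0.1), setting values
    above 1 to 1 and values below 0 to 0, as described above."""
    F = mu_F + 0.1 * rng.standard_cauchy(size)
    return np.clip(F, 0.0, 1.0)

def update_mu_F(mu_F, S_F, c):
    """Move mu_F toward the Lehmer mean of the successful factors:
    mean_L(S) = sum(S^2) / sum(S)."""
    S = np.asarray(S_F, dtype=float)
    if S.size == 0:            # no successes this round: keep mu_F
        return mu_F
    return (1 - c) * mu_F + c * (np.sum(S ** 2) / np.sum(S))
```

The Lehmer mean weights larger successful factors more heavily than the arithmetic mean, which counteracts the downward bias of the truncated Cauchy samples.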
the "cross" operation produces a cross population by randomly selecting a component from each dimension of the parent population or the mutated population. The intersected individual is a D-dimensional vectorEach component of the vector is:
wherein ,jrand Is from [1, D]CR is a list of crossing factors.
In each differential evolution iteration round G, the crossover factor CR is generated from a Gaussian distribution: CR = randn(μ_CR, 0.1), where μ_CR is the current mean crossover factor and 0.1 is the standard deviation. Generated values greater than 1 are set to 1, and values less than 0 are set to 0. When the round finishes, μ_CR is updated from the set of successful crossover factors S_CR: μ_CR = (1 − c) · μ_CR + c · mean_A(S_CR), where c ∈ (0, 1] is a proportional parameter controlling the influence of the current iteration round, and mean_A(S_CR) denotes the arithmetic mean.
In the "selection" operation, the fitness function f(·) determines whether an individual remains in the new population:
θ_m^{G+1} = u_m^G if f(u_m^G) ≥ f(θ_m^G), otherwise θ_m^G
the model pool evolution process will iterate several tens to several hundreds of rounds once, the termination condition is that the maximum iteration number G is reached max Or when the fitness value decreases by no more than a certain number of consecutive rounds.
(4-3) The "variant evolution" and "historical model pool" strategies are as follows:
The default setting updates the model pool at a fixed time interval t_up; however, too large an interval leads to untimely updates, while too small an interval leads to many unnecessary updates. The "variant evolution" strategy therefore selects update times adaptively based on model likelihood. The method maintains a log-likelihood list L_max-ever, which stores the maximum log-likelihood over all candidate models at each time step. Whenever a new neural signal enters the decoder, the method averages the last three log-likelihoods in the list as the log-likelihood l_cur of the current time, and averages the three log-likelihoods preceding them as the log-likelihood l_pre of the previous time. The model pool is then updated when l_cur < r_up · l_pre, where r_up ∈ [0, 1] is the introduced update coefficient.
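The adaptive update trigger just described can be sketched compactly. The function name `should_update` is illustrative, and the comparison is implemented exactly as stated in the text (l_cur < r_up · l_pre); note that its direction presumes likelihood values for which a drop signals degradation.

```python
import numpy as np

def should_update(L_max_ever, r_up):
    """Trigger a model-pool update when the mean of the last three
    maximum log-likelihoods drops below r_up times the mean of the
    three preceding them."""
    if len(L_max_ever) < 6:       # not enough history yet
        return False
    l_cur = np.mean(L_max_ever[-3:])
    l_pre = np.mean(L_max_ever[-6:-3])
    return bool(l_cur < r_up * l_pre)
```

Averaging over three steps smooths out single-step likelihood noise so that isolated outliers do not trigger spurious updates.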
The default setting updates the model pool with all individuals after differential evolution. However, differential evolution tends to drive all models toward the best model, leaving the model pool short of the diversity needed to characterize the time-varying nature of neural signals. The "historical model pool" strategy therefore builds a historical model pool that stores the optimal model at each time step, with size equal to that of the master model pool. When the differential evolution iterations finish, the method randomly selects models from the historical model pool and replaces part of the models in the evolved pool. A retention proportion parameter r_pre controls how many historical models are retained in the master model pool: M_pre = r_pre · M. The finally updated model pool thus consists of M − M_pre evolved models together with M_pre models randomly selected from the historical model pool.
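The historical-model mixing step above can be sketched as follows. The function name `mix_history` and the choice to overwrite uniformly random slots of the evolved pool are illustrative; the text specifies only that M_pre = r_pre · M randomly selected historical models replace part of the evolved pool.

```python
import numpy as np

def mix_history(evolved_pool, history_pool, r_pre, rng):
    """Replace a fraction r_pre of the evolved pool with models drawn
    at random from the history pool, preserving pool diversity."""
    M = len(evolved_pool)
    M_pre = int(r_pre * M)
    out = [m.copy() for m in evolved_pool]
    slots = rng.choice(M, size=M_pre, replace=False)            # slots to overwrite
    picks = rng.choice(len(history_pool), size=M_pre, replace=False)
    for s, h in zip(slots, picks):
        out[s] = history_pool[h].copy()
    return out
```

Keeping a fraction of historical models counteracts the convergence pressure of differential evolution, so the pool retains models suited to earlier signal regimes.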
in the simulation data of this example, p in the top 100×p% of the population is set to 0.1, the ratio parameter c for controlling the current iteration round effect is set to 0.05, and the initial variation factor μ F Set to 0.1, initial crossover factor μ CR Set to 0.1. When updating fixedlyInterval t up Set to 15, maximum iteration number G max Set to 100 and the interval round of the early termination operation set to 10 rounds. History data length of each update use l pre 30. Retention proportion r of "historical model pool" policy pre Set to 0.8.
In the monkey data of this example, p in the top 100 x p% of the population was set to 0.2, the scaling parameter c used to control the current iteration round effect was set to 0.05, and the initial variation factor μ F Set to 0.2, initial crossover factor μ cR Set to 0.1. Fixed update time interval t up Set to 15, maximum iteration number G max Set to 300 and the interval round of the early termination operation set to 20 rounds. History data length of each update use l pre 15. Retention proportion r of "historical model pool" policy pre Set to 0.5.
In the clinical data of this example, p in the top 100 x p% of population was set to 0.2, the scaling parameter c used to control the current iteration round effect was set to 0.05, and the initial variation factor μ F Set to 0.3, initial crossover factor μ CR Set to 0.1. Fixed update time interval t up Set to 15, update parameter r of "update with change" policy up Set to 0.4, maximum iteration number G max Set to 300 and the interval round of the early termination operation set to 20 rounds. History data length of each update use l pre 15. Retention proportion r of "historical model pool" policy pre Set to 0.3.
(5) Performance evaluation of the multi-model evolutionary integration method: the performance and effectiveness of the method are evaluated by comparing it with other methods on the test set data.
The present example employs multiple data sets, including a simulation data set, a monkey data set, and a clinical data set.
As shown in fig. 1, the process by which the multi-model evolutionary integrated decoder tracks changes of the true function on the simulated dataset is illustrated. Compared with a multi-model integrated decoder without evolution, the function-fitting trajectory of the multi-model evolutionary integration method tracks the changes of the true function, demonstrating the effectiveness of the method.
As shown in fig. 2, decoding performance with and without the "variant evolution" strategy is compared in four scenarios of the simulated dataset: slow monotonic, fast monotonic, slow non-monotonic, and fast non-monotonic changes of the true function. The results show that the strategy improves decoding performance in all four cases, demonstrating its effectiveness.
As shown in fig. 3, the decoding performance of the multi-model evolutionary integration method of the invention is compared with other methods on the monkey datasets and the clinical dataset. On each dataset, the decoding performance of the method is superior to the other methods, such as particle filtering, Kalman filtering, rehearsal Kalman filtering, and the non-evolutionary multi-model dynamic integration method, demonstrating the effectiveness of the method.
The foregoing embodiments have described the technical solution and advantages of the present invention in detail. It should be understood that the foregoing embodiments are merely illustrative of the present invention and are not intended to limit it; any modifications, additions, and equivalents made within the scope of the principles of the present invention shall be included in the scope of protection of the invention.

Claims (10)

1. An invasive brain-computer interface self-adaptive decoding method based on dynamic evolution calculation is characterized by comprising the following steps:
(1) The method comprises the steps of obtaining an original motor nerve signal, preprocessing the original motor nerve signal, and dividing the original motor nerve signal into a training set, a verification set and a test set according to a proportion;
(2) Initializing a model pool, which specifically comprises the following steps: setting the number of models in a model pool and the proportion of each model distribution data to all data; cutting the training set according to the set proportion of the distribution data in sequence and then distributing the training set to each model in sequence; for each model, learning a linear model by using the allocated training samples, evaluating by using a verification set, and taking the trained model as a candidate model in a model pool;
(3) Dynamically characterizing the relationship between the state variable and the observed variable by using the constructed multiple candidate models; wherein, the state variable is a motion signal, and the observation variable is a nerve signal;
(4) Dynamically combining candidate models in the model pool according to the Bayesian model average rule to serve as observation functions of the state space model;
(5) In the application process, the state estimation is carried out on the neural signals to be decoded by utilizing the constructed state space model, the states corresponding to different candidate models are estimated and integrated, and the decoded motion signals are obtained;
in the decoding process, when a certain moment reaches a model pool updating condition, the model pool is updated by using evolutionary computation, which comprises the following specific steps:
(5-1) determining an fitness value of each model in the evolutionary computation according to model likelihood in the Bayesian multi-model integration framework;
and (5-2) selecting an optimal model according to the fitness value, using a differential evolution algorithm for each model in the model pool, and evolving towards the optimal model through mutation, crossover and selection iteration operation until the fitness value change is smaller than a threshold value, and finishing updating the model pool.
2. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 1, wherein in step (2), the model pool is initially set as follows:
setting the number of models in the model pool to M, and denoting an arbitrary model in the pool by h_m, m = 1, ..., M;
setting the proportion of the data assigned to each model relative to all training data to r_seg, the time length of the training set being l_all; sequentially cutting the training set into training subsets and assigning them to the models in turn, the time length of the training data subset assigned to each model being l_seg = l_all · r_seg;
the neural and motion signals corresponding to each model are X_m ∈ R^{l_m × d_x} and Y_m ∈ R^{l_m × d_y}, wherein X_m denotes the motion data assigned to model h_m; Y_m denotes the neural data assigned to model h_m; d_x and d_y denote the feature dimensions of the motion data and the neural data, respectively; l_m denotes the time length of the motion data assigned to model h_m;
from the assigned data, a linear model is fitted using the least squares method:
Y_m = X̃_m H_m + E_m
wherein X̃_m is the motion data matrix, with one dimension added to X_m to represent the bias term; H_m is the linear mapping matrix, here representing the m-th linear model; E_m represents independent and identically distributed observation noise.
3. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 1, wherein in step (3), the relationship between the constructed plurality of candidate models dynamically characterizes the state variable and the observed variable is as follows:
x k =f(x k-1 )+v k-1
y k =h k (x k )+n k
wherein k denotes a discrete time step; x_k ∈ R^{d_x} represents the motion data, i.e., the state variable; y_k ∈ R^{d_y} represents the neural data, i.e., the observed variable; v_k and n_k denote independent and identically distributed state-transition noise and observation noise; h_k(x_k) = Σ_{m=1}^{M} α_{k,m} h_m(x_k) is the observation function at time k, obtained by integrating the models in the model pool, wherein α_{k,m} is the integration weight of each model.
4. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 1, wherein in step (4), the state estimation under the Bayesian model averaging rule is as follows:
p(x_k | y_{0:k}) = Σ_{m=1}^{M} p(m | y_{0:k}) p_m(x_k | y_{0:k})
wherein p_m(x_k | y_{0:k}) is the posterior probability of the state when the m-th model is selected at time k; p(m | y_{0:k}) is the posterior probability of selecting the m-th model at time k, computed as:
p(m | y_{0:k}) = p(m | y_{0:k-1}) p_m(y_k | y_{0:k-1}) / Σ_{j=1}^{M} p(j | y_{0:k-1}) p_j(y_k | y_{0:k-1})
wherein p(m | y_{0:k-1}) is the prior probability of selecting the m-th model at time k; p_m(y_k | y_{0:k-1}) is the marginal likelihood of selecting the m-th model at time k.
5. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 4, wherein the prior probability of selecting the m-th model at time k is computed with a forgetting factor:
p(m | y_{0:k-1}) = π_{k-1,m}^α / Σ_{j=1}^{M} π_{k-1,j}^α
wherein π_{k-1,m} is the probability of selecting the m-th model at time k−1; α is the forgetting factor, 0 < α < 1.
6. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 4, wherein the marginal likelihood of selecting the m-th model at time k is computed as:
p_m(y_k | y_{0:k-1}) = ∫ p_m(y_k | x_k) p(x_k | y_{0:k-1}) dx_k
wherein p_m(y_k | x_k) is the likelihood function of the m-th model with respect to the motion parameter x_k.
7. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 4, wherein in step (5), the state estimation employs a particle filtering algorithm, computing p_m(x_k | y_{0:k}) and p(m | y_{0:k}) from particles.
8. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 7, wherein the specific process of step (5-1) is:
the likelihood of each model is obtained by summing the likelihoods of the individual particles in the particle filter:
p_m(y_k | y_{0:k-1}) ≈ Σ_{i=1}^{N_s} w^i p_m(y_k | x_k^i)
wherein p_m(y_k | y_{0:k-1}) is the likelihood of each model, x_k^i are the particles, N_s is the number of particles, and p_m(y_k | x_k^i) is the likelihood of the i-th particle under the m-th model;
in model pool evolution, the model pool serves as the population, and each model H_m is flattened into a one-dimensional vector θ_m ∈ R^D as an individual; the fitness value of the m-th individual in the population at time k is:
f(θ_m) = Σ_{j=k-l_pre+1}^{k} log p_m(y_j | y_{0:j-1})
wherein l_pre is the length of the most recent time window.
9. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 8, wherein the specific process of step (5-2) is:
the initialized population is {θ_m^G}, m = 1, ..., M, wherein G denotes the iteration round of differential evolution;
the mutated population of round G, {v_m^G}, is obtained from the parent population {θ_m^G} according to a specific mutation strategy:
v_m^G = θ_m^G + F · (θ_pbest^G − θ_m^G) + F · (θ_r1^G − θ̃_r2^G)
wherein θ_pbest^G is randomly selected from the top 100p% individuals of the population, p ∈ [0.05, 0.2]; r1 and r2 are model indices randomly selected from [1, M], mutually unequal and not equal to the current model index m; F is the mutation factor; θ̃_r2^G is randomly selected from the external archive model pool;
the crossover operation generates a crossed population by randomly selecting each component from either the parent population or the mutated population; the crossed individual is a D-dimensional vector u_m^G, whose components are:
u_{m,j}^G = v_{m,j}^G if rand(0,1) ≤ CR or j = j_rand, otherwise θ_{m,j}^G
wherein j_rand is a component index randomly selected from [1, D], and CR is the crossover factor;
in the selection phase of operation, the fitness function f(·) determines whether an individual remains in the new population:
θ_m^{G+1} = u_m^G if f(u_m^G) ≥ f(θ_m^G), otherwise θ_m^G
each model in the model pool is iterated through the mutation, crossover, and selection operations until the change in fitness value is smaller than a threshold, completing the update of the model pool.
10. The adaptive decoding method of an invasive brain-computer interface based on dynamic evolution computation according to claim 9, wherein in the model pool updating process, a strategy of a variant evolution and history model pool is used to determine when and how to update the model pool, specifically as follows:
the variant evolution strategy adaptively selects update times based on model likelihood; the strategy maintains a log-likelihood list L_max-ever, storing the maximum log-likelihood over all candidate models at each time step; whenever a new neural signal enters the decoder, the last three log-likelihoods in the list are averaged as the log-likelihood l_cur of the current time, and the three log-likelihoods preceding them are averaged as the log-likelihood l_pre of the previous time; the model pool is then updated when l_cur < r_up · l_pre, wherein r_up ∈ [0, 1] is an introduced update coefficient;
the historical model pool strategy builds a historical model pool storing the optimal model at each time step, the size of the historical model pool being equal to that of the master model pool; when the differential evolution iterations finish, the method randomly selects models from the historical model pool and replaces part of the models in the evolved pool; a retention proportion parameter r_pre controls how many historical models are retained in the master model pool: M_pre = r_pre · M; the finally updated model pool thus consists of M − M_pre evolved models together with M_pre models randomly selected from the historical model pool, the random indices determining which historical models are retained in the master model pool.
CN202211301943.9A 2022-10-24 2022-10-24 Adaptive decoding method for invasive brain-computer interface based on dynamic evolution calculation Active CN116187152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211301943.9A CN116187152B (en) 2022-10-24 2022-10-24 Adaptive decoding method for invasive brain-computer interface based on dynamic evolution calculation

Publications (2)

Publication Number Publication Date
CN116187152A CN116187152A (en) 2023-05-30
CN116187152B true CN116187152B (en) 2023-08-25

Family

ID=86442983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211301943.9A Active CN116187152B (en) 2022-10-24 2022-10-24 Adaptive decoding method for invasive brain-computer interface based on dynamic evolution calculation

Country Status (1)

Country Link
CN (1) CN116187152B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996054A (en) * 2014-06-05 2014-08-20 中南大学 Electroencephalogram feature selecting and classifying method based on combined differential evaluation
CN112764526A (en) * 2020-12-29 2021-05-07 浙江大学 Self-adaptive brain-computer interface decoding method based on multi-model dynamic integration
CN113589937A (en) * 2021-08-04 2021-11-02 浙江大学 Invasive brain-computer interface decoding method based on twin network kernel regression

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040073414A1 (en) * 2002-06-04 2004-04-15 Brown University Research Foundation Method and system for inferring hand motion from multi-cell recordings in the motor cortex using a kalman filter or a bayesian model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Brain-computer interaction research and standardization practice; Sun Kai et al.; Information Technology and Standardization (No. 04); pp. 10-13 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant