CN116595437B - Training method, device and storage medium for zero calibration transfer learning classification model - Google Patents

Training method, device and storage medium for zero calibration transfer learning classification model

Publication number
CN116595437B
Authority
CN
China
Prior art keywords
marked
domain
data
classification
task
Prior art date
Legal status
Active
Application number
CN202310558791.9A
Other languages
Chinese (zh)
Other versions
CN116595437A (en)
Inventor
王佳星
王卫群
侯增广
王一涵
苏健强
Current Assignee
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202310558791.9A
Publication of CN116595437A
Application granted
Publication of CN116595437B
Status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a training method, a device and a storage medium for a zero calibration transfer learning classification model. The method comprises: acquiring electroencephalogram data marked with domain classification labels and task classification labels; performing feature extraction on the marked electroencephalogram data with the feature extraction layer of the model; predicting the corresponding prediction task classification and prediction domain classification with the motor imagery classification layer and the domain discrimination layer of the model, respectively; determining the total loss function of the classification model based on the task state data of the source domain and the corresponding prediction task classifications, and on the resting state data of the target domain and the source domain and the corresponding prediction domain classifications; and obtaining the zero calibration transfer learning classification model when the total loss function converges or reaches a preset threshold. With the training method provided by the invention, the zero calibration transfer learning classification model requires no prior calibration, balances classification accuracy with subject specificity, and improves the accuracy of electroencephalogram data classification.

Description

Training method, device and storage medium for zero calibration transfer learning classification model
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a training method, a device and a storage medium for a zero calibration transfer learning classification model.
Background
A brain-computer interface can directly convert brain intent into control signals, establishing communication by analysing brain activity. In the related art, attempts have been made to apply brain-computer interface technology to the rehabilitation of patients with motor dysfunction, with good clinical results. Research shows that although a patient with motor dysfunction cannot move autonomously, motor imagery electroencephalogram signals can still be acquired from the motor cortex area corresponding to the affected body part, thereby activating brain function and promoting neural remodelling. Brain-computer interface technology can therefore be combined with physical equipment: the patient's motor imagery electroencephalogram signals are collected, the patient's motor intention is decoded, and the physical equipment assists the patient in performing the corresponding movement, so that neural plasticity is triggered and, through continuous training, autonomous movement is eventually restored.
Owing to differences in individual physiological structure and psychological state, electroencephalogram signals of different subjects often differ considerably even under the same motor imagery task, and the signals are highly nonlinear, highly non-stationary and prone to artifacts, which makes classification difficult. To ensure classification accuracy, researchers have had to run many sessions on a target subject in order to build a personalised model, and such long electroencephalogram experiments impose a heavy burden on paralysed patients. Although a personalised model is accurate, it ignores the features shared across subjects, wastes data, and has poor practicability. A cross-subject model can make full use of the data, but when uncalibrated data are used directly the classification performance is poor, and several calibration sessions are still needed to collect labelled data from each new subject, which is time-consuming and unfriendly to users. How to extract electroencephalogram features common to different subjects so as to reduce inter-subject differences, and, in large-scale applications, to guarantee model accuracy without requiring users to perform additional electroencephalogram task experiments, is therefore a technical problem to be solved in the industry.
Disclosure of Invention
In view of the problems existing in the prior art, the invention provides a training method, a device and a storage medium for a zero calibration transfer learning classification model.
In a first aspect, the present invention provides a training method for a zero calibration transfer learning classification model, including:
acquiring electroencephalogram data marked with domain classification labels and task classification labels as marked electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of subjects perform different motor imagery tasks; the domain classification label is used for indicating whether the subject corresponding to the electroencephalogram data belongs to the target domain or the source domain; the task classification label is used for representing the motor imagery task corresponding to the task state data in the electroencephalogram data;
performing feature extraction on the marked electroencephalogram data based on the feature extraction layer in the zero calibration transfer learning classification model;
classifying the marked electroencephalogram data according to the extracted features using the motor imagery classification layer and the domain discrimination layer in the zero calibration transfer learning classification model, respectively, and determining the prediction task classification and the prediction domain classification corresponding to the marked electroencephalogram data;
determining a first loss function based on the task state data of the subjects marked as the source domain in the marked electroencephalogram data and the corresponding prediction task classifications;
determining a second loss function based on the resting state data marked as the source domain in the marked electroencephalogram data and its corresponding prediction domain classification, and the resting state data marked as the target domain in the marked electroencephalogram data and its corresponding prediction domain classification;
obtaining the zero calibration transfer learning classification model when the total loss function converges or reaches a preset threshold; the total loss function is determined based on the first loss function and the second loss function.
Optionally, after acquiring the electroencephalogram data marked with the domain classification labels and the task classification labels, the method comprises:
determining, based on Riemann alignment, the alignment result of the resting state data of the subject marked as the target domain in the marked electroencephalogram data as a first alignment result;
determining, based on Riemann alignment, the alignment result of the resting state data of each subject marked as the source domain in the marked electroencephalogram data as a second alignment result;
determining, based on Euclidean alignment, the alignment result of the task state data of each subject marked as the source domain in the marked electroencephalogram data as a third alignment result;
updating the marked electroencephalogram data based on the first alignment result, the second alignment result and the third alignment result.
Optionally, before the feature extraction layer in the zero calibration transfer learning classification model performs feature extraction on the marked electroencephalogram data, the method comprises:
determining, based on a multi-kernel maximum mean discrepancy algorithm, the similarity between first resting state data and second resting state data as the similarity between each subject of the source domain and the subject of the target domain; the first resting state data are the resting state data of each subject marked as the source domain in the updated marked electroencephalogram data; the second resting state data are the resting state data of the subject marked as the target domain;
sorting the source-domain subjects by similarity from large to small, screening out the last K subjects marked as the source domain, and updating the subjects marked as the source domain in the marked electroencephalogram data, where K is a positive integer.
Optionally, acquiring the electroencephalogram data marked with the domain classification labels and the task classification labels comprises:
acquiring the electroencephalogram data generated when the plurality of subjects perform different motor imagery tasks;
extracting the electroencephalogram data from 0.5 to 3.5 seconds after the motor imagery task stimulus appears, marking it with the task classification label corresponding to the motor imagery task, and taking it as task state data;
extracting the electroencephalogram data from 4.25 to 5.25 seconds after the motor imagery task stimulus appears as resting state data;
sliding a window over the task state data with a preset step length to obtain multiple trial results of task state data corresponding to the same motor imagery task;
extending the duration of the resting state data by copying, so that it is aligned with the duration of a single trial result of the task state data.
Optionally, after acquiring the electroencephalogram data generated when the plurality of subjects perform different motor imagery tasks, the method comprises:
determining, based on leave-one-out cross-validation, a target domain formed by any one of the plurality of subjects and a source domain formed by the remaining subjects, and marking the electroencephalogram data of each subject with the corresponding domain classification label; the domain classification label is either the target domain or the source domain.
Optionally, the zero calibration transfer learning classification model is built on a deep adversarial neural network.
Optionally, the first loss function and the second loss function are both determined using a cross entropy loss function.
In a second aspect, the present invention provides a training device for a zero calibration transfer learning classification model, including:
the acquisition module is used for acquiring electroencephalogram data marked with domain classification labels and task classification labels as marked electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of subjects perform different motor imagery tasks; the domain classification label is used for indicating whether the subject corresponding to the electroencephalogram data belongs to the target domain or the source domain; the task classification label is used for representing the motor imagery task corresponding to the task state data in the electroencephalogram data;
the feature extraction module is used for performing feature extraction on the marked electroencephalogram data based on the feature extraction layer in the zero calibration transfer learning classification model;
the classification module is used for classifying the marked electroencephalogram data according to the extracted features using the motor imagery classification layer and the domain discrimination layer in the zero calibration transfer learning classification model, respectively, and predicting the task classification and domain classification corresponding to the marked electroencephalogram data;
the first loss determination module is used for determining a first loss function based on the task state data of the subjects marked as the source domain in the marked electroencephalogram data and the corresponding predicted task classifications;
the second loss determination module is used for determining a second loss function based on the resting state data marked as the source domain in the marked electroencephalogram data and its corresponding predicted domain classification, and the resting state data marked as the target domain in the marked electroencephalogram data and its corresponding predicted domain classification;
the training module is used for obtaining the zero calibration transfer learning classification model when the total loss function converges or reaches a preset threshold; the total loss function is determined based on the first loss function and the second loss function.
In a third aspect, the present invention also provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the training method of the zero calibration transfer learning classification model according to the first aspect above when executing the program.
In a fourth aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a training method of the zero calibration transfer learning classification model according to the first aspect described above.
In a fifth aspect, the present invention also provides a computer program product comprising a computer program which when executed by a processor implements a training method of the zero calibration transfer learning classification model as described in the first aspect above.
According to the training method, device and storage medium for the zero calibration transfer learning classification model, electroencephalogram data marked with domain classification labels and task classification labels are acquired, the zero calibration transfer learning classification model extracts the features common to the electroencephalogram data, the prediction task classification and prediction domain classification corresponding to the electroencephalogram data are determined, the total loss function of the zero calibration transfer learning classification model is constructed, and the trained zero calibration transfer learning classification model is obtained when the total loss function converges or reaches a preset threshold. The zero calibration transfer learning classification model needs no prior calibration, balances classification accuracy with subject specificity, and improves the accuracy of electroencephalogram data classification.
Drawings
In order to illustrate the invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. It is apparent that the following drawings show only some embodiments of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a training method of a zero calibration transfer learning classification model provided by the invention;
FIG. 2 is a schematic diagram of an implementation process of the zero calibration transfer learning classification model provided by the invention;
FIG. 3 is a schematic structural diagram of a training device for zero calibration transfer learning classification model provided by the invention;
fig. 4 is a schematic diagram of an entity structure of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Brain-computer interfaces are used to acquire the electroencephalogram data (electroencephalogram signals) of patients with motor dysfunction, to activate their brain function and promote neural remodelling, and related rehabilitation research has been carried out. Patients with motor dysfunction caused by nervous system diseases such as stroke and spinal cord injury cannot move autonomously, and for this population there has been no suitable rehabilitation treatment scheme for recovering plasticity. Stroke, one of the most common causes, often produces a variety of impairments, including motor, cognitive and emotional deficits. In recent years, brain-computer interfaces based on the acquisition of patients' electroencephalogram signals have shown good results in the motor rehabilitation of stroke patients. Brain-computer interface technology can thus be used as a tool to enhance patients' neuroplasticity and neurorehabilitation outcomes. Studies have shown that even severely impaired stroke patients can still imagine the movement of a paralysed hand, even though no actual movement occurs, and such imagination can be exploited.
Electroencephalogram signals are non-stationary and easily affected by external noise, and the large differences between the signals of different subjects make the classification task very difficult. Previous studies have therefore focused only on improving classification accuracy. To ensure classification accuracy, researchers build a personalised model for each subject, which requires many experiments on the target subject, and long electroencephalogram experiments impose a heavy burden on paralysed patients. Although a personalised model is accurate, it ignores the electroencephalogram features common to different subjects, which wastes data; the small number of samples makes matters worse, and the practicability is poor.
Besides personalised models, research has also focused on cross-subject models. Currently, many researchers attempt to apply transfer learning to electroencephalogram data classification models: transfer learning (Transfer Learning, TL) improves the learning of a new task by transferring knowledge from a related task that has already been learned. For a specific task, the electroencephalogram data of similar subjects can be used for training, which improves usability. However, transfer learning still requires labelled electroencephalogram data from the target subject, and individual differences mean that a long calibration effort is needed to obtain a subject-specific model.
To solve the problems of poor practicability and long calibration time, the invention provides a zero calibration transfer learning classification model. Zero calibration means that, in large-scale applications, the user does not need to perform additional electroencephalogram task experiments to collect data with task classification labels; a user-specific model can be obtained simply by remaining in a resting state while resting state data are collected. The characteristics of each subject reflected in the resting state data are exploited: the resting state data are used to measure inter-subject differences and to perform transfer learning, the task state data of other subjects are used for classification training, and a model that balances classification accuracy with subject specificity is obtained by adversarial training.
Fig. 1 is a flow chart of a training method of a zero calibration transfer learning classification model provided by the invention, and as shown in fig. 1, the method includes:
step 101, acquiring electroencephalogram data of a marked domain classification tag and a task classification tag as marked electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of testees execute different motor imagery tasks, and the domain classification label is used for indicating that the testees corresponding to the electroencephalogram data belong to a target domain or a source domain; the task classification label is used for representing a motor imagery task corresponding to task state data in the electroencephalogram data;
Specifically, electrodes are placed on different areas of the heads of a plurality of subjects, and the electroencephalogram data (electroencephalogram signals) generated by the subjects when performing different motor imagery tasks are acquired through a brain-computer interface. The motor imagery tasks include imagining movements of the left hand, the right hand, the feet, the tongue, the left little finger and so on, and each motor imagery task corresponds to a task classification label. The electroencephalogram data comprise resting state data and task state data. The resting state data mainly describe the fluctuation of a subject's electroencephalogram signal during a period after the motor imagery task has been completed, while the task state data mainly describe the fluctuation of the subject's electroencephalogram signal during a period immediately after the motor imagery task cue is received. For any motor imagery task, the acquisition times of the resting state data and the task state data of the same subject are adjacent; the resting state data may be collected either before or after the task state data.
In addition, each piece of electroencephalogram data must be marked with a domain classification label, which mainly distinguishes the new subject from the original subjects. In the training stage, the electroencephalogram data of all subjects can be partitioned: one subject is designated as the target domain and the remaining subjects as the source domain, different subjects are selected in turn as the target domain (with the corresponding source domain) in a round-robin manner, and the electroencephalogram data are marked with domain classification labels in the same way.
Step 102, extracting features of the marked electroencephalogram data based on the feature extraction layer in the zero calibration transfer learning classification model;
step 103, classifying the marked electroencephalogram data according to the extracted features by utilizing a motor imagery classification layer and a domain discrimination layer in the zero calibration transfer learning classification model respectively, and determining a prediction task classification and a prediction domain classification corresponding to the marked electroencephalogram data;
the marked brain electrical data is input into a constructed zero calibration transfer learning classification model, and the zero calibration transfer learning classification model comprises a feature extraction layer, a motor imagery classification layer and a domain identification layer, wherein the feature extraction layer, the motor imagery classification layer and the domain identification layer can be respectively realized through a classifier. And the feature extraction layer and the domain identification layer form an countermeasure network, and migration is realized by making the domain identification layer unable to distinguish the source domain sample and the target domain sample. The marked electroencephalogram data is used for extracting common characteristics of electroencephalogram data of each tested person through a characteristic extraction layer, the extracted characteristics are input into a motor imagery classification layer and a domain identification layer simultaneously, the motor imagery classification layer predicts the prediction task classification corresponding to the electroencephalogram data, namely the motor imagery classification corresponding to the electroencephalogram data, and the domain identification layer predicts the prediction domain classification corresponding to the electroencephalogram data, namely the target domain or the source domain corresponding to the electroencephalogram data.
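As a purely illustrative sketch of this three-part structure (not the patented network architecture itself), the shared feature extraction layer and the two classification heads could be organised as follows; the layer choices and dimensions (n_channels, hidden_dim, etc.) are hypothetical placeholders.

```python
# Illustrative sketch only: a shared feature extraction layer feeding a motor
# imagery classification head and a domain discrimination head. All layer types
# and sizes are assumptions, not the patented design.
import torch
import torch.nn as nn

class ZeroCalibrationModel(nn.Module):
    def __init__(self, n_channels=22, n_tasks=4, hidden_dim=64):
        super().__init__()
        # Shared feature extraction layer (a simple temporal + spatial CNN here)
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12)),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, hidden_dim)),
            nn.Flatten(),
        )
        feat_dim = 16 * hidden_dim
        # Motor imagery classification layer: predicts the task class
        self.task_classifier = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ELU(), nn.Linear(hidden_dim, n_tasks))
        # Domain discrimination layer: predicts source (0) vs. target (1)
        self.domain_discriminator = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ELU(), nn.Linear(hidden_dim, 2))

    def forward(self, x):
        # x: (batch, 1, n_channels, n_samples) aligned EEG trials
        feats = self.feature_extractor(x)
        return self.task_classifier(feats), self.domain_discriminator(feats)
```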
Step 104, determining a first loss function based on task state data of a tested person marked as a source domain in the marked electroencephalogram data and a corresponding prediction task classification;
step 105, determining a second loss function based on the rest state data marked as the source domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof, and the rest state data marked as the target domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof;
after the prediction result corresponding to each piece of marked electroencephalogram data is obtained through the steps, determining a first loss function according to task state data of a tested person marked as a source domain in the marked electroencephalogram data and prediction task classification corresponding to the task state data of the tested person in the source domain. The source domain testees have a plurality of first loss functions, and the first loss functions correspond to the loss functions determined by the testees.
From the resting state data of the subjects marked as the source domain in the marked electroencephalogram data and the prediction domain classifications corresponding to that resting state data, a loss term is determined for the resting state data of each source-domain subject; since the source domain contains several subjects, the number of such terms equals the number of source-domain subjects.
From the resting state data of the subject marked as the target domain in the marked electroencephalogram data and the prediction domain classification corresponding to that resting state data, a loss term is determined for the resting state data of the target-domain subject; since the target domain usually contains only one subject, there is only one such term.
The second loss function is determined from the loss terms corresponding to the resting state data of the source-domain subjects and the loss term corresponding to the resting state data of the target-domain subject.
Step 106, obtaining the zero calibration transfer learning classification model under the condition that the total loss function meets convergence or reaches a preset threshold value; the total loss function is determined based on the first loss function and the second loss function.
The total loss function of the zero calibration transfer learning classification model is determined from the first loss function and the second loss function. The model is trained on the marked electroencephalogram data, and after each training pass its parameters are adjusted by back-propagation, until the total loss function converges or reaches the preset threshold and the trained zero calibration transfer learning classification model is obtained, as sketched below. The resting state data are used to measure inter-subject variability and to perform transfer learning, the task state data of the other subjects are used for classification training, and adversarial training yields a model that balances classification accuracy with subject specificity, so that decoding accuracy is maintained or even improved without any calibration work. The trained model can then classify more accurately the motor imagery task of a new subject from the received electroencephalogram data, which for a new subject consist mainly of task state data. Finally, the trained zero calibration transfer model and the electroencephalogram data (electroencephalogram signals) of a patient with motor dysfunction acquired through the brain-computer interface are used to activate the patient's brain function and promote neural remodelling for rehabilitation treatment.
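A minimal sketch of this stopping rule (train until the total loss converges or falls below a preset threshold); the helper compute_total_loss, the optimizer and the data loader are assumed names for illustration, not part of the patent.

```python
# Minimal sketch of the training-loop stopping criterion: iterate until the
# total loss converges or drops below a preset threshold. compute_total_loss()
# and the loader are assumed helpers, not defined by the patent.
def train_until_converged(model, loader, optimizer, compute_total_loss,
                          max_epochs=100, loss_threshold=0.05, tol=1e-4):
    prev_loss = float("inf")
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for batch in loader:
            optimizer.zero_grad()
            loss = compute_total_loss(model, batch)  # first + second loss terms
            loss.backward()                          # back-propagate through all layers
            optimizer.step()
            epoch_loss += loss.item()
        epoch_loss /= max(len(loader), 1)
        # Stop when the total loss has converged or reached the preset threshold
        if abs(prev_loss - epoch_loss) < tol or epoch_loss < loss_threshold:
            break
        prev_loss = epoch_loss
    return model
```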
According to the training method of the zero calibration transfer learning classification model, electroencephalogram data marked with domain classification labels and task classification labels are acquired, the model extracts the features common to the electroencephalogram data, the prediction task classification and prediction domain classification corresponding to the electroencephalogram data are determined, the total loss function of the model is determined (constructed), and the trained zero calibration transfer learning classification model is obtained when the total loss function converges or reaches a preset threshold. The model needs no prior calibration, balances classification accuracy with subject specificity, and improves the accuracy of electroencephalogram data classification.
Optionally, after acquiring the electroencephalogram data marked with the domain classification labels and the task classification labels, the method comprises:
determining, based on Riemann alignment, the alignment result of the resting state data of the subject marked as the target domain in the marked electroencephalogram data as a first alignment result;
determining, based on Riemann alignment, the alignment result of the resting state data of each subject marked as the source domain in the marked electroencephalogram data as a second alignment result;
determining, based on Euclidean alignment, the alignment result of the task state data of each subject marked as the source domain in the marked electroencephalogram data as a third alignment result;
updating the marked electroencephalogram data based on the first alignment result, the second alignment result and the third alignment result.
Specifically, data alignment is used to reduce individual differences and increase the similarity between the electroencephalogram data of different subjects, so that cross-subject transfer learning can be realised and a more general model established. After the marked electroencephalogram data are obtained, the resting state data and the task state data are aligned by Riemann alignment and Euclidean alignment, respectively, in order to reduce the differences between subjects.
In the present invention, the Riemann reference matrix is computed without touching the task state data of the target-domain subject, and is then used to align the task state data of the subject marked as the target domain, so that the network performance can be better exploited.
The Riemann reference matrix $\bar{R}^{r}_{t,k_1}$ of the $k_1$-th subject of the target domain $t$ can be expressed as:

$$\bar{R}^{r}_{t,k_1}=\arg\min_{R}\sum_{i=1}^{m}\delta_t^{2}\!\left(R,\;X^{r}_{t,k_1,i}\left(X^{r}_{t,k_1,i}\right)^{\mathsf T}\right),\qquad \delta_t\!\left(R,C\right)=\left\|\log\!\left(R^{-1/2}\,C\,R^{-1/2}\right)\right\|_{F},$$

where the Riemann mean is computed iteratively, with the arithmetic mean of the trial covariance matrices as the initial value of the Riemann alignment matrix. Here $\delta_t^{2}(\cdot)$ denotes the square of the Riemann distance, $X^{r}_{t,k_1,i}$ denotes the resting state data in the $i$-th trial of the $k_1$-th subject in the target domain $t$, $\left(X^{r}_{t,k_1,i}\right)^{\mathsf T}$ denotes its transpose, $m$ denotes the total number of trials of the $k_1$-th subject in the target domain $t$, and $\|\cdot\|_{F}$ denotes the Frobenius norm. The aligned resting state data

$$\tilde{X}^{r}_{t,k_1,i}=\left(\bar{R}^{r}_{t,k_1}\right)^{-1/2}X^{r}_{t,k_1,i}$$

is the first alignment result described above, where $\left(\bar{R}^{r}_{t,k_1}\right)^{-1/2}$ denotes the reference matrix raised to the power of negative one half.
In addition, the task state data of the subject marked as the target domain are aligned using the same Riemann reference matrix, which can be expressed as:

$$\tilde{X}^{e}_{t,k_1,i}=\left(\bar{R}^{r}_{t,k_1}\right)^{-1/2}X^{e}_{t,k_1,i},$$

where $X^{e}_{t,k_1,i}$ denotes the task state data in the $i$-th trial of the $k_1$-th subject in the target domain $t$, $\tilde{X}^{e}_{t,k_1,i}$ denotes its Riemann alignment result, and $\left(\bar{R}^{r}_{t,k_1}\right)^{-1/2}$ denotes the Riemann reference matrix of the $k_1$-th subject in the target domain $t$ raised to the power of negative one half.
The Riemann reference matrix $\bar{R}^{r}_{s,k_2}$ of the $k_2$-th subject of the source domain $s$ can be expressed as:

$$\bar{R}^{r}_{s,k_2}=\arg\min_{R}\sum_{j=1}^{n}\delta_s^{2}\!\left(R,\;X^{r}_{s,k_2,j}\left(X^{r}_{s,k_2,j}\right)^{\mathsf T}\right),$$

computed iteratively in the same way, with the arithmetic mean of the trial covariance matrices as the initial value of the Riemann alignment matrix. Here $\delta_s^{2}(\cdot)$ denotes the square of the Riemann distance, $X^{r}_{s,k_2,j}$ denotes the resting state data in the $j$-th trial of the $k_2$-th subject in the source domain $s$, $\left(X^{r}_{s,k_2,j}\right)^{\mathsf T}$ denotes its transpose, $n$ denotes the total number of trials of the $k_2$-th subject in the source domain $s$, and $\|\cdot\|_{F}$ denotes the Frobenius norm. The aligned resting state data

$$\tilde{X}^{r}_{s,k_2,j}=\left(\bar{R}^{r}_{s,k_2}\right)^{-1/2}X^{r}_{s,k_2,j}$$

is the second alignment result described above.
Based on Euclidean alignment, the alignment result of the task state data of each subject marked as the source domain in the marked electroencephalogram data is determined as the third alignment result, which can be expressed as:

$$\bar{R}^{e}_{s,k_2}=\frac{1}{n}\sum_{j=1}^{n}X^{e}_{s,k_2,j}\left(X^{e}_{s,k_2,j}\right)^{\mathsf T},\qquad \tilde{X}^{e}_{s,k_2,j}=\left(\bar{R}^{e}_{s,k_2}\right)^{-1/2}X^{e}_{s,k_2,j},$$

where $\bar{R}^{e}_{s,k_2}$ denotes the Euclidean reference matrix of the $k_2$-th subject in the source domain $s$, $n$ denotes the total number of trials of the $k_2$-th subject in the source domain $s$, $X^{e}_{s,k_2,j}$ denotes the task state data in the $j$-th trial of the $k_2$-th subject in the source domain $s$, $\left(X^{e}_{s,k_2,j}\right)^{\mathsf T}$ denotes its transpose, $\left(\bar{R}^{e}_{s,k_2}\right)^{-1/2}$ denotes the reference matrix raised to the power of negative one half, and $\tilde{X}^{e}_{s,k_2,j}$ denotes the Euclidean alignment result of $X^{e}_{s,k_2,j}$, i.e. the third alignment result.
The marked electroencephalogram data are then updated based on these alignment results.
After Riemann alignment and Euclidean alignment, the mean covariance matrix of the electroencephalogram data of each subject in the source domain, and that of the subject in the target domain, are mapped to the identity matrix; that is, the electroencephalogram data of the different subjects in the source domain and the target domain are transformed to the same reference, which reduces the differences between subjects.
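For illustration only, a sketch of the two alignment operations under the assumption that trials are stored as (channels × time) NumPy arrays and that the pyriemann package is available for the Riemann (geometric) mean; the variable names are hypothetical.

```python
# Sketch of Riemann alignment (resting state) and Euclidean alignment (task
# state). Trials are (channels, time) arrays; pyriemann is assumed available.
import numpy as np
from scipy.linalg import fractional_matrix_power
from pyriemann.utils.mean import mean_riemann

def riemann_align(trials):
    covs = np.array([x @ x.T / x.shape[1] for x in trials])  # trial covariances
    r_ref = mean_riemann(covs)                                # Riemann reference matrix
    w = fractional_matrix_power(r_ref, -0.5)                  # reference matrix ^ (-1/2)
    return np.array([w @ x for x in trials]), r_ref

def euclidean_align(trials):
    r_ref = np.mean([x @ x.T / x.shape[1] for x in trials], axis=0)  # Euclidean reference
    w = fractional_matrix_power(r_ref, -0.5)
    return np.array([w @ x for x in trials]), r_ref

# rest_t / task_t: target-domain resting and task trials (hypothetical arrays);
# rest_s / task_s: one source-domain subject's resting and task trials.
# aligned_rest_t, r_t = riemann_align(rest_t)                       # first alignment result
# aligned_task_t = np.array([fractional_matrix_power(r_t, -0.5) @ x for x in task_t])
# aligned_rest_s, _ = riemann_align(rest_s)                         # second alignment result
# aligned_task_s, _ = euclidean_align(task_s)                       # third alignment result
```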
Optionally, before the feature extraction layer in the zero calibration transfer learning classification model performs feature extraction on the marked electroencephalogram data, the method comprises:
determining, based on a multi-kernel maximum mean discrepancy algorithm, the similarity between first resting state data and second resting state data as the similarity between each subject of the source domain and the subject of the target domain; the first resting state data are the resting state data of each subject marked as the source domain in the updated marked electroencephalogram data; the second resting state data are the resting state data of the subject marked as the target domain;
sorting the source-domain subjects by similarity from large to small, screening out the last K subjects marked as the source domain, and updating the subjects marked as the source domain in the marked electroencephalogram data, where K is a positive integer.
Specifically, after the marked electroencephalogram data have been Riemann-aligned and Euclidean-aligned as above, the differences between subjects are reduced. To further reduce, during transfer learning, the differences between the source-domain subjects and the target-domain subject, and at the same time to avoid negative transfer, subject screening is used to remove those source-domain subjects that differ too much from the target-domain subject. Specifically, the multi-kernel maximum mean discrepancy (Multiple Kernel Maximum Mean Discrepancy, MK-MMD) algorithm is used to measure the differences between source-domain and target-domain subjects for screening. The multi-kernel maximum mean discrepancy measures the difference in the distribution of the electroencephalogram data between subjects, so that source-domain subjects more similar to the target-domain subject can be selected for training. The multi-kernel maximum mean discrepancy algorithm mainly uses the resting state data of the target-domain subject and the resting state data of the source-domain subjects.
Based on the resting state data of each subject marked as the source domain and the resting state data of the subject marked as the target domain in the updated marked electroencephalogram data, the similarity between each source-domain subject and the target-domain subject is determined with the multi-kernel maximum mean discrepancy algorithm, which can be expressed as:

$$\mathrm{MMD}^{2}\!\left[\mathcal{F},\tilde{X}^{r}_{s,k_2},\tilde{X}^{r}_{t,k_1}\right]=\left\|\frac{1}{n}\sum_{j=1}^{n}\Phi\!\left(\tilde{X}^{r}_{s,k_2,j}\right)-\frac{1}{m}\sum_{i=1}^{m}\Phi\!\left(\tilde{X}^{r}_{t,k_1,i}\right)\right\|_{\mathcal{H}}^{2},$$

where $\mathrm{MMD}[\cdot]$ denotes the multi-kernel maximum mean discrepancy, $\mathcal{F}$ denotes the function domain, $\tilde{X}^{r}_{s,k_2,j}$ denotes the Riemann alignment result of the resting state data in the $j$-th trial of the $k_2$-th subject in the source domain $s$, $\tilde{X}^{r}_{t,k_1,i}$ denotes the Riemann alignment result of the resting state data in the $i$-th trial of the $k_1$-th subject in the target domain $t$, $\|\cdot\|_{\mathcal{H}}$ denotes the distance in the reproducing kernel Hilbert space, $m$ denotes the total number of trials of the $k_1$-th subject in the target domain $t$, $n$ denotes the total number of trials of the $k_2$-th subject in the source domain $s$, and $\Phi(\cdot)$ is a function in the function domain $\mathcal{F}$ that maps the aligned data into the reproducing kernel Hilbert space.

In the actual calculation the mapping $\Phi(\cdot)$ is never evaluated explicitly; the kernel trick is used instead, which can be expressed as:

$$\mathrm{MMD}^{2}=\operatorname{tr}\!\left(\begin{bmatrix}K_{sr,sr}&K_{sr,tr}\\K_{tr,sr}&K_{tr,tr}\end{bmatrix}M\right),\qquad K(x,y)=\exp\!\left(-\gamma\left\|x-y\right\|_{F}^{2}\right),\qquad M_{ij}=\begin{cases}\dfrac{1}{n^{2}}, & x_i,x_j\in\tilde{X}^{r}_{s,k_2},\\[1ex]\dfrac{1}{m^{2}}, & x_i,x_j\in\tilde{X}^{r}_{t,k_1},\\[1ex]-\dfrac{1}{nm}, & \text{otherwise},\end{cases}$$

where $K_{sr,sr}$, $K_{sr,tr}$, $K_{tr,sr}$ and $K_{tr,tr}$ are the kernel matrices computed between the source-domain and target-domain resting state samples, $\gamma$ is a hyper-parameter, i.e. the bandwidth of the kernel function, $\tilde{X}^{r}_{s,k_2}$ denotes the set of resting state data over all trials of the $k_2$-th subject in the source domain $s$, $\tilde{X}^{r}_{t,k_1}$ denotes the set of resting state data over all trials of the $k_1$-th subject in the target domain $t$, and $M_{ij}$ denotes the corresponding coefficient.
The similarity between each source-domain subject and the target-domain subject, i.e. the similarity between the resting state data of each source-domain subject and the resting state data of the subject marked as the target domain, is determined with the above multi-kernel maximum mean discrepancy formula. The source-domain subjects are then sorted by similarity from large to small and the last K in the ranking are screened out; that is, the K source-domain subjects with the lowest similarity are not used as training samples for the zero calibration transfer learning classification model, and their domain classification labels in the marked electroencephalogram data are deleted, where K is a positive integer. For example, if the source domain originally contains 10 subjects and the 2 source-domain subjects with the lowest similarity are screened out, the domain classification labels of these two subjects are set to null, and 8 source-domain subjects remain as training samples for the zero calibration transfer learning classification model.
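A sketch, under assumed array shapes, of the MK-MMD similarity measure and the screening of the K least similar source-domain subjects; the Gaussian bandwidth set and the flattening of trials into vectors are illustrative choices, not specified by the patent.

```python
# Sketch of MK-MMD-based screening of source subjects: compute the discrepancy
# between each source subject's resting data and the target subject's resting
# data, then drop the K least similar subjects. Shapes and bandwidths are assumptions.
import numpy as np

def mk_mmd(xs, xt, gammas=(0.125, 0.25, 0.5, 1.0, 2.0)):
    # xs: (n, d) flattened source resting trials, xt: (m, d) target resting trials
    z = np.vstack([xs, xt])
    d2 = np.square(z[:, None, :] - z[None, :, :]).sum(-1)   # pairwise squared distances
    k = sum(np.exp(-g * d2) for g in gammas)                 # multi-kernel Gram matrix
    n = len(xs)
    k_ss, k_tt, k_st = k[:n, :n], k[n:, n:], k[:n, n:]
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()     # squared MMD estimate

def screen_source_subjects(rest_by_subject, rest_target, drop_k=2):
    # rest_by_subject: {subject_id: (n_trials, d) aligned resting data}
    mmd = {sid: mk_mmd(x, rest_target) for sid, x in rest_by_subject.items()}
    ranked = sorted(mmd, key=mmd.get)               # smaller MMD = more similar, ranked first
    return ranked[:-drop_k] if drop_k else ranked   # keep all but the K least similar
```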
Optionally, acquiring the electroencephalogram data marked with the domain classification labels and the task classification labels comprises:
acquiring the electroencephalogram data generated when the plurality of subjects perform different motor imagery tasks;
extracting the electroencephalogram data from 0.5 to 3.5 seconds after the motor imagery task stimulus appears, marking it with the task classification label corresponding to the motor imagery task, and taking it as task state data;
extracting the electroencephalogram data from 4.25 to 5.25 seconds after the motor imagery task stimulus appears as resting state data;
sliding a window over the task state data with a preset step length to obtain multiple trial results of task state data corresponding to the same motor imagery task;
extending the duration of the resting state data by copying, so that it is aligned with the duration of a single trial result of the task state data.
Specifically, the electroencephalogram data generated when the plurality of subjects perform different motor imagery tasks are acquired. For example, each subject is required to perform four different motor imagery tasks, i.e. imagining movement of the left hand, the right hand, both feet and the tongue, and the electroencephalogram (EEG) of 22 EEG channels and the electrooculogram (EOG) of 3 EOG channels are recorded at a sampling rate of 250 Hz, with 72 trials per motor imagery class.
In the signal processing step, a band-pass filter with a passband of [8, 30] Hz (6 dB cut-off) is used to remove muscle artifacts, mains interference and DC drift. The EEG signal between [0.5, 3.5] seconds after the stimulus appears is then extracted as task state data, and the EEG signal between [4.25, 5.25] seconds after the stimulus appears is extracted as resting state data.
To make effective use of the limited experimental data, an overlapping time-slicing strategy is used to expand the samples: the extracted [0.5, 3.5] second electroencephalogram data are sliced with a sliding window of length 2 seconds and a step of 0.5 seconds, so that a single trial is expanded threefold; the label of each time slice is the same as the original label, and the final prediction label is obtained by voting over the three slices. Meanwhile, so that the 2-second task state data and the 1-second resting state data can share the feature extraction layer (feature extractor) of the zero calibration transfer learning classification model, the resting state data are copied to a length of 2 seconds. In this way, electroencephalogram data with more trials, i.e. more task state and resting state samples, are obtained for training the zero calibration transfer learning classification model.
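The sample-expansion step just described can be sketched as follows, using the timing figures from the example above (250 Hz, 2-second windows, 0.5-second step, 1-second resting epoch tiled to 2 seconds); the function names are hypothetical.

```python
# Sketch of the overlapping time-slicing strategy and the resting-state
# duplication described above. Epochs are (channels, samples) arrays.
import numpy as np

def slide_task_epoch(task_epoch, fs=250, win_s=2.0, step_s=0.5):
    # task_epoch: EEG from 0.5-3.5 s after the cue -> three 2 s windows
    win, step = int(win_s * fs), int(step_s * fs)
    n_samples = task_epoch.shape[1]
    starts = range(0, n_samples - win + 1, step)
    return np.stack([task_epoch[:, s:s + win] for s in starts])

def expand_rest_epoch(rest_epoch, repeats=2):
    # rest_epoch: EEG from 4.25-5.25 s after the cue; tile 1 s -> 2 s so that it
    # can share the feature extraction layer with the 2 s task windows
    return np.tile(rest_epoch, (1, repeats))
```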
In addition, public data sets commonly used in the field can be employed, such as BCI Competition III Dataset IVa and BCI Competition IV Datasets 2a and 2b. In these data sets, electrodes in different numbers are placed over the electroencephalogram signal areas of the subjects, the subjects perform various motor imagery tasks according to different experimental paradigms, and the EEG (electroencephalogram data) corresponding to each motor imagery task is acquired from each subject through the different electrodes (channels); the experimental paradigm defines the frequency, duration and interval of the motor imagery task stimuli observed by the subject. The raw data are pre-processed by filtering, segmentation and so on, and more samples are obtained through the sliding window.
Optionally, after acquiring the electroencephalogram data generated when the plurality of subjects perform different motor imagery tasks, the method comprises:
determining, based on leave-one-out cross-validation, a target domain formed by any one of the plurality of subjects and a source domain formed by the remaining subjects, and marking the electroencephalogram data of each subject with the corresponding domain classification label; the domain classification label is either the target domain or the source domain.
Specifically, leave-one-subject-out cross-validation (LOSO) selects one subject in turn as the target subject, i.e. the domain classification label of that subject is marked as the target domain, and the remaining subjects serve as source-domain subjects, until every subject has been used as the test set once. After a preset number of training iterations (for example 100), or when the total loss function of the zero calibration transfer learning classification model is determined to have converged or reached the preset threshold, the optimal model for each subject is obtained.
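A minimal sketch of the leave-one-subject-out split described above; the subject identifiers are placeholders.

```python
# Sketch of leave-one-subject-out cross-validation: each subject is the target
# domain exactly once, the remaining subjects form the source domain.
def loso_splits(subject_ids):
    for target in subject_ids:
        source = [s for s in subject_ids if s != target]
        yield target, source

# e.g. for target, source in loso_splits(["S01", "S02", "S03"]): train one model per target
```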
Optionally, the first loss function and the second loss function are both determined using a cross entropy loss function.
Specifically, the zero calibration transfer learning classification model is built on a deep adversarial neural network and, as shown in fig. 2, comprises a feature extraction layer, a motor imagery classification layer and a domain discrimination layer. All sample data used to train the zero calibration transfer learning classification model share the parameters of the feature extraction layer. The features produced by the feature extraction layer are passed to both the motor imagery classification layer and the domain discrimination layer, whose networks perform motor imagery classification and domain classification respectively. The motor imagery classification layer computes the prediction task classification loss on the task state data of the source-domain subjects, and back-propagation adjusts its network parameters; the domain discrimination layer computes the domain classification loss on the resting state data of the source-domain and target-domain subjects, and back-propagation adjusts its network parameters. A gradient reversal method is used in the domain discrimination layer so that the losses of the motor imagery classification layer and the domain discrimination layer point in the same direction and can be optimised simultaneously.
In the zero calibration transfer learning classification model constructed from the deep adversarial neural network, the feature extraction layer extracts common features as far as possible, while the motor imagery classification layer and the domain discrimination layer train their respective networks on the corresponding task state data and resting state data. When the total loss function converges or reaches the preset threshold, the optimal network parameters of each layer are obtained, and the trained zero calibration transfer learning classification model results.
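The gradient reversal mechanism mentioned above can be sketched, for illustration, as a custom autograd function; the scaling constant lam is an assumed hyper-parameter.

```python
# Sketch of a gradient reversal layer: identity in the forward pass, negated
# (and optionally scaled) gradient in the backward pass, so the feature
# extraction layer is trained adversarially against the domain discrimination layer.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                     # identity forward

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None     # flip and scale the gradient

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: domain_logits = domain_discriminator(grad_reverse(features))
```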
The loss of the motor imagery classification layer (the prediction task classification loss), i.e. the first loss function, is determined (constructed) with a cross-entropy loss function from the task state data of the subjects marked as the source domain in the marked electroencephalogram data and the corresponding prediction task classifications, and can be expressed as:

$$L_{y}^{k_2}=-\frac{1}{n}\sum_{j=1}^{n}y^{e}_{s,k_2,j}\log\hat{y}^{e}_{s,k_2,j},\qquad \hat{y}^{e}_{s,k_2,j}=G_{y}\!\left(G_{f}\!\left(\tilde{X}^{e}_{s,k_2,j};\theta_{f}\right);\theta_{y}\right),$$

where $L_{y}^{k_2}$ is the first loss function corresponding to the $k_2$-th subject of the source domain $s$; $\theta_{f}$ denotes the network parameters of the feature extraction layer; $\theta_{y}$ denotes the network parameters of the motor imagery classification layer; $\theta_{d}$ denotes the network parameters of the domain discrimination layer; $G_{f}(\cdot)$, $G_{y}(\cdot)$ and $G_{d}(\cdot)$ denote the processing of the input data by the feature extraction layer, the motor imagery classification layer and the domain discrimination layer, respectively; $\tilde{X}^{e}_{s,k_2,j}$ denotes the alignment result of the task state data in the $j$-th trial of the $k_2$-th subject in the source domain $s$, which is input to the feature extraction layer of the zero calibration transfer learning classification model; $\hat{y}^{e}_{s,k_2,j}$ denotes the prediction task classification corresponding to that alignment result; $n$ denotes the total number of trials of the $k_2$-th subject in the source domain $s$; and $y^{e}_{s,k_2,j}$ denotes the actual task classification label of the task state data in the $j$-th trial of the $k_2$-th subject in the source domain $s$, different values of which represent different motor imagery tasks.
The first loss function over all subjects of the source domain can be expressed as:

$$L_{y}=\frac{1}{n_{s}}\sum_{k_2=1}^{n_{s}}L_{y}^{k_2},$$

where $n_{s}$ denotes the total number of subjects in the source domain and $L_{y}$ denotes the first loss function corresponding to all source-domain subjects.
The loss of the domain discrimination layer comprises the domain transfer loss of the resting state data of the source-domain subjects and the domain transfer loss of the resting state data of the target-domain subject.
The domain migration loss of the resting state data of the subjects in the source domain, that is, the loss function corresponding to the resting state data of each subject in the source domain, is determined (constructed) with a cross-entropy loss function from the resting state data marked as the source domain in the marked electroencephalogram data and the corresponding predicted domain classification, and can be expressed as:

$$L_{sd}^{k_2} = -\frac{1}{n}\sum_{j=1}^{n}\Big[d_{s,j}^{k_2}\log\hat{d}_{s,j}^{k_2}+\big(1-d_{s,j}^{k_2}\big)\log\big(1-\hat{d}_{s,j}^{k_2}\big)\Big],\qquad \hat{d}_{s,j}^{k_2}=G_d\!\left(G_f\!\left(\bar{r}_{s,j}^{k_2};\theta_f\right);\theta_d\right)$$

wherein $L_{sd}^{k_2}$ denotes the loss function corresponding to the resting state data of the $k_2$-th subject of the source domain $s$; $\bar{r}_{s,j}^{k_2}$ denotes the alignment result of the resting state data in the $j$-th experiment of the $k_2$-th subject of the source domain $s$, which is input into the feature extraction layer of the zero calibration transfer learning classification model for feature extraction; $\hat{d}_{s,j}^{k_2}$ denotes the predicted domain classification corresponding to that alignment result; $n$ denotes the total number of experiments of the $k_2$-th subject of the source domain $s$; $d_{s,j}^{k_2}$ denotes the actual domain classification label of the resting state data in the $j$-th experiment of the $k_2$-th subject of the source domain $s$; the other parameters have the same meaning as the corresponding parameters in the first loss function.
The loss function corresponding to the resting state data of all subjects in the source domain can be expressed as:

$$L_{sd} = \frac{1}{n_s}\sum_{k_2=1}^{n_s} L_{sd}^{k_2}$$

wherein $n_s$ denotes the total number of subjects in the source domain and $L_{sd}$ denotes the loss function corresponding to the resting state data of all subjects of the source domain.
The domain migration loss of the resting state data of the subject in the target domain, that is, the loss function corresponding to the resting state data of the subject in the target domain, is determined (constructed) with a cross-entropy loss function from the resting state data marked as the target domain in the marked electroencephalogram data and the corresponding predicted domain classification, and can be expressed as:

$$L_{td}^{k_1} = -\frac{1}{m}\sum_{i=1}^{m}\Big[d_{t,i}^{k_1}\log\hat{d}_{t,i}^{k_1}+\big(1-d_{t,i}^{k_1}\big)\log\big(1-\hat{d}_{t,i}^{k_1}\big)\Big],\qquad \hat{d}_{t,i}^{k_1}=G_d\!\left(G_f\!\left(\bar{r}_{t,i}^{k_1};\theta_f\right);\theta_d\right)$$

wherein $\bar{r}_{t,i}^{k_1}$ denotes the alignment result of the resting state data in the $i$-th experiment of the $k_1$-th subject of the target domain $t$, which is input into the feature extraction layer of the zero calibration transfer learning classification model for feature extraction; $\hat{d}_{t,i}^{k_1}$ denotes the predicted domain classification corresponding to that alignment result; $m$ denotes the total number of experiments of the $k_1$-th subject of the target domain $t$; $d_{t,i}^{k_1}$ denotes the actual domain classification label of the resting state data in the $i$-th experiment of the $k_1$-th subject of the target domain $t$; the other parameters have the same meaning as the corresponding parameters in the first loss function.
The loss function corresponding to the resting state data of all subjects in the target domain can be expressed as:

$$L_{td} = \frac{1}{n_t}\sum_{k_1=1}^{n_t} L_{td}^{k_1}$$

wherein $n_t$ denotes the total number of subjects in the target domain and $L_{td}$ denotes the loss function corresponding to the resting state data of all subjects of the target domain.
The loss function of the domain discrimination layer, i.e. the second loss function, is:

$$L_d = L_{sd} + L_{td}$$

wherein $L_d$ denotes the loss function of the domain discrimination layer; $L_{sd}$ denotes the loss function corresponding to the resting state data of all subjects of the source domain; $L_{td}$ denotes the loss function corresponding to the resting state data of all subjects of the target domain.
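A corresponding hedged sketch of the second loss is given below; for brevity it averages over trials rather than first over each subject, and the domain label convention (0 = source, 1 = target) is an assumption.

```python
import torch
import torch.nn.functional as F


def domain_loss(domain_logits_src, domain_logits_tgt):
    """Second loss L_d = L_sd + L_td computed on rest-state trials.

    domain_logits_src : (n_src_trials, 2) domain-layer outputs for source rest-state data
    domain_logits_tgt : (n_tgt_trials, 2) domain-layer outputs for target rest-state data
    """
    src_labels = torch.zeros(domain_logits_src.size(0), dtype=torch.long)  # source -> 0
    tgt_labels = torch.ones(domain_logits_tgt.size(0), dtype=torch.long)   # target -> 1
    l_sd = F.cross_entropy(domain_logits_src, src_labels)  # source rest-state domain loss
    l_td = F.cross_entropy(domain_logits_tgt, tgt_labels)  # target rest-state domain loss
    return l_sd + l_td
```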
The total loss function $E(\theta_f,\theta_y,\theta_d)$ of the zero calibration transfer learning classification model can be expressed as:

$$E(\theta_f,\theta_y,\theta_d) = L_y - \lambda L_d$$

wherein $\lambda$ denotes a trade-off parameter that determines the weight of the loss function corresponding to the resting state data relative to the loss function corresponding to the task state data.
In the case that the total loss function converges or reaches the preset threshold, the zero calibration transfer learning classification model is obtained by solving for the saddle point of the function $E(\theta_f,\theta_y,\theta_d)$, at which the loss $L_y$ of the motor imagery classification layer is required to be minimal and the loss $L_d$ of the domain discrimination layer maximal, i.e. the loss $L_y$ of the motor imagery classification layer is minimized:

$$(\hat{\theta}_f,\hat{\theta}_y)=\arg\min_{\theta_f,\theta_y}E(\theta_f,\theta_y,\hat{\theta}_d)$$

and the loss $L_d$ of the domain discrimination layer is maximized:

$$\hat{\theta}_d=\arg\max_{\theta_d}E(\hat{\theta}_f,\hat{\theta}_y,\theta_d)$$

wherein $\hat{\theta}_f$, $\hat{\theta}_y$ and $\hat{\theta}_d$ respectively denote the optimal network parameters of the feature extraction layer, the optimal network parameters of the motor imagery classification layer and the optimal network parameters of the domain discrimination layer in the zero calibration transfer learning classification model.
The network parameters of the zero calibration transfer learning classification model, which comprises the feature extraction layer, the motor imagery classification layer and the domain discrimination layer, are updated by a gradient descent algorithm; the gradient descent algorithm optimizes the parameters of each layer, and the corresponding gradient update formulas are:

$$\theta_f \leftarrow \theta_f-\mu\Big(\frac{\partial L_y}{\partial\theta_f}-\lambda\frac{\partial L_d}{\partial\theta_f}\Big),\qquad \theta_y \leftarrow \theta_y-\mu\frac{\partial L_y}{\partial\theta_y},\qquad \theta_d \leftarrow \theta_d-\mu\lambda\frac{\partial L_d}{\partial\theta_d}$$

wherein $\theta_f$ denotes the network parameters of the feature extraction layer; $\theta_y$ denotes the network parameters of the motor imagery classification layer; $\theta_d$ denotes the network parameters of the domain discrimination layer; and $\mu$ denotes the learning rate.
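Reusing the ZeroCalibrationDANN, source_task_loss and domain_loss sketches above, one training step under these update rules might look as follows; note that the gradient reversal layer already applies the $-\lambda$ factor to the feature-extractor gradient, so the step below simply descends on $L_y + L_d$ (updating the domain layer with the unscaled $L_d$ is a simplification of the $\mu\lambda\,\partial L_d/\partial\theta_d$ term).

```python
import torch


def train_step(model, optimizer, task_x, task_y, task_subj,
               rest_src_x, rest_tgt_x, lambd=1.0):
    """One gradient-descent update of the sketched zero-calibration model.

    task_x / task_y / task_subj : source task-state trials, labels and subject ids
    rest_src_x / rest_tgt_x     : source and target rest-state trials
    lambd                       : trade-off parameter lambda fed to the gradient reversal
    """
    optimizer.zero_grad()
    # Task classification loss L_y on source task-state data.
    task_logits, _ = model(task_x, lambd)
    l_y = source_task_loss(task_logits, task_y, task_subj)
    # Domain discrimination loss L_d on source and target rest-state data.
    _, dom_src = model(rest_src_x, lambd)
    _, dom_tgt = model(rest_tgt_x, lambd)
    l_d = domain_loss(dom_src, dom_tgt)
    # The reversal layer flips the domain gradient for the shared features,
    # so a single descent step realizes the min-max behaviour described above.
    (l_y + l_d).backward()
    optimizer.step()
    return l_y.item(), l_d.item()
```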
According to the training method of the zero calibration transfer learning classification model, the electroencephalogram data marked with the domain classification label and the task classification label are acquired, the zero calibration transfer learning classification model extracts the common features of the electroencephalogram data and determines the predicted task classification and predicted domain classification corresponding to the electroencephalogram data, the total loss function of the zero calibration transfer learning classification model is constructed, and the trained zero calibration transfer learning classification model is obtained when the total loss function converges or reaches the preset threshold. The resulting zero calibration transfer learning classification model requires no calibration in advance for a new user, takes into account both user specificity and classification accuracy, and improves the accuracy of electroencephalogram data classification.
In addition, simulations were carried out in which the common spatial pattern algorithm (Common Spatial Pattern, CSP) + linear discriminant analysis (Linear Discriminant Analysis, LDA), the minimum distance to Riemannian mean algorithm (Minimum Distance to Riemannian Mean, MDRM), MDRM + calibration (CALIBRATE), CSP + LDA + calibration (CALIBRATE), and the zero calibration transfer learning classification model constructed with the domain-adversarial neural network (Domain-Adversarial Training of Neural Networks, DANN) were used to predict and classify the electroencephalogram data of the same subjects; the classification accuracy of the zero calibration transfer learning classification model is clearly improved, as shown in Table 1.
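For orientation, the two uncalibrated baselines in this comparison could be reproduced with common open-source tools roughly as follows; the use of mne, pyriemann and scikit-learn, the number of CSP components and the covariance estimator are assumptions and not the patent's own implementation.

```python
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from mne.decoding import CSP
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM


def build_baselines():
    """CSP+LDA and MDRM pipelines operating on (n_trials, n_channels, n_times) epochs."""
    csp_lda = make_pipeline(CSP(n_components=4), LinearDiscriminantAnalysis())
    mdrm = make_pipeline(Covariances(estimator="oas"), MDM())
    return {"CSP+LDA": csp_lda, "MDRM": mdrm}


# Example (leave-one-subject-out style): fit on the source subjects, test on the target subject.
# for name, clf in build_baselines().items():
#     clf.fit(X_source, y_source)
#     print(name, clf.score(X_target, y_target))
```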
TABLE 1
Fig. 3 is a schematic structural diagram of a training device for the zero calibration transfer learning classification model; as shown in fig. 3, the device includes:
the acquiring module 301 is configured to acquire electroencephalogram data of the tag domain classification tag and the task classification tag as labeled electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of testees execute different motor imagery tasks, and the domain classification label is used for indicating that the testees corresponding to the electroencephalogram data belong to a target domain or a source domain; the task classification label is used for representing a motor imagery task corresponding to task state data in the electroencephalogram data;
the feature extraction module 302 is configured to perform feature extraction on the marked electroencephalogram data based on a feature extraction layer in the zero calibration transfer learning classification model;
the classification module 303 is configured to classify the marked electroencephalogram data according to the extracted features by using a motor imagery classification layer and a domain identification layer in the zero calibration transfer learning classification model, and predict task classification labels and domain classification labels corresponding to the marked electroencephalogram data;
The first loss determining module 304 is configured to determine a first loss function based on the task state data of the subjects marked as the source domain in the marked electroencephalogram data and the predicted task classification labels corresponding to the task state data;
The second loss determining module 305 is configured to determine a second loss function based on the resting state data marked as the source domain in the marked electroencephalogram data and its corresponding predicted domain classification label, and the resting state data marked as the target domain in the marked electroencephalogram data and its corresponding predicted domain classification label;
the training module 306 is configured to obtain the zero calibration transfer learning classification model by making the total loss function meet a convergence or reach a preset threshold; the total loss function is determined based on the first loss function and the second loss function.
Optionally, the training device of the zero calibration transfer learning classification model further includes a data alignment module, specifically configured to:
based on Riemann alignment, determining an alignment result of resting state data of a tested person marked as a target domain in the marked electroencephalogram data as a first alignment result;
based on Riemann alignment, determining an alignment result of rest state data of each tested person marked as a source domain in the marked electroencephalogram data as a second alignment result;
based on Euclidean alignment, determining an alignment result of task state data of each testee marked as a source domain in the marked electroencephalogram data as a third alignment result;
Updating the marked electroencephalogram data based on the first alignment result, the second alignment result and the third alignment result; the two alignment operations are sketched below.
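The two alignment operations above are sketched here under stated assumptions: the Riemannian mean is taken from pyriemann, covariances are simple sample covariances, and trials are whitened by the inverse square root of the reference matrix; the patent itself does not fix an implementation.

```python
import numpy as np
from pyriemann.utils.mean import mean_riemann


def _inv_sqrtm(spd):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(spd)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T


def _whiten(trials, reference):
    """Left-multiply every trial by reference^{-1/2}."""
    ref_inv_sqrt = _inv_sqrtm(reference)
    return np.array([ref_inv_sqrt @ x for x in trials])


def riemann_align(trials):
    """Riemannian alignment: reference = Riemannian mean of the trial covariances."""
    covs = np.array([x @ x.T / x.shape[1] for x in trials])
    return _whiten(trials, mean_riemann(covs))


def euclidean_align(trials):
    """Euclidean alignment: reference = arithmetic mean of the trial covariances."""
    covs = np.array([x @ x.T / x.shape[1] for x in trials])
    return _whiten(trials, covs.mean(axis=0))

# trials: (n_trials, n_channels, n_times) array for one subject and one state.
```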
Optionally, the training device of the zero calibration transfer learning classification model further comprises a tested screening module, which is specifically used for:
based on a multi-core maximum mean difference algorithm, determining the similarity of the first resting state data and the second resting state data as the similarity between each testee of the source domain and the testee of the target domain; the first rest state data are rest state data of each tested person marked as a source domain in the updated marked electroencephalogram data; the second resting state data is resting state data of a tested person marked as a target domain;
and sorting the testees of the source domain in descending order of similarity, screening out the last K testees marked as the source domain, and updating the testees marked as the source domain in the marked electroencephalogram data, wherein K is a positive integer; the similarity computation and ranking are sketched below.
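The similarity measure and the ranking step above are sketched here; the Gaussian kernel bandwidths, the feature representation of the rest-state data and the use of the negative MMD as the similarity score are assumptions.

```python
import numpy as np


def _gaussian_gram(a, b, sigma):
    """Gaussian kernel matrix between feature sets a (n, d) and b (m, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def mk_mmd2(x, y, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Multi-kernel squared maximum mean discrepancy between x (n, d) and y (m, d)."""
    val = 0.0
    for s in sigmas:
        val += (_gaussian_gram(x, x, s).mean()
                + _gaussian_gram(y, y, s).mean()
                - 2.0 * _gaussian_gram(x, y, s).mean())
    return val / len(sigmas)


def rank_source_subjects(source_rest, target_rest):
    """Source subject ids sorted from most to least similar to the target subject."""
    sims = {k2: -mk_mmd2(feats, target_rest) for k2, feats in source_rest.items()}
    return sorted(sims, key=sims.get, reverse=True)

# source_rest: {subject_id: (n_trials, d) rest-state features}; target_rest: (m, d).
# The last K entries of this ranking correspond to the "last K testees" mentioned above.
```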
The training device for the zero calibration transfer learning classification model provided by the embodiment of the invention can execute the training method for the zero calibration transfer learning classification model of any of the above embodiments; its implementation principle and beneficial effects are similar to those of the training method for the zero calibration transfer learning classification model and are therefore not repeated here.
Fig. 4 is a schematic physical structure of an electronic device according to an embodiment of the present invention, as shown in fig. 4, the electronic device may include: processor 410, communication interface (Communications Interface) 420, memory 430 and communication bus 440, wherein processor 410, communication interface 420 and memory 430 communicate with each other via communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform a training method for a zero calibration transfer learning classification model, the method comprising:
acquiring electroencephalogram data of the mark domain classification tag and the task classification tag as marked electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of testees execute different motor imagery tasks, and the domain classification label is used for indicating that the testees corresponding to the electroencephalogram data belong to a target domain or a source domain; the task classification label is used for representing a motor imagery task corresponding to task state data in the electroencephalogram data;
based on a feature extraction layer in the zero calibration transfer learning classification model, extracting features of the marked electroencephalogram data;
classifying the marked electroencephalogram data according to the extracted features by utilizing a motor imagery classification layer and a domain discrimination layer in the zero calibration transfer learning classification model respectively, and determining prediction task classification and prediction domain classification corresponding to the marked electroencephalogram data;
Determining a first loss function based on task state data of a tested person marked as a source domain in the marked electroencephalogram data and a corresponding prediction task classification;
determining a second loss function based on the rest state data marked as a source domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof, and the rest state data marked as a target domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof;
under the condition that the total loss function meets convergence or reaches a preset threshold value, the zero calibration transfer learning classification model is obtained; the total loss function is determined based on the first loss function and the second loss function.
Further, the logic instructions in the memory 430 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, the software product comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In another aspect, the present invention also provides a computer program product, where the computer program product includes a computer program, where the computer program can be stored on a non-transitory computer readable storage medium, and when the computer program is executed by a processor, the computer can execute a training method of the zero calibration transfer learning classification model provided by the above methods, and the method includes:
acquiring electroencephalogram data of the mark domain classification tag and the task classification tag as marked electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of testees execute different motor imagery tasks, and the domain classification label is used for indicating that the testees corresponding to the electroencephalogram data belong to a target domain or a source domain; the task classification label is used for representing a motor imagery task corresponding to task state data in the electroencephalogram data;
based on a feature extraction layer in the zero calibration transfer learning classification model, extracting features of the marked electroencephalogram data;
classifying the marked electroencephalogram data according to the extracted features by utilizing a motor imagery classification layer and a domain discrimination layer in the zero calibration transfer learning classification model respectively, and determining prediction task classification and prediction domain classification corresponding to the marked electroencephalogram data;
Determining a first loss function based on task state data of a tested person marked as a source domain in the marked electroencephalogram data and a corresponding prediction task classification;
determining a second loss function based on the rest state data marked as a source domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof, and the rest state data marked as a target domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof;
under the condition that the total loss function meets convergence or reaches a preset threshold value, the zero calibration transfer learning classification model is obtained; the total loss function is determined based on the first loss function and the second loss function.
In yet another aspect, the present invention further provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform a training method of a zero calibration transfer learning classification model provided by the methods above, the method comprising:
acquiring electroencephalogram data of the mark domain classification tag and the task classification tag as marked electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of testees execute different motor imagery tasks, and the domain classification label is used for indicating that the testees corresponding to the electroencephalogram data belong to a target domain or a source domain; the task classification label is used for representing a motor imagery task corresponding to task state data in the electroencephalogram data;
Based on a feature extraction layer in the zero calibration transfer learning classification model, extracting features of the marked electroencephalogram data;
classifying the marked electroencephalogram data according to the extracted features by utilizing a motor imagery classification layer and a domain discrimination layer in the zero calibration transfer learning classification model respectively, and determining prediction task classification and prediction domain classification corresponding to the marked electroencephalogram data;
determining a first loss function based on task state data of a tested person marked as a source domain in the marked electroencephalogram data and a corresponding prediction task classification;
determining a second loss function based on the rest state data marked as a source domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof, and the rest state data marked as a target domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof;
under the condition that the total loss function meets convergence or reaches a preset threshold value, the zero calibration transfer learning classification model is obtained; the total loss function is determined based on the first loss function and the second loss function.
The apparatus embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A training method of a zero calibration transfer learning classification model is characterized by comprising the following steps:
acquiring electroencephalogram data of the mark domain classification tag and the task classification tag as marked electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of testees execute different motor imagery tasks, and the domain classification label is used for indicating that the testees corresponding to the electroencephalogram data belong to a target domain or a source domain; the task classification label is used for representing a motor imagery task corresponding to task state data in the electroencephalogram data;
based on a feature extraction layer in the zero calibration transfer learning classification model, extracting features of the marked electroencephalogram data;
classifying the marked electroencephalogram data according to the extracted features by utilizing a motor imagery classification layer and a domain discrimination layer in the zero calibration transfer learning classification model respectively, and determining prediction task classification and prediction domain classification corresponding to the marked electroencephalogram data;
determining a first loss function based on task state data of a tested person marked as a source domain in the marked electroencephalogram data and a corresponding prediction task classification;
Determining a second loss function based on the rest state data marked as a source domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof, and the rest state data marked as a target domain in the marked electroencephalogram data and the corresponding prediction domain classification thereof;
under the condition that the total loss function meets convergence or reaches a preset threshold value, the zero calibration transfer learning classification model is obtained; the total loss function is determined based on the first loss function and the second loss function.
2. The training method of the zero calibration transfer learning classification model according to claim 1, wherein after acquiring the electroencephalogram data of the tag domain classification label and the task classification label, the training method comprises:
based on Riemann alignment, determining an alignment result of resting state data of a tested person marked as a target domain in the marked electroencephalogram data as a first alignment result;
based on Riemann alignment, determining an alignment result of rest state data of each tested person marked as a source domain in the marked electroencephalogram data as a second alignment result;
based on Euclidean alignment, determining an alignment result of task state data of each testee marked as a source domain in the marked electroencephalogram data as a third alignment result;
Updating the marked electroencephalogram data based on the first alignment result, the second alignment result and the third alignment result.
3. The training method of the zero calibration transfer learning classification model according to claim 2, wherein before the feature extraction is performed on the marked electroencephalogram data based on the feature extraction layer in the zero calibration transfer learning classification model, the training method comprises:
based on a multi-core maximum mean difference algorithm, determining the similarity of the first resting state data and the second resting state data as the similarity between each testee of the source domain and the testee of the target domain; the first rest state data are rest state data of each tested person marked as a source domain in the updated marked electroencephalogram data; the second resting state data is resting state data of a tested person marked as a target domain;
and sorting from large to small according to the similarity, screening out the last K testees marked as the source domain, updating the testees marked as the source domain in the marked electroencephalogram data, wherein K is a positive integer.
4. The training method of the zero calibration transfer learning classification model according to claim 3, wherein the acquiring electroencephalogram data of the tag domain classification label and the task classification label comprises:
Acquiring brain electricity data generated when the plurality of testees execute different motor imagery tasks;
extracting brain electrical data of 0.5 to 3.5 seconds after the motor imagery task stimulus appears, marking the extracted data with the task classification label corresponding to the motor imagery task, and taking it as the task state data;
extracting brain electrical data of 4.25 to 5.25 seconds after the motor imagery task stimulus appears as resting state data;
the task state data are passed through a sliding window form, and multiple experimental results of the task state data corresponding to the same motor imagery task are obtained according to a preset step length;
and expanding the duration corresponding to the rest state data in a copying mode to align the duration corresponding to the single experimental result of the task state data.
5. The training method of the zero calibration transfer learning classification model according to claim 4, wherein after obtaining the electroencephalogram data generated when the plurality of testees perform different motor imagery tasks, the training method comprises:
determining a target domain formed by any one of the plurality of testees and a source domain formed by the rest of the testees based on a leave-one-out cross-validation method, and marking domain classification labels corresponding to the brain electrical data of the testees; the domain classification tag includes a target domain and a source domain.
6. The method of training a zero-calibration, transition-learning classification model according to any one of claims 1 to 5, wherein the zero-calibration, transition-learning classification model is modeled based on a deep antagonistic neural network.
7. The method of training a zero calibration transfer learning classification model of any of claims 1-5, wherein the first and second loss functions are each determined using a cross entropy loss function.
8. A training device for zero calibration transfer learning classification model, comprising:
the acquisition module is used for acquiring the electroencephalogram data of the marked domain classification tag and the task classification tag as marked electroencephalogram data; the marked electroencephalogram data are resting state data and task state data generated when a plurality of testees execute different motor imagery tasks, and the domain classification label is used for indicating that the testees corresponding to the electroencephalogram data belong to a target domain or a source domain; the task classification label is used for representing a motor imagery task corresponding to task state data in the electroencephalogram data;
the feature extraction module is used for extracting features of the marked electroencephalogram data based on a feature extraction layer in the zero calibration transfer learning classification model;
The classification module is used for classifying the marked electroencephalogram data according to the extracted features by utilizing a motor imagery classification layer and a domain identification layer in the zero calibration transfer learning classification model respectively and predicting task classification labels and domain classification labels corresponding to the marked electroencephalogram data;
the first loss determination module is used for determining a first loss function based on task state data of a tested person marked as a source domain in the marked electroencephalogram data and a task classification label of the corresponding prediction;
the second loss determining module is used for determining a second loss function based on the resting state data marked as the source domain in the marked electroencephalogram data and the corresponding predicted domain classification label thereof, and the resting state data marked as the target domain in the marked electroencephalogram data and the corresponding predicted domain classification label thereof;
the training module is used for obtaining the zero calibration transfer learning classification model under the condition that the total loss function meets convergence or reaches a preset threshold value; the total loss function is determined based on the first loss function and the second loss function.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the training method of the zero calibration transfer learning classification model of any one of claims 1 to 7 when the program is executed by the processor.
10. A non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor implements a training method of a zero calibration transfer learning classification model according to any of claims 1 to 7.
CN202310558791.9A 2023-05-17 2023-05-17 Training method, device and storage medium for zero calibration transfer learning classification model Active CN116595437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310558791.9A CN116595437B (en) 2023-05-17 2023-05-17 Training method, device and storage medium for zero calibration transfer learning classification model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310558791.9A CN116595437B (en) 2023-05-17 2023-05-17 Training method, device and storage medium for zero calibration transfer learning classification model

Publications (2)

Publication Number Publication Date
CN116595437A CN116595437A (en) 2023-08-15
CN116595437B true CN116595437B (en) 2023-10-31

Family

ID=87607625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310558791.9A Active CN116595437B (en) 2023-05-17 2023-05-17 Training method, device and storage medium for zero calibration transfer learning classification model

Country Status (1)

Country Link
CN (1) CN116595437B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117521767B (en) * 2023-12-29 2024-05-17 中国科学技术大学 Gradient calibration-based continuous learning method, system, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112560937A (en) * 2020-12-11 2021-03-26 杭州电子科技大学 Method for motor imagery transfer learning by using resting state alignment
CN113180695A (en) * 2021-04-20 2021-07-30 西安交通大学 Brain-computer interface signal classification method, system, device and storage medium
WO2022266141A2 (en) * 2021-06-14 2022-12-22 Board Of Regents, The University Of Texas System Method to identify patterns in brain activity
CN116049639A (en) * 2023-03-31 2023-05-02 同心智医科技(北京)有限公司 Selective migration learning method and device for electroencephalogram signals and storage medium


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Adaptive transfer learning for EEG motor imagery classification with deep Convolutional Neural Network; Kaishuo Zhang et al.; Neural Networks; Vol. 136; pp. 1-10 *
EEG classification for motor imagery and resting state in BCI applications using multi-class Adaboost extreme learning machine; Lin Gao et al.; Review of Scientific Instruments; Vol. 87, No. 8; pp. 1-8 *
Enhanced Motor Imagery Based Brain-Computer Interface via FES and VR for Lower Limbs; Shixin Ren et al.; IEEE Transactions on Neural Systems and Rehabilitation Engineering; Vol. 28, No. 8; pp. 1846-1855 *
Multi-Source Fusion Domain Adaptation Using Resting-State Knowledge for Motor Imagery Classification Tasks; Lei Zhu et al.; IEEE Sensors Journal; Vol. 21, No. 19; pp. 21772-21781 *
Research on classification of motor imagery EEG signals based on transfer learning; Feng Yang et al.; Journal of Test and Measurement Technology; Vol. 36, No. 5; pp. 376-383 *
Research on classification of motor imagery EEG signals based on transfer learning; Tian Shuguang; China Master's Theses Full-text Database, Medicine and Health Sciences (monthly); No. 1; p. E080-30 *
Research on classification algorithms for motor imagery EEG signals based on transfer learning; Yang Feiyu; China Master's Theses Full-text Database, Basic Sciences (monthly); No. 2; p. A006-683 *
Research on classification methods for motor imagery EEG based on transfer learning; Zeng Huansheng; China Master's Theses Full-text Database, Basic Sciences (monthly); No. 1; p. A006-671 *

Also Published As

Publication number Publication date
CN116595437A (en) 2023-08-15

Similar Documents

Publication Publication Date Title
CN110507335B (en) Multi-mode information based criminal psychological health state assessment method and system
Zhang et al. Deep convolutional neural network for decoding motor imagery based brain computer interface
KR102282961B1 (en) Systems and methods for sensory and cognitive profiling
Kachenoura et al. ICA: a potential tool for BCI systems
CN110070105B (en) Electroencephalogram emotion recognition method and system based on meta-learning example rapid screening
CN109784023B (en) Steady-state vision-evoked electroencephalogram identity recognition method and system based on deep learning
Dodia et al. An efficient EEG based deceit identification test using wavelet packet transform and linear discriminant analysis
CN105654063B (en) Mental imagery brain power mode recognition methods based on the optimization of artificial bee colony time and frequency parameter
CN111265212A (en) Motor imagery electroencephalogram signal classification method and closed-loop training test interaction system
Hramov et al. Percept-related EEG classification using machine learning approach and features of functional brain connectivity
CN116595437B (en) Training method, device and storage medium for zero calibration transfer learning classification model
Juneja et al. A combination of singular value decomposition and multivariate feature selection method for diagnosis of schizophrenia using fMRI
CN114533086A (en) Motor imagery electroencephalogram decoding method based on spatial domain characteristic time-frequency transformation
Wan et al. EEG fading data classification based on improved manifold learning with adaptive neighborhood selection
Wang et al. An approach of one-vs-rest filter bank common spatial pattern and spiking neural networks for multiple motor imagery decoding
Lahiri et al. Evolutionary perspective for optimal selection of EEG electrodes and features
CN109770896A (en) Dreamland image reproducing method, device and storage medium, server
Serener et al. Geographic variation and ethnicity in diabetic retinopathy detection via deeplearning
CN114305452B (en) Cross-task cognitive load identification method based on electroencephalogram and field adaptation
CN101833669A (en) Method for extracting characteristics of event related potential generated by using audio-visual combined stimulation
CN114548165A (en) Electromyographic mode classification method capable of crossing users
Dzitac et al. Identification of ERD using fuzzy inference systems for brain-computer interface
CN107822628B (en) Epileptic brain focus area automatic positioning device and system
CN111736690B (en) Motor imagery brain-computer interface based on Bayesian network structure identification
CN113116306A (en) Consciousness disturbance auxiliary diagnosis system based on auditory evoked electroencephalogram signal analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant