CN112450949A - Electroencephalogram signal processing method and system for cognitive rehabilitation training - Google Patents

Electroencephalogram signal processing method and system for cognitive rehabilitation training

Info

Publication number
CN112450949A
CN112450949A
Authority
CN
China
Prior art keywords
electroencephalogram
scene
rehabilitation training
trained
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011438172.9A
Other languages
Chinese (zh)
Inventor
覃文军
杨广强
王玉平
刘春燕
郭辉
刘丽影
杨金柱
栗伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN202011438172.9A priority Critical patent/CN112450949A/en
Publication of CN112450949A publication Critical patent/CN112450949A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/40Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088Diagnosing of monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/725Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7253Details of waveform analysis characterised by using transforms
    • A61B5/7257Details of waveform analysis characterised by using transforms using Fourier transforms

Abstract

The embodiment of the invention relates to an electroencephalogram signal processing method and system for cognitive rehabilitation training, wherein the method comprises the following steps: acquiring a plurality of electroencephalogram signals of a person to be trained in one test under panoramic videos of various scenes, wherein the electroencephalogram signals respectively contain marks of scene types; respectively preprocessing the plurality of electroencephalogram signals and extracting features to obtain electroencephalogram features; analyzing the change trend of the electroencephalogram features to obtain a trend distribution map of the sensitivity of the person to be trained to the various scenes; and determining at least one scene as a training scene for cognitive rehabilitation training according to the trend distribution map. By analyzing the electroencephalogram features obtained after preprocessing and feature extraction, the invention determines at least one training scene for cognitive rehabilitation training from the multiple scenes, so that more effective stimulation is delivered to the person to be trained and personalized rehabilitation training can be provided for different persons to be trained.

Description

Electroencephalogram signal processing method and system for cognitive rehabilitation training
Technical Field
The invention relates to the technical field of electroencephalogram signal processing, in particular to an electroencephalogram signal processing method and system for cognitive rehabilitation training.
Background
Alzheimer's Disease (AD) is a progressive degenerative disease of the nervous system whose major clinical manifestations are cognitive decline, memory impairment, impairment of visuospatial skills and executive dysfunction. Effective treatments for Alzheimer's disease are urgently needed; in terms of drug treatment, more than 99% of AD drug trials end in failure, and the trial success rate is only about 1/30.
Traditional cognitive training with physical objects lacks refined, immersive training content for specific cognitive functions and cannot cover training such as spatial cognition and scene cognition; it also occupies a large space, requires considerable manpower, and lacks real-time recording of and feedback on the patient's condition. Common VR-assisted rehabilitation cannot provide personalized rehabilitation training for different patients, cannot quantitatively and scientifically assess the rehabilitation training, lacks monitoring of the patient's recovery, and leaves the patient's brain activity unknown.
Therefore, the prior art has the problem that personalized rehabilitation training cannot be provided.
There is therefore a need for those skilled in the art to overcome the above drawbacks.
Disclosure of Invention
Technical problem to be solved
In order to solve the above problems in the prior art, the invention provides an electroencephalogram signal processing method and system for cognitive rehabilitation training, and solves the problem that personalized rehabilitation training cannot be provided in the prior art.
(II) technical scheme
In order to achieve the purpose, the invention adopts the main technical scheme that:
an embodiment of the present invention provides an electroencephalogram signal processing method for cognitive rehabilitation training, including:
s10, acquiring a plurality of electroencephalogram signals of a trainee performing one-time test under panoramic videos of various scenes, wherein the electroencephalogram signals respectively contain marks of scene types;
s20, respectively preprocessing and extracting features of the electroencephalogram signals to obtain electroencephalogram features;
s30, analyzing the change trend of the electroencephalogram characteristics to obtain a trend distribution map of the sensitivity of the person to be trained to various scenes;
and S40, determining at least one scene as a training scene for cognitive rehabilitation training according to the trend distribution diagram.
In one embodiment of the invention, the plurality of scenes comprise six scenes, namely ocean, sky, natural scene, city, lovely pet and home, and the scene is a VR scene.
In an embodiment of the present invention, the panoramic video is a 360-degree VR panoramic video recorded by a panoramic device, and is displayed to the trainee through a VR head display device.
In one embodiment of the present invention, the preprocessing in S20 includes sampling, setting references, detrending, baseline removal, filtering, segmentation, baseline correction, ICA and removal of ocular artifacts; the plurality of electroencephalogram signals are preprocessed to obtain electroencephalogram (EEG) data in .mat format.
In an embodiment of the present invention, the feature extraction in S20 employs any one of a power spectral density (PSD) algorithm, a differential entropy (DE) algorithm, a discrete wavelet transform (DWT) algorithm and a common spatial pattern (CSP) algorithm.
In one embodiment of the present invention, the electroencephalogram feature obtained in S20 is a PSD feature or a DE feature.
In one embodiment of the present invention, S30 includes:
analyzing the variation trend of the electroencephalogram characteristics to obtain an electroencephalogram characteristic distribution map;
and (4) carrying out characteristic significance analysis on the mean value and the variance of the electroencephalogram characteristic distribution map to obtain a trend distribution map of the electroencephalogram characteristics.
In one embodiment of the present invention, S40 includes:
s41, determining a scene set with sensitivity degree meeting a first preset requirement as a first result according to the mean trend distribution diagram of the electroencephalogram characteristics, and determining a scene set with sensitivity degree meeting a second preset requirement as a second result according to the variance trend distribution diagram of the electroencephalogram characteristics;
s42, taking the first result, the second result or the union of the first result and the second result as a training scene for cognitive rehabilitation training, wherein the training scene for cognitive rehabilitation training is a non-empty set.
In one embodiment of the invention, each scene provides at least two different segments of panoramic video, the method further comprising:
s50, repeating the steps S10-S30 to carry out a plurality of tests, and determining at least one scene as a training scene for cognitive rehabilitation training according to a plurality of trend distribution maps;
the playing sequence of the panoramic video is sequential playing or random playing.
Another embodiment of the present invention further provides an electroencephalogram signal processing system for cognitive rehabilitation training, including:
the panoramic video player is used for displaying panoramic videos of various scenes to a trainee;
the controller is used for controlling the playing of the panoramic video player and the synchronous acquisition of the electroencephalogram signals;
the processor is used for processing the electroencephalogram signals synchronously acquired in the panoramic video playing and determining a training scene for a person to be trained;
the signal acquisition module is used for acquiring a plurality of electroencephalogram signals of a trainee performing one-time test under panoramic videos of various scenes, wherein the electroencephalogram signals respectively contain marks of scene types;
the characteristic extraction module is used for respectively preprocessing and extracting characteristics of the plurality of electroencephalogram signals to obtain electroencephalogram characteristics;
the trend analysis module is used for analyzing the change trend of the electroencephalogram characteristics to obtain a trend distribution map of the sensitivity of the person to be trained to various scenes;
and the scene determining module is used for determining at least one scene as a training scene for cognitive rehabilitation training according to the trend distribution map.
(III) advantageous effects
The invention has the beneficial effects that: according to the electroencephalogram signal processing method and system for cognitive rehabilitation training provided by the embodiments of the invention, the electroencephalogram features obtained after preprocessing and feature extraction are analyzed to determine at least one scene among multiple scenes as a training scene for cognitive rehabilitation training, so that more effective stimulation is delivered to the person to be trained and personalized rehabilitation training can be provided for different persons to be trained.
Drawings
Fig. 1 is a flowchart of an electroencephalogram signal processing method for cognitive rehabilitation training according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a manufacturing process of hanging a video playing component on a ball according to an embodiment of the present invention;
FIG. 3 is an unprocessed EEG graph in accordance with an embodiment of the present invention;
FIG. 4 is an EEG graph after filtering and 50 Hz notch filtering in accordance with an embodiment of the present invention;
FIG. 5 is a schematic representation of the 30 individual components obtained after ICA in one embodiment of the present invention;
FIG. 6 is an EEG graph with vertical electro-oculogram removed in accordance with an embodiment of the present invention;
FIG. 7 is an EEG graph with horizontal electro-oculogram removal in accordance with an embodiment of the present invention;
FIG. 8 is a flowchart illustrating a step S30 in FIG. 1 according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a step S40 in FIG. 1 according to an embodiment of the present invention;
FIG. 10 is a PSD profile of the first experimental data of the first person to be trained according to an embodiment of the present invention;
FIG. 11 is a PSD profile of second experimental data for a first person to be trained according to an embodiment of the present invention;
FIG. 12 is a PSD profile of the third experimental data of the first person to be trained according to an embodiment of the present invention;
FIG. 13 is a graph of the mean line of the PSD characteristics of the first person to be trained in the example of the present invention from three experiments;
FIG. 14 is a plot of the variance of three experimental PSD characteristics for the first person to be trained in accordance with an embodiment of the present invention;
FIG. 15 is a PSD profile of the first experiment data of the second person to be trained according to an embodiment of the present invention;
FIG. 16 is a PSD profile of a second trial data for a second person to be trained according to an embodiment of the present invention;
FIG. 17 is a PSD profile of the third experimental data of the second person to be trained according to an embodiment of the present invention;
FIG. 18 is a graph of the mean line of the PSD characteristics of the second person tested in three experiments according to the embodiment of the present invention;
FIG. 19 is a plot of the variance of the PSD signature of the second person tested three times in accordance with an embodiment of the present invention;
FIG. 20 is a DE signature graph of first time experimental data for a first person to be trained in accordance with an embodiment of the present invention;
FIG. 21 is a DE signature graph of second experimental data for a first person to be trained in accordance with an embodiment of the present invention;
FIG. 22 is a DE signature graph of third experimental data for the first person to be trained in accordance with an embodiment of the present invention;
FIG. 23 is a graph of the line of the mean of the DE signature of three experiments on the first person to be trained in an embodiment of the present invention;
FIG. 24 is a plot of the variance of the DE signature over three trials conducted on the first person to be trained in accordance with an embodiment of the present invention;
FIG. 25 is a DE signature graph of the first experimental data for the second person to be trained in accordance with an embodiment of the present invention;
FIG. 26 is a DE signature graph of second experimental data for a second person to be trained in accordance with an embodiment of the present invention;
FIG. 27 is a DE signature graph of third experimental data of a second person to be trained in accordance with an embodiment of the present invention;
FIG. 28 is a graph of the mean line of the DE signature of three experiments on a second person to be trained in an embodiment of the present invention;
FIG. 29 is a plot of the variance of the DE signature over three trials conducted on a second person to be trained in an embodiment of the present invention;
FIG. 30 is a graph of the mean line of the original EEG signals of the first person to be trained in the present embodiment;
FIG. 31 is a variance line graph of the original EEG signal of the first person to be trained in the embodiment of the present invention;
FIG. 32 is a graph of the mean line of the original EEG signals of the second person to be trained in the present embodiment;
FIG. 33 is a plot of the variance of the original EEG signal from the second person under training three times in accordance with an embodiment of the present invention;
FIG. 34 is a graph illustrating sensitivity determination by mean and ranking in accordance with an embodiment of the present invention;
FIG. 35 is a schematic of sensitivity determination by variance and ordering in accordance with an embodiment of the present invention;
fig. 36 is a schematic diagram of an electroencephalogram signal processing system for cognitive rehabilitation training according to another embodiment of the present invention.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
All technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The invention designs and realizes a VR-EEG scene memory cognitive rehabilitation training system and method by combining VR (Virtual Reality) with EEG (electroencephalogram) analysis. A panoramic video player is designed at the VR end: first, panoramic videos of everyday life are collected, clipped and classified so that the person to be trained can watch them at the VR end; second, EEG data of the person to be trained are collected while the panoramic videos are watched; then, after the EEG data are obtained, the panoramic videos that interest the person to be trained can be identified by an EEG analysis algorithm; finally, targeted training can be carried out with the videos that interest the person to be trained, so that the AD person to be trained receives targeted training that meets personalized requirements.
Fig. 1 is a flowchart of an electroencephalogram signal processing method for cognitive rehabilitation training according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
as shown in fig. 1, in step S10, acquiring a plurality of electroencephalogram signals of a person to be trained performing one-time test under panoramic videos of a plurality of scenes, wherein the plurality of electroencephalogram signals respectively include a mark of a scene type;
as shown in fig. 1, in step S20, preprocessing and feature extraction are performed on the plurality of electroencephalogram signals, respectively, to obtain electroencephalogram features;
as shown in fig. 1, in step S30, the trend of the change of the electroencephalogram features is analyzed to obtain a trend distribution map of the sensitivity of the person to be trained to various scenes;
as shown in fig. 1, in step S40, at least one scene is determined as a training scene for cognitive rehabilitation training according to the trend distribution map.
Based on the above, the invention analyzes the electroencephalogram features obtained after preprocessing and feature extraction, and then determines at least one of a plurality of scenes as a training scene for cognitive rehabilitation training, thereby delivering more effective stimulation to the person to be trained and providing personalized rehabilitation training for different persons to be trained.
The following takes a panoramic video in a VR scene as an example, and specifically introduces each step of the method shown in fig. 1:
in step S10, a plurality of electroencephalogram signals, which are tested by the person to be trained once under the panoramic video of a plurality of scenes, are obtained, and each of the plurality of electroencephalogram signals contains a mark of a scene type.
In one embodiment of the invention, the plurality of scenes comprise six scenes, namely ocean, sky, natural scene, city, lovely pet and home, and the scene is a VR scene. Correspondingly, the panoramic video is a 360-degree VR panoramic video recorded by panoramic equipment and is displayed to the trainee through VR head display equipment.
Before a VR panoramic video is produced, the production requirements of the VR scenes are determined first, i.e. which types of VR video need to be made. The VR panoramic videos designed for the VR-EEG scene memory cognitive rehabilitation training system comprise six categories of video: ocean, sky, natural scenery, city, lovely pets and home. Each category may contain several short videos, each lasting about 30 s.
Producing the VR panoramic video requires live-action shooting: a production team records video or panoramas at real locations and then exports the footage as a panoramic video. In the design of the panoramic video player, because the VR video is a 360-degree panoramic video recorded by a panoramic device, a simple video playing component is not sufficient. The material has to be set as an inner-surface material and then mirror-flipped, because the video appears reversed when viewed from the inside of the sphere, and the material properties of the sphere are modified with a shader. The video playing component is then attached to the sphere so that the video covers the whole sphere material, realizing panoramic video playback. Fig. 2 is a schematic diagram of the production process of attaching the video playing component to the sphere in the embodiment of the invention.
After production, all videos are placed in a List&lt;Video&gt; to control the playback sequence. The sphere is enlarged ten times to reduce the influence of real-world relative displacement on the picture. Finally, the video playing component in the VR head-mounted display is placed at the center of the sphere, the headset is constrained to rotation only, and the video timestamps are recorded and stored on a local disk.
During VR playback the six types of videos are played: two short videos are selected from the scene types, ten videos are played in total, each lasting about 30 s, for a total duration of about 5 minutes. One playback yields the electroencephalogram signals of one test; the played videos are panoramic videos covering multiple scenes, and each panoramic video carries a mark of its scene type, e.g. the categories ocean, sky, natural scenery, city, lovely pets and home are denoted by the letters A-F respectively.
The EEG signal acquisition method is as follows:
(1) firstly, wearing an electroencephalogram cap and a VR head display for a person to be trained;
(2) EEG signal data are collected while the VR panoramic video is played; the EEG acquisition time is kept strictly consistent with the panoramic video playback time, and a timestamp is recorded when the panoramic video starts playing;
(3) playback exits automatically after the panoramic video finishes, and EEG acquisition is then stopped;
(4) the corresponding EEG segments are cut out according to the start and stop times of the panoramic video, completing the EEG acquisition.
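For illustration only, the timestamp-based interception in step (4) might look like the following sketch (NumPy); the variable names and the video_log structure are assumptions, not part of the patent.

```python
import numpy as np

def extract_video_segments(eeg, sampling_rate, video_log):
    """eeg: (channels, samples) continuous recording; video_log: list of dicts with
    'scene' (e.g. 'A'..'F'), 'start_s' and 'stop_s' in seconds (hypothetical fields)."""
    segments = []
    for entry in video_log:
        start = int(entry["start_s"] * sampling_rate)   # convert timestamps to sample indices
        stop = int(entry["stop_s"] * sampling_rate)
        segments.append({"scene": entry["scene"], "data": eeg[:, start:stop]})
    return segments
```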
In step S20, preprocessing and feature extraction are performed on the plurality of electroencephalogram signals, respectively, to obtain electroencephalogram features.
In an embodiment of the present invention, the VR-EEG scene memory cognitive rehabilitation training system needs to preprocess the EEG data before analyzing them, in order to eliminate noise and interference. The preprocessing uses the EEGLAB toolbox in MATLAB. EEGLAB is a MATLAB toolbox mainly used to process EEG (electroencephalogram), MEG (magnetoencephalogram) and other continuous and event-related electrophysiological signals such as ECG (electrocardiogram). EEGLAB can perform a series of analyses on such signals, including Independent Component Analysis (ICA), time-frequency analysis (TFA), artifact removal, event-related statistical analysis and several modes of data visualization.
In one embodiment of the present invention, the preprocessing in S20 includes sampling, setting references, detrending, baseline removal, filtering, segmentation, baseline correction, ICA and removal of ocular artifacts; the plurality of electroencephalogram signals are preprocessed to obtain electroencephalogram (EEG) data in .mat format.
FIG. 3 is an unprocessed EEG graph in accordance with an embodiment of the present invention. During EEG preprocessing, operations such as down-sampling, setting the reference, detrending, baseline removal, filtering, ICA, ADJUST, and blink removal are performed. The sampling rate is first set to 500 Hz in order to compress the data volume; 500 Hz already meets the precision requirement of the system. Two reference electrodes are then selected. Filtering is performed next, divided into high-pass and low-pass filtering. High-pass filtering means that high-frequency signals pass normally while low-frequency signals below a set threshold are blocked and attenuated; low-pass filtering means that low-frequency signals pass normally while high-frequency signals above a set threshold are blocked and attenuated. Generally, if time-frequency analysis is needed later, a wider filtering range of 0.1-100 Hz can be chosen; if only conventional ERP analysis is performed, about 1-30 Hz is sufficient. In addition, when 0.1-100 Hz filtering is used, 50 Hz notch filtering may be applied to eliminate mains interference. FIG. 4 is an EEG graph after filtering and 50 Hz notch filtering in accordance with an embodiment of the present invention.
The segmentation and baseline-correction steps follow, and may be performed either before or after removal of the ocular artifacts. Performing them after ocular-artifact removal may be preferable, because ICA runs better on continuous data, although the larger amount of data makes it slower. However, if the experimental design involves sound or body movement, many artifacts and data irregularities may arise; in that case segmentation can be performed first, with the segments made as long as possible. The ICA step is then carried out, mainly to remove artifacts. ICA typically requires about 300-400 iterations, and segmenting the data appropriately can improve the processing speed and reduce the computation time. FIG. 5 is a schematic representation of the 30 independent components obtained after ICA in one embodiment of the present invention.
Then comes the step of removing the electro-oculogram, which includes removing the vertical electro-oculogram and removing the horizontal electro-oculogram; the vertical electro-oculogram may be removed first and the horizontal electro-oculogram afterwards. Fig. 6 is an EEG graph after removal of the vertical electro-oculogram according to an embodiment of the present invention, and fig. 7 is an EEG graph after removal of the horizontal electro-oculogram according to an embodiment of the present invention.
The EEG data pre-processed by step S20 is saved in mat format for subsequent EEG feature extraction.
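The patent performs this preprocessing with the EEGLAB toolbox in MATLAB. As a rough illustration of the same pipeline, the following sketch uses MNE-Python instead (an assumption on my part, not the patent's implementation); the reference-electrode names and the choice of 30 ICA components (cf. fig. 5) are likewise assumed.

```python
import mne
from mne.preprocessing import ICA
from scipy.io import savemat

def preprocess(raw, ocular_components):            # raw: an mne.io.Raw object
    raw = raw.copy().resample(500)                  # down-sample to 500 Hz
    raw.set_eeg_reference(["A1", "A2"])             # two reference electrodes (assumed names)
    raw.filter(0.1, 100.0)                          # band-pass 0.1-100 Hz
    raw.notch_filter(50.0)                          # remove 50 Hz mains interference
    ica = ICA(n_components=30, random_state=0)      # 30 independent components
    ica.fit(raw)
    ica.exclude = ocular_components                 # indices of vertical/horizontal EOG components
    ica.apply(raw)
    savemat("subject_preprocessed.mat", {"eeg": raw.get_data()})   # save in .mat format
    return raw
```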
In an embodiment of the present invention, after the EEG data have been preprocessed, their signal features need to be extracted. The feature extraction in step S20 adopts any one of a power spectral density (PSD) algorithm, a differential entropy (DE) algorithm, a discrete wavelet transform (DWT) algorithm or a common spatial pattern (CSP) algorithm, and the electroencephalogram features obtained in step S20 are PSD features or DE features. The feature extraction methods are as follows:
(1) Power spectral density (PSD) algorithm:
A signal is usually represented in the form of a wave, such as a sound wave or an electromagnetic wave; when the spectral density of the wave is multiplied by a suitable coefficient, the power carried by the wave per unit frequency is obtained. The PSD of a signal exists if and only if the signal is a wide-sense stationary process, in which case the spectral density of f(t) and the autocorrelation of f(t) form a Fourier-transform pair, so a Fourier transform is typically used for PSD estimation.
One of the results of Fourier analysis is Parseval's theorem, which states that the sum of the squares of a function equals the sum of the squares of its Fourier transform:
$\int_{-\infty}^{\infty} |f(t)|^{2}\, dt = \int_{-\infty}^{\infty} |\hat{f}(\nu)|^{2}\, d\nu$
Time-domain electroencephalogram features mainly include statistical quantities such as the amplitude, frequency, variance and mean of the electrical signal. Such features generally perform poorly on low signal-to-noise-ratio EEG, so time-domain features alone cannot serve as the final classification features. The basic approach of frequency-domain feature analysis is to transform the time-series EEG into the frequency domain via the Fourier transform; the power spectral density is the most commonly used frequency-domain feature.
The PSD reflects the relation between the power and the frequency of the electroencephalogram signal and can be used to observe the change of the signal in each frequency band. Given that the autocorrelation function of a discrete random signal x(n) is r(k), the power spectral density of the signal can be calculated from the discrete Fourier transform as follows:
$P(\omega) = \sum_{k=-\infty}^{\infty} r(k)\, e^{-j\omega k}$
where $r(k) = E\left[x(n)\,x^{*}(n+k)\right]$, E denotes the mathematical expectation, and * denotes the complex conjugate.
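A minimal sketch of PSD feature extraction follows, using Welch's method from SciPy as one common way to realize the Fourier-transform-based PSD estimate described above; the EEG band limits are conventional values assumed for illustration.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def psd_features(sample, fs=500):
    """sample: (channels, samples) EEG window; returns (channels, bands) band powers."""
    freqs, pxx = welch(sample, fs=fs, nperseg=fs)            # 1-second Welch segments
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(pxx[:, idx].mean(axis=1))               # mean power in each band
    return np.stack(feats, axis=1)
```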
(2) Differential Entropy (DE) algorithm:
The differential entropy DE is a relatively new electroencephalogram feature, defined as follows:
$h(X) = -\int_{X} f(x)\,\ln f(x)\, dx$
where X represents a time-series signal and f(x) is the probability density function of X. Within a given frequency band after band-pass filtering, the signal approximately follows a Gaussian distribution, so the differential entropy can be obtained in closed form from the signal variance: for $X \sim \mathcal{N}(\mu, \sigma^{2})$, $h(X) = \tfrac{1}{2}\ln\!\left(2\pi e \sigma^{2}\right)$.
To simplify the calculation of the variance, the differential entropy can also be computed from the short-time Fourier transform (STFT). For a discrete signal sequence x[n], its STFT is expressed as:
$X(m, \omega_{k}) = \sum_{n=-\infty}^{\infty} x[n]\,\omega[n-m]\, e^{-j\omega_{k} n}$
where ω[n] is a window function, X(m, ω_k) is the Fourier transform of x[n]ω[n−m], and $\omega_{k} = 2\pi k / N$, k = 0, 1, ..., N−1, with N the total number of sampling points.
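The following sketch computes DE features under the Gaussian assumption stated above: each channel is band-pass filtered and the closed-form entropy of a Gaussian variable is evaluated from its variance. The band limits and filter order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def de_features(sample, fs=500, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """sample: (channels, samples) EEG window; returns (channels, bands) DE values."""
    feats = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, sample, axis=-1)              # band-pass each channel
        var = filtered.var(axis=-1)                             # per-channel variance
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))      # closed-form Gaussian DE
    return np.stack(feats, axis=1)
```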
(3) Common Spatial Pattern (CSP) algorithm:
CSP is a spatial-filtering feature extraction algorithm for two-class tasks, which can extract the spatial distribution components of each class from multi-channel brain-computer interface data. The basic principle of the common spatial pattern algorithm is to use matrix diagonalization to find a set of optimal spatial filters for projection, so that the variance difference between the two classes of signals is maximized and feature vectors with high discriminability are obtained.
Suppose X_1 and X_2 are the multi-channel evoked-response spatio-temporal signal matrices under the two imagined-movement tasks, each of dimension N × T, where N is the number of EEG channels and T is the number of samples collected per channel. To compute the covariance matrices, it is assumed that N < T. Under the two motor-imagery tasks, the EEG signal is usually described by a composite-source mathematical model, and the influence of noise is usually ignored for convenience of calculation. X_1 and X_2 can then be written as:
$X_{1} = C_{1} S_{1} + C_{M} S_{M}, \qquad X_{2} = C_{2} S_{2} + C_{M} S_{M}$
where S_1 and S_2 represent the source signals of the two classes of tasks, which may be assumed to be linearly independent of each other, and S_M represents the source signal common to both tasks. Assuming S_1 consists of m1 sources and S_2 of m2 sources, C_1 and C_2 are composed of the m1 and m2 common spatial patterns associated with S_1 and S_2; each spatial pattern is an N × 1 vector representing the distribution weights, over the N leads, of the signal caused by a single source. C_M is the common spatial pattern corresponding to S_M. The goal of the CSP algorithm is to design spatial filters F_1 and F_2 so as to obtain the spatial factor W.
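A compact sketch of the two-class CSP idea follows: spatial filters are obtained from a generalized eigendecomposition of the two class-covariance matrices, so that the projected variance is large for one class and small for the other. The trial layout and the number of retained filters are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_1, trials_2, n_filters=6):
    """trials_k: (n_trials, N channels, T samples) for class k; returns (n_filters, N)."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]    # normalized covariance per trial
        return np.mean(covs, axis=0)
    c1, c2 = mean_cov(trials_1), mean_cov(trials_2)
    vals, vecs = eigh(c1, c1 + c2)                               # generalized eigenvalue problem
    order = np.argsort(vals)                                     # smallest / largest eigenvalues
    picks = np.concatenate([order[: n_filters // 2], order[-(n_filters // 2):]])
    return vecs[:, picks].T                                      # rows are spatial filters
```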
(4) Wavelet Transform (DWT) algorithm:
The discretized wavelet transform is
$WT_{x}(j,k) = a_{0}^{-j/2}\int_{-\infty}^{\infty} x(t)\,\psi^{*}\!\left(a_{0}^{-j}t - k b_{0}\right) dt$
abbreviated as $WT_{x}(j,k)$, with
$c_{j,k} = \int_{-\infty}^{\infty} x(t)\,\psi_{j,k}^{*}(t)\, dt$
where j = 0, 1, 2, ..., k ∈ Z, and the $c_{j,k}$ are the discrete wavelet transform coefficients, simply called wavelet coefficients.
Taking $a_{0} = 2$ and $b_{0} = 1$, $a_{0}^{j}$ takes the values $2^{0}, 2^{1}, \ldots, 2^{j}$. The basis function of the continuous wavelet transform is then denoted $\psi_{j,k}(t)$, with
$\psi_{j,k}(t) = 2^{-j/2}\,\psi\!\left(2^{-j}t - k\right)$
Accordingly, the discrete wavelet transform may be represented as
$WT_{x}(j,k) = 2^{-j/2}\int_{-\infty}^{\infty} x(t)\,\psi^{*}\!\left(2^{-j}t - k\right) dt$
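A short DWT feature-extraction sketch using PyWavelets is given below, following the dyadic discretization (a0 = 2, b0 = 1) above; the 'db4' mother wavelet and five decomposition levels are illustrative choices not specified in the patent.

```python
import numpy as np
import pywt

def dwt_features(sample, wavelet="db4", level=5):
    """sample: (channels, samples); returns per-channel energy of each coefficient band."""
    feats = []
    for channel in sample:
        coeffs = pywt.wavedec(channel, wavelet, level=level)    # [cA_5, cD_5, ..., cD_1]
        feats.append([np.sum(c ** 2) for c in coeffs])          # wavelet-coefficient energies
    return np.asarray(feats)
```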
In step S30, the trend of the change of the electroencephalogram features is analyzed to obtain a trend distribution map of the sensitivity of the person to be trained to various scenes.
Fig. 8 is a flowchart of step S30 in fig. 1 according to an embodiment of the present invention, and as shown in fig. 8, step S30 in this embodiment includes:
s31, analyzing the change trend of the electroencephalogram characteristics to obtain an electroencephalogram characteristic distribution map;
and S32, performing feature significance analysis on the mean value and the variance of the electroencephalogram feature distribution map to obtain a trend distribution map of the electroencephalogram features. The trend profile may be presented in the form of a line graph, i.e. a mean line graph and a variance line graph, respectively, and the fluctuation intensity of the line graph indicates the sensitivity.
In one embodiment of the invention, the electroencephalogram characteristic can be a PSD characteristic or a DE characteristic, taking the PSD characteristic as an example, firstly, the change trend of the PSD characteristic is analyzed to obtain a PSD characteristic distribution map; then, feature significance analysis is carried out on the mean value and the variance of the PSD feature distribution diagram to obtain a trend distribution diagram of the PSD feature, namely a mean value line diagram and a variance line diagram of the PSD feature.
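As a sketch of how the mean and variance line graphs of S31/S32 could be assembled from the per-sample feature values, assuming each sample is reduced to a scalar feature and labelled with the video segment it came from (both assumptions for illustration):

```python
import numpy as np

def trend_profiles(features, segment_ids):
    """features: (n_samples,) scalar PSD or DE values (e.g. averaged over channels/bands);
    segment_ids: (n_samples,) integer id (0..9) of the video segment each sample came from."""
    segments = np.unique(segment_ids)
    means = np.array([features[segment_ids == s].mean() for s in segments])
    variances = np.array([features[segment_ids == s].var() for s in segments])
    return means, variances        # plotted as the mean line graph and the variance line graph
```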
In step S40, at least one scene is determined as a training scene for cognitive rehabilitation training according to the trend profile.
Fig. 9 is a flowchart of step S40 in fig. 1 according to an embodiment of the present invention, and as shown in fig. 9, step S40 in this embodiment includes:
and step S41, determining a scene set with sensitivity meeting a first preset requirement as a first result according to the mean trend distribution diagram of the electroencephalogram characteristics, and determining a scene set with sensitivity meeting a second preset requirement as a second result according to the variance trend distribution diagram of the electroencephalogram characteristics.
Taking the PSD features as an example, in this step the sensitivity is determined from the mean line graph of the PSD features. The specific procedure is as follows: first, a reference value P1 is determined; this reference value may be preset or determined from the PSD features of one test's acquisition results. Then, when the difference between the mean values of two adjacent acquisition points (each acquisition point corresponds to one panoramic video segment) is greater than N1 times P1, the first preset requirement is met; the multiple N1 is chosen empirically, for example from 1.8 to 2.2 as needed. This yields the first result. Similarly, a second result is obtained in the same way from the variance line graph of the PSD features. The first and second results are quantitative descriptions of the sensitivity from the perspective of the mean and the variance respectively, and are used to screen out scenes with strong sensitivity.
Step S42, the first result, the second result or the union of the first result and the second result is used as a training scene for cognitive rehabilitation training, and the training scene for cognitive rehabilitation training is a non-empty set.
Still taking the PSD feature as an example, in this step, a non-empty training scene is determined as a training scene for cognitive rehabilitation training based on a scene screened by the mean line graph or the variance line graph.
If the DE feature is taken as an example, the step is also determined in the same manner as a training scenario for cognitive rehabilitation training, and is not described here again.
By using the PSD characteristic or the DE characteristic as the trend distribution map of the electroencephalogram characteristic in the step S40, a scene which is more effective for the stimulation of the person to be trained, namely a scene with strong sensitivity, is selected, and then the person to be trained can be stimulated pertinently and effectively.
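The screening and union logic of S41/S42 might be sketched as follows; the reference value P1 and the empirical multiple N1 follow the text above, while attributing each above-threshold jump to the later of the two adjacent acquisition points is a simplifying assumption of this sketch.

```python
import numpy as np

def sensitive_scenes(line_values, scene_labels, p1, n1=2.0):
    """line_values: mean (or variance) line graph over acquisition points;
    scene_labels: scene label ('A'..'F') of each acquisition point."""
    jumps = np.abs(np.diff(line_values))                 # difference of adjacent acquisition points
    hits = {scene_labels[i + 1] for i in np.where(jumps > n1 * p1)[0]}
    return hits

def training_scenes(mean_line, var_line, scene_labels, p1_mean, p1_var):
    first = sensitive_scenes(mean_line, scene_labels, p1_mean)   # first result (from the mean)
    second = sensitive_scenes(var_line, scene_labels, p1_var)    # second result (from the variance)
    return first | second                                        # S42: the union must be non-empty
```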
In an embodiment of the present invention, in addition to determining a training scenario according to a test result, the scheme of the present invention may obtain a more accurate result through multiple training. For example, each scene provides at least two different segments of panoramic video, the method further comprising:
and S50, repeating the steps S10-S30 to carry out a plurality of tests, and determining at least one scene as a training scene for cognitive rehabilitation training according to a plurality of trend distribution maps, wherein the playing sequence of the panoramic video is sequential playing or random playing.
Sequential playback and random playback of the VR panoramic videos are implemented in the VR-EEG scene memory cognitive rehabilitation training system. In sequential playback, two videos are selected from each type of video in order, spliced together in order, and played sequentially in the VR glasses; in random playback, a random function selects two short videos from each type of video, splices them together randomly, and plays them in random order in the VR glasses.
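A small sketch of the sequential and random playback orders described above (the playlist data structure is an assumption for illustration):

```python
import random

def build_playlist(videos_by_scene, mode="sequential"):
    """videos_by_scene: dict mapping scene label ('A'..'F') to a list of short-video paths."""
    chosen = []
    for scene in sorted(videos_by_scene):
        clips = videos_by_scene[scene]
        picks = clips[:2] if mode == "sequential" else random.sample(clips, 2)
        chosen.extend((scene, clip) for clip in picks)    # two short videos per scene type
    if mode == "random":
        random.shuffle(chosen)                            # random splicing / random playback order
    return chosen
```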
According to the method, after the EEG features have been extracted they need to be analyzed to obtain the sensitivity of the person to be trained to the panoramic videos of the different scenes and to display it visually; the video type that has a large influence on the EEG of the person to be trained is then determined from the analysis results, and that stimulation is repeatedly reinforced.
The following steps S30, S40 and S50 are described with reference to specific embodiments for different trainees:
First, the electroencephalogram signals acquired in a single experiment are divided into individual samples using a sliding window with a window length of 2 s and an overlap of 1 s, giving 58 samples per stimulation segment (when a stimulation segment yields more than 58 samples, only the first 58 are kept so that every stimulation segment contributes the same number of samples), and thus 580 samples per experiment, with 10 video segments analyzed per experiment. After the sample data are obtained, PSD estimation and DE are used for feature extraction, the original feature distributions are compared, and feature significance analysis is performed using statistical variables such as the feature mean and variance.
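The sample construction just described might be sketched as follows, assuming a 500 Hz sampling rate and EEG arrays of shape (channels, samples):

```python
import numpy as np

def sliding_windows(segment, fs=500, win_s=2, hop_s=1, max_windows=58):
    """segment: (channels, samples) EEG recorded during one stimulation segment."""
    win, hop = win_s * fs, hop_s * fs                   # 2-s window, 1-s overlap (1-s hop)
    windows = []
    for start in range(0, segment.shape[1] - win + 1, hop):
        windows.append(segment[:, start:start + win])
        if len(windows) == max_windows:                 # keep only the first 58 windows
            break
    return np.stack(windows)                            # (n_windows, channels, 2*fs)
```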
(1) PSD feature analysis
For the first person to be trained: fig. 10 is a PSD feature distribution diagram of first-time experimental data of a first person to be trained in the embodiment of the present invention, fig. 11 is a PSD feature distribution diagram of second-time experimental data of the first person to be trained in the embodiment of the present invention, and fig. 12 is a PSD feature distribution diagram of third-time experimental data of the first person to be trained in the embodiment of the present invention, where the abscissa is the number of 10 video segments and the ordinate is a PSD feature value. As shown in fig. 10 to 12, the mean and variance trends of the PSD features were substantially the same in three experiments.
The corresponding mean line graph and variance line graph are obtained after calculation from the PSD feature values. Fig. 13 is a line graph of the mean of the PSD features over the three experiments of the first person to be trained in the embodiment of the invention, and fig. 14 is a line graph of the variance of the PSD features over the three experiments of the first person to be trained. As can be seen from fig. 13, in the first experiment the mean of the PSD feature corresponding to segment 10 took the maximum value, and the two videos belong to the same category of stimulation material; that is, the first person to be trained has a large physiological response to this type of video stimulation material and a weaker response to segments 2, 3, 4 and 6. Combining fig. 13 and fig. 14, the mean and variance of the PSD feature corresponding to segment 5 in the third experiment were not as significant as in the first and second experiments, and videos 1, 7 and 10 were reflected relatively significantly.
For the second person to be trained: fig. 15 is the PSD feature distribution diagram of the first experiment of the second person to be trained in the embodiment of the invention, fig. 16 that of the second experiment and fig. 17 that of the third experiment; the abscissa is the number of the 10 video segments and the ordinate is the PSD feature value. Fig. 18 is a line graph of the mean of the PSD features over the three experiments of the second person to be trained, and fig. 19 a line graph of the variance. Combining the three experiments, the PSD feature distribution shows that segment 6 is highest in the first experiment and segment 10 is highest in the second experiment, so it can be preliminarily judged that the original electroencephalogram signals of segment 6 in the first experiment and segment 10 in the second experiment are problematic. It can also be seen from the figures that the physiological response of the second person to be trained is not very significant for segments 1, 2, 3, 4 and 8, whereas the response to segments 5, 7 and 9 is more significant. This result is basically the same as for the first person to be trained, but with certain differences, which shows that different persons to be trained have different sensitivities to the same scene. It is therefore necessary to carry out experiments for each person to be trained and find the stimulation material that causes a significant change in that person's physiological signals, i.e. to find the training scenes for cognitive rehabilitation training, so as to provide targeted training and stimulation.
(2) DE signature analysis
For the first person to be trained: FIG. 20 is a DE distribution graph showing the first test data of the first person to be trained in the embodiment of the present invention, FIG. 21 is a DE distribution graph showing the second test data of the first person to be trained in the embodiment of the present invention, and FIG. 22 is a DE distribution graph showing the third test data of the first person to be trained in the embodiment of the present invention. Fig. 23 is a line graph of the mean of the characteristics of the DE three times tested for the first person to be trained in the embodiment of the present invention, and fig. 24 is a line graph of the variance of the characteristics of the DE three times tested for the first person to be trained in the embodiment of the present invention. As can be seen from fig. 20 to 24, the mean values of the DE features corresponding to the segments 10, 5, 10, respectively, have a maximum value, and the trainee has a weak response to the segments 2, 3, 4, 6.
For the second person to be trained: FIG. 25 is a distribution diagram of DE features for the first time experiment data of the second person to be trained in the embodiment of the present invention, FIG. 26 is a distribution diagram of DE features for the second time experiment data of the second person to be trained in the embodiment of the present invention, and FIG. 27 is a distribution diagram of DE features for the third time experiment data of the second person to be trained in the embodiment of the present invention. Fig. 28 is a line graph of the mean of the characteristics of DE obtained by three experiments on the second person to be trained in the embodiment of the present invention, and fig. 29 is a line graph of the variance of the characteristics of DE obtained by three experiments on the second person to be trained in the embodiment of the present invention. It can be seen from the combination of fig. 25 to 29 that the person to be trained is not very sensitive to segments 1,2, 3, 4, 8, but responds significantly to segments 5, 7, 9.
(3) Direct statistical feature analysis of raw brain electrical signals
Fig. 30 is a line graph of the mean of the original electroencephalogram signal over the three experiments of the first person to be trained in the embodiment of the invention, and fig. 31 a line graph of the variance. As can be seen from figs. 30 and 31, when the original EEG signals of the two persons to be trained are analyzed directly, the standard deviation of the first person's first and second experiments is larger under segments 5 and 10, which is consistent with the preceding PSD and DE feature analyses, but the standard deviation of the third experiment does not show this phenomenon. In addition, the mean distributions of the three experiments show no particular regularity.
FIG. 32 is a line graph of the mean of the original EEG signal over the three experiments of the second person to be trained, and FIG. 33 a line graph of the variance. As can be seen from FIGS. 32 and 33, the second person to be trained responded better to segments 5, 6 and 7 in the first experiment, to segments 5, 7 and 10 and to segments 5, 7 and 9 in the second and third experiments respectively, and also to segments 2 and 4 in the third experiment.
Compared with the direct statistical analysis of figs. 30 to 33, the PSD feature analysis of figs. 10 to 19 and the DE analysis of figs. 20 to 29 can provide a better effect for cognitive-memory rehabilitation training, because the scenes determined by the PSD or DE feature analysis are more stimulating to the patient.
Feature significance analysis is carried out on the original feature distributions and on statistical variables such as the mean and variance of the PSD-estimation and differential-entropy (DE) features; the sensitivity of the person to be trained to each video is judged from the analysis results, the sensitivities are ranked, and the analysis results are displayed on the interface. FIG. 34 is a schematic diagram of sensitivity determined from the mean and its ranking in accordance with an embodiment of the present invention; FIG. 35 is a schematic diagram of sensitivity determined from the variance and its ranking in accordance with an embodiment of the present invention.
In summary, according to the electroencephalogram signal processing method for cognitive rehabilitation training provided by the invention, the electroencephalogram features obtained after preprocessing and feature extraction are analyzed to obtain a trend distribution map of the sensitivity of the person to be trained to the various scenes, and at least one of those scenes is then determined as a training scene for cognitive rehabilitation training, so that more effective stimulation is delivered to the person to be trained and personalized rehabilitation training can be provided for different persons to be trained. Each scene provides at least two different panoramic videos, and a more accurate result is obtained through multiple tests, providing accurate scene videos for the training.
Fig. 36 is a schematic diagram of an electroencephalogram signal processing system for cognitive rehabilitation training according to another embodiment of the present invention, and as shown in fig. 36, the system 100 includes: a panoramic video player 110, a controller 120 and a processor 130, wherein the panoramic video player 110 is used for displaying panoramic videos of various scenes to a person to be trained; the controller 120 is used for controlling the playing of the panoramic video player and the synchronous acquisition of the electroencephalogram signals; the processor 130 is configured to process the electroencephalogram signals synchronously acquired during playing of the panoramic video, and determine a training scene for the person to be trained.
The processor 130 comprises a signal acquisition module 131, a feature extraction module 132, a trend analysis module 133 and a scene determination module 134, wherein the signal acquisition module 131 is used for acquiring a plurality of electroencephalogram signals of a trainee performing one-time test under panoramic videos of various scenes, and the plurality of electroencephalogram signals respectively contain marks of scene types; the feature extraction module 132 is configured to perform preprocessing and feature extraction on the multiple electroencephalogram signals respectively to obtain electroencephalogram features; the trend analysis module 133 is configured to analyze a variation trend of the electroencephalogram characteristics to obtain a trend distribution map of sensitivities of the person to be trained to multiple scenes; the scene determining module 134 is configured to determine at least one scene as a training scene for cognitive rehabilitation training according to the trend distribution map.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. An electroencephalogram signal processing method for cognitive rehabilitation training, characterized by comprising:
s10, acquiring a plurality of electroencephalogram signals of a trainee performing one-time test under panoramic videos of various scenes, wherein the electroencephalogram signals respectively contain marks of scene types;
s20, respectively preprocessing and extracting features of the electroencephalogram signals to obtain electroencephalogram features;
s30, analyzing the change trend of the electroencephalogram characteristics to obtain a trend distribution map of the sensitivity of the person to be trained to various scenes;
and S40, determining at least one scene as a training scene for cognitive rehabilitation training according to the trend distribution diagram.
2. The method for processing brain electrical signals for cognitive rehabilitation training according to claim 1, wherein the plurality of scenes include six kinds of ocean, sky, natural scene, city, lovely pet and home, and the scenes are VR scenes.
3. The method of claim 2, wherein the panoramic video is a 360-degree VR panoramic video recorded by a panoramic device and presented to the person to be trained through a VR headset.
4. The electroencephalogram signal processing method for cognitive rehabilitation training according to claim 1, wherein the preprocessing in S20 includes sampling, setting references, detrending, baseline removal, filtering, segmentation, baseline correction, ICA and removal of ocular artifacts, and the plurality of electroencephalogram signals are preprocessed to obtain electroencephalogram (EEG) data in .mat format.
5. The electroencephalogram signal processing method for cognitive rehabilitation training according to claim 4, wherein the feature extraction in S20 employs any one of a power spectral density (PSD) algorithm, a differential entropy (DE) algorithm, a discrete wavelet transform (DWT) algorithm and a common spatial pattern (CSP) algorithm.
6. The brain electrical signal processing method for cognitive rehabilitation training according to claim 1, wherein the brain electrical feature obtained in S20 is a PSD feature or a DE feature.
7. The electroencephalogram signal processing method for cognitive rehabilitation training according to claim 6, wherein S30 includes:
analyzing the variation trend of the electroencephalogram features to obtain an electroencephalogram feature distribution map;
and performing feature significance analysis on the mean and the variance of the electroencephalogram feature distribution map to obtain a trend distribution map of the electroencephalogram features.
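One way to realize the mean/variance trend analysis and the feature significance analysis of claim 7 is sketched below; the use of a one-way ANOVA across scenes is an assumption, since the claim does not fix a particular significance test.

```python
import numpy as np
from scipy.stats import f_oneway

def build_trend_map(features_by_scene):
    """Sketch of S30: `features_by_scene` maps a scene name to an array of
    shape (n_epochs, n_features).  Returns the per-scene mean and variance
    trends plus a per-feature p-value for the difference between scenes."""
    means = {scene: feats.mean(axis=0) for scene, feats in features_by_scene.items()}
    variances = {scene: feats.var(axis=0) for scene, feats in features_by_scene.items()}
    groups = list(features_by_scene.values())
    # One-way ANOVA per feature across all scenes (one possible significance analysis).
    p_values = np.array([f_oneway(*[g[:, i] for g in groups]).pvalue
                         for i in range(groups[0].shape[1])])
    return means, variances, p_values
```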
8. The electroencephalogram signal processing method for cognitive rehabilitation training according to claim 7, wherein S40 includes:
S41, determining, according to the mean trend distribution map of the electroencephalogram features, a scene set whose sensitivity meets a first preset requirement as a first result, and determining, according to the variance trend distribution map of the electroencephalogram features, a scene set whose sensitivity meets a second preset requirement as a second result;
S42, taking the first result, the second result, or the union of the first result and the second result as the training scenes for cognitive rehabilitation training, wherein the set of training scenes for cognitive rehabilitation training is non-empty.
9. The electroencephalogram signal processing method for cognitive rehabilitation training according to any one of claims 1-8, wherein at least two different panoramic video segments are provided for each scene, the method further comprising:
S50, repeating steps S10-S30 to carry out a plurality of tests, and determining at least one scene as a training scene for cognitive rehabilitation training according to a plurality of trend distribution maps;
wherein the panoramic videos are played in sequential order or in random order.
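Claims 8 and 9 describe how the training scenes are finally chosen, within one test and across repeated tests. A minimal sketch follows, assuming each scene has been reduced to scalar mean- and variance-based sensitivity scores and that repeated tests are combined by simple voting; neither assumption is dictated by the claims.

```python
from collections import Counter

def select_training_scenes(mean_scores, var_scores, mean_threshold, var_threshold):
    """S41/S42 sketch: union of the scenes meeting the two preset requirements,
    kept non-empty by falling back to the scene with the highest mean score."""
    first = {s for s, v in mean_scores.items() if v >= mean_threshold}
    second = {s for s, v in var_scores.items() if v >= var_threshold}
    chosen = first | second
    return chosen if chosen else {max(mean_scores, key=mean_scores.get)}

def scenes_over_tests(per_test_selections, min_votes=2):
    """S50 sketch: keep the scenes chosen in at least `min_votes` of the repeated tests."""
    votes = Counter(scene for selection in per_test_selections for scene in selection)
    return {scene for scene, count in votes.items() if count >= min_votes}
```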
10. An electroencephalogram signal processing system for cognitive rehabilitation training, comprising:
the panoramic video player is used for displaying panoramic videos of a plurality of scenes to a person to be trained;
the controller is used for controlling the playing of the panoramic video player and the synchronous acquisition of the electroencephalogram signals;
the processor is used for processing the electroencephalogram signals synchronously acquired in the panoramic video playing and determining a training scene for a person to be trained;
the signal acquisition module is used for acquiring a plurality of electroencephalogram signals of the person to be trained during a single test under the panoramic videos of the plurality of scenes, wherein each electroencephalogram signal carries a mark of its scene type;
the feature extraction module is used for respectively preprocessing the plurality of electroencephalogram signals and extracting features to obtain electroencephalogram features;
the trend analysis module is used for analyzing the change trend of the electroencephalogram features to obtain a trend distribution map of the sensitivity of the person to be trained to the various scenes;
and the scene determining module is used for determining at least one scene as a training scene for cognitive rehabilitation training according to the trend distribution map.
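The system of claim 10 can be pictured as a thin controller that keeps video playback and EEG acquisition in step and hands the labelled recordings to the processing chain. The skeleton below is only an illustration; the class name, attributes and the idea of injecting the player, acquisition device and processing pipeline as callables are assumptions, not features recited in the claim.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class CognitiveRehabTrainingSystem:
    """Illustrative skeleton of the claimed modules."""
    play_scene: Callable[[str], None]             # panoramic video player
    acquire_eeg: Callable[[str], object]          # signal acquisition module (returns one recording)
    process: Callable[[Dict[str, object]], set]   # feature extraction + trend analysis + scene choice
    scenes: List[str] = field(default_factory=list)

    def run_test(self) -> set:
        # Controller role: start playback and acquisition together for every scene,
        # keeping the scene label attached to each recording.
        recordings: Dict[str, object] = {}
        for scene in self.scenes:
            self.play_scene(scene)
            recordings[scene] = self.acquire_eeg(scene)
        # Processor role: turn the labelled recordings into the training-scene set.
        return self.process(recordings)
```

A caller would construct the object with concrete player, acquisition and processing implementations and invoke run_test() once per test session.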
CN202011438172.9A 2020-12-07 2020-12-07 Electroencephalogram signal processing method and system for cognitive rehabilitation training Pending CN112450949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011438172.9A CN112450949A (en) 2020-12-07 2020-12-07 Electroencephalogram signal processing method and system for cognitive rehabilitation training

Publications (1)

Publication Number Publication Date
CN112450949A true CN112450949A (en) 2021-03-09

Family

ID=74800509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011438172.9A Pending CN112450949A (en) 2020-12-07 2020-12-07 Electroencephalogram signal processing method and system for cognitive rehabilitation training

Country Status (1)

Country Link
CN (1) CN112450949A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103394161A (en) * 2013-07-16 2013-11-20 山西大学 Cerebral alpha wave feedback regulation acupoint magnetic stimulation system
CN106388793A (en) * 2016-09-06 2017-02-15 华南理工大学 Alzheimer's disease adjuvant therapy system based on VR (virtual reality) technology and physiological sign monitoring
CN106983505A (en) * 2017-05-08 2017-07-28 天津医科大学 A kind of neuroelectricity activity dependence analysis method based on comentropy
CN108245154A (en) * 2018-01-24 2018-07-06 福州大学 The method that blink section in brain electricity or eye electricity is accurately determined using rejecting outliers
KR20190093953A (en) * 2018-02-02 2019-08-12 주식회사 비온시이노베이터 Service providing system for cognitive training
CN110415788A (en) * 2018-04-27 2019-11-05 深圳市前海安测信息技术有限公司 System and method based on the auxiliary Alzheimer Disease patient memory training of VR technology
CN109875509A (en) * 2019-02-27 2019-06-14 京东方科技集团股份有限公司 The test macro and method of Alzheimer Disease patient rehabilitation training effect
CN109961018A (en) * 2019-02-27 2019-07-02 易念科技(深圳)有限公司 Electroencephalogramsignal signal analysis method, system and terminal device
US20200294652A1 (en) * 2019-03-13 2020-09-17 Bright Cloud International Corporation Medication Enhancement Systems and Methods for Cognitive Benefit
CN111466931A (en) * 2020-04-24 2020-07-31 云南大学 Emotion recognition method based on EEG and food picture data set

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Guanhua et al., "A Review of EEG Features for Emotion Recognition", SCIENTIA SINICA Informationis *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638263A (en) * 2022-03-15 2022-06-17 华南理工大学 Building space satisfaction evaluation method based on electroencephalogram signals
CN115862810A (en) * 2023-02-24 2023-03-28 深圳市铱硙医疗科技有限公司 VR rehabilitation training method and system with quantitative evaluation function
CN115862810B (en) * 2023-02-24 2023-10-17 深圳市铱硙医疗科技有限公司 VR rehabilitation training method and system with quantitative evaluation function
US11977679B1 (en) 2023-02-24 2024-05-07 Shenzhen Yiwei Medical Technology Co., Ltd VR rehabilitation training method and system with quantitative evaluation function

Similar Documents

Publication Publication Date Title
Mammone et al. Automatic artifact rejection from multichannel scalp EEG by wavelet ICA
CN106407733A (en) Depression risk screening system and method based on virtual reality scene electroencephalogram signal
EP1788937A2 (en) Method for adaptive complex wavelet based filtering of eeg signals
Groen et al. The time course of natural scene perception with reduced attention
CN109602417A (en) Sleep stage method and system based on random forest
CN112450949A (en) Electroencephalogram signal processing method and system for cognitive rehabilitation training
WO2024083059A1 (en) Working memory task magnetoencephalography classification system based on machine learning
Zou et al. Automatic EEG artifact removal based on ICA and Hierarchical Clustering
CN113576498B (en) Visual and auditory aesthetic evaluation method and system based on electroencephalogram signals
Moser et al. Classification and detection of single evoked brain potentials using time-frequency amplitude features
CN111671421B (en) Electroencephalogram-based children demand sensing method
Feige Oscillatory brain activity and its analysis on the basis of MEG and EEG
Perera et al. EEG signal analysis of real-word reading and nonsense-word reading between adults with dyslexia and without dyslexia
CN115981458A (en) Visual stimulation method, brain-computer training method and brain-computer training system
CN113255786B (en) Video quality evaluation method based on electroencephalogram signals and target salient characteristics
CN113143291B (en) Electroencephalogram feature extraction method under rapid sequence visual presentation
CN110068466A (en) Vehicle sound quality evaluation method based on brain wave
Lei et al. Common spatial pattern ensemble classifier and its application in brain-computer interface
Celka et al. Noise reduction in rhythmic and multitrial biosignals with applications to event-related potentials
CN112137614A (en) Electroencephalogram-based motion feature identification method related to three-dimensional video comfort level
CN112215057A (en) Electroencephalogram signal classification method based on three-dimensional depth motion
CN115715677B (en) Emotion recognition model training method, training device, equipment and storage medium
CN110613446A (en) Signal processing method and device
Ince et al. ECoG based brain computer interface with subset selection
Yun et al. A Study on Training Data Selection Method for EEG Emotion Analysis Using Artificial Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210309)