WO2018094720A1 - Brain-computer interface system for controlling manipulator motion based on clinical EEG signals and application thereof - Google Patents

Brain-computer interface system for controlling manipulator motion based on clinical EEG signals and application thereof Download PDF

Info

Publication number
WO2018094720A1
Authority
WO
WIPO (PCT)
Prior art keywords
clinical
eeg
signal
module
brain
Prior art date
Application number
PCT/CN2016/107436
Other languages
English (en)
French (fr)
Inventor
张韶岷
李悦
王东
蔡邦宇
朱君明
张建民
郑筱祥
吴朝晖
潘纲
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学
Publication of WO2018094720A1 publication Critical patent/WO2018094720A1/zh

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F2/00Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F2/50Prostheses not implantable in the body
    • A61F2/68Operating or control means
    • A61F2/70Operating or control means electrical
    • A61F2/72Bioelectric control, e.g. myoelectric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection

Definitions

  • the invention belongs to the technical field of brain-computer interface, and particularly relates to a brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals.
  • the brain-computer interface is a new type of technology that uses the computing system to analyze brain activity signals and convert them into control commands, allowing users to directly control the effectors (muscles, mice, keyboards, etc.) in real time.
  • the clinical application of this technology can greatly help paralyzed patients or people with physical disabilities to rebuild motor function. According to statistics from the China Disabled Persons' Federation, as of 2010 there were 24.72 million people with physical disabilities in China, most of whom had upper-limb dysfunction or finger amputation or loss. Therefore, applying brain-computer interface technology in the clinic will greatly improve the quality of life of disabled people.
  • the brain-computer interface can be divided into an implanted brain-computer interface and a non-implanted brain-computer interface according to the degree of penetration of the electrode to the brain when collecting the EEG signal.
  • the non-implanted brain-computer interface uses scalp electrodes or external sensors to observe brain neural activity, without the risk of surgical craniotomy, but its temporal and spatial resolution is low, it requires large training samples, and its robustness under changing environmental conditions is poor; so far it cannot be used for complex brain-computer interface control of the hand.
  • the implantable brain-computer interface uses multi-channel electrodes to collect intracranial neuronal signals, has high spatial and temporal resolution, is not susceptible to noise interference, and can provide more accurate EEG information, but because its degree of invasion is the greatest, the surgical and prognostic risks are large.
  • the collecting electrodes are needle-array electrodes, which after long-term implantation are susceptible to biocompatibility problems, rejection reactions, and electrode detachment, and the signal attenuates easily, which is not conducive to long-term clinical application. How to balance EEG signal quality against invasiveness has always been a difficulty in moving the brain-computer interface from experimental non-human animal research to clinical translation.
  • cortical EEG signals have attracted wide attention because they are collected by patch electrodes placed under the dura without invading the cerebral cortex, and have the advantages of high temporal and spatial resolution and long-term stability.
  • the cortical EEG signal has long been used for the localization of refractory epilepsy lesions, with mature related electrode implantation techniques and postoperative intervention techniques, but there are few related applications in the field of brain-computer interface.
  • the purpose of the invention is to use medical clinical cortical EEG signals as the brain-computer interface signal source and to provide a brain-computer interface system for the reconstruction of grasping motor function, making it convenient for clinically disabled patients to control simple grasping behavior of an external prosthesis through EEG.
  • the present invention provides a brain-computer interface system for controlling manipulator motion based on a clinical cortical EEG signal, comprising a signal acquisition module, an EEG feature extraction and decoding module, a robot control module, and a peripheral module.
  • the signal acquisition module preprocesses the collected clinical EEG signals and inputs them to the EEG feature extraction and decoding module; the EEG feature extraction and decoding module extracts the features of the preprocessed EEG signals; the robot control module classifies the features of the preprocessed EEG signals and sends the class label to the robot to complete the gesture movement; the peripheral module supervises and provides feedback on the tasks performed by the robot.
  • the signal acquisition module is configured to process clinical EEG signals and acquire motion task start time and motion gesture categories.
  • the preprocessing of the clinical EEG signal by the signal acquisition module includes:
  • the clinical EEG signal is shunted by a splitter, and the clinical EEG signal is divided into two paths, one is input into the hospital recording system, and the other is input to the neural signal acquisition device;
  • in order not to affect the hospital recording system, the brain-computer interface system of the present invention should record independently of the hospital system during use, so the EEG signal needs to be split.
  • the specific process of splitting is as follows: the medical clinical EEG signal enters the splitter through the clinical medical electrodes, and the splitter duplicates the incoming signal into two signals completely identical to it, one of which enters the hospital recording system and the other the neural signal acquisition device.
  • the signal amplification of the clinical EEG signal is performed by a neural signal acquisition device, and band pass filtering is performed;
  • the neural signal acquisition device has an internal amplifier to amplify the clinical EEG signal; the bandpass filtering uses hardware filtering with a bandpass range of 0.3-500 Hz and a 50 Hz notch, and the raw signal of each channel is visually inspected on the display of the neural signal acquisition device to remove channels that are heavily disturbed by noise.
  • the filtered clinical EEG signal is stored at the PC control end at a sampling rate of 1 KHz.
  • the EEG feature extraction and decoding module is built into the PC control end and is used to extract the features of the filtered EEG signal in the specific frequency domain and to decode the motion gesture in real time: the filtered clinical EEG signal is processed with the multi-taper (multi-window) spectral method to estimate the power spectral density over time and frequency, and then normalized to obtain the time-frequency features of the clinical EEG signals on each channel. Next, according to the time-frequency characteristics of each channel, the channels related to motor function, the activation time of the clinical EEG signal, and the frequency bands are selected. Finally, using the selected channel feature quantities, a Support Vector Machine (SVM) classifier for multi-class classification is trained for the classification of multiple gestures.
  • the robot control module is built in the PC control end, and sends a command to the robot through the serial port end of the PC, and is used to control the robot to execute the corresponding motion gesture according to the instruction.
  • the peripheral module includes a voice module, a display module, a data glove, and a camera module.
  • the display module is used to prompt the user to perform a motion gesture;
  • the voice module is used to prompt the user to start the task and real-time feedback of the gesture execution;
  • the data gloves are worn on the user's hands and are used for real-time recording of the user's hand movement;
  • the camera module is used for recording and indirect observation of the user's hand movement.
  • the prosthetic movement using the brain-computer interface system is divided into two stages, namely the offline training stage and the online prediction stage.
  • the offline training stage is used to construct the optimal prediction model, including the selection of feature parameters and the optimization of classifier parameters.
  • the online prediction stage is used to analyze the user's EEG signals online in real time with the built brain-computer interface system, make a gesture category prediction, and then control the external robot to make the corresponding gesture.
  • the steps in the offline training phase are:
  • the EEG acquisition module collects the clinical EEG signals, and preprocesses the clinical EEG signals to obtain the EEG signals in the specific frequency domain after filtering;
  • the EEG feature extraction and decoding module extracts the characteristics of the clinical EEG signal in the specific frequency domain after filtering, obtains the channel feature quantity, and acquires the corresponding gesture category through the PC end;
  • the specific steps of step (1) are:
  • the specific steps of step (2) are:
  • in step (2-1), when extracting frequency-domain features, a sliding window with a length of 300 ms is moved in steps of 100 ms each time, and the energy in the frequency domain of each intercepted segment of the filtered EEG signal in the specific frequency domain is calculated by the multi-taper spectral method.
  • in step (2-2), the steps to normalize the power spectral density are: first, the resting-state EEG signals of the 10 time windows (1 s) preceding the visual cue in the current grasping task are used to compute the baseline mean and standard deviation of the power spectral density, S_baseline_ave = mean(S_1(t), S_2(t), …, S_10(t)) and S_baseline_std = std(S_1(t), S_2(t), …, S_10(t)), where S_1(t), S_2(t), …, S_10(t) are the EEG signals of the 10 time windows before the visual cue, mean(·) is the mean function, std(·) is the standard deviation function, S_baseline_ave is the mean of the power spectral density in the resting state of the grasping task, and S_baseline_std is the standard deviation of the power spectral density in the resting state of the grasping task.
  • then, with S_i(t) the power spectral density value of each time window after the start of the movement, the power spectral density in each time window is normalized in the frequency domain as (S_i(t) − S_baseline_ave) / S_baseline_std.
  • to reduce the dimensionality of the computation, the low-frequency and high-frequency EEG signals can be averaged over the power spectral density at a 5 Hz frequency resolution, and then normalized by subtracting the mean of the baseline EEG signal and dividing by its standard deviation.
  • the selected movement-related channels are those whose activity increases with movement at low frequencies (0.3-15 Hz) and high frequencies (70-135 Hz) and decreases with movement at intermediate frequencies (15-35 Hz).
  • the 10 windows after the cue are used as the activation time of the clinical EEG signal.
  • the channel feature quantity is a 1*n vector, where n is the product of the number of channels, the frequency-domain dimension at a 5 Hz frequency resolution, and the activation time of the clinical EEG signal.
  • in step (3), the channel feature quantities and the corresponding gesture categories are input to the SVM decoder, the optimal SVM model is obtained by training with the cross-validation method, and the resulting prediction model serves as the decoding model of the online prediction stage.
  • the libsvm toolkit is used to implement multi-gesture classification.
  • the steps in the online prediction phase are:
  • the EEG acquisition module collects the clinical EEG signals, splits them with the splitter, and then amplifies and band-pass filters them through the neural signal acquisition device to obtain the filtered EEG signal in the specific frequency domain.
  • after receiving the task prompt from the PC control end, the EEG feature extraction and decoding module obtains the clinical EEG signal from the buffer of the neural signal acquisition device, calculates the power spectral density over the frequency bands of the movement-related channels, and normalizes it;
  • the robot control module classifies the normalized features by using the trained prediction model, sends the class label to the robot through the PC serial port, and completes the gesture movement.
  • the peripheral module supervises and feeds back the tasks performed by the robot.
  • in step (b), the clinical EEG signal is acquired from the neural signal acquisition device every 100 ms, and the power spectrum is calculated from the preceding 200 ms of data.
  • in step (c), the features of the clinical EEG signal within the 600 ms activation time after the task prompt are used for gesture category recognition, and a command is sent to the external prosthesis through the PC serial port to control the prosthetic movement.
  • All task-related instructions for performing the prosthetic motion process using the brain-computer interface system are controlled by the main program written in C language on the PC side, and the main program simultaneously synchronizes external event time information and clinical cortical EEG signals.
  • the main program on the PC first prompts for configuration of the relevant parameters, then presents the specified gesture type on the display, and delivers the task instructions and task-completion feedback through the speaker.
  • the start time of the motion task is the system time when the PC console sends the task prompt minus the start time of the EEG signal record.
  • the invention uses the clinical EEG signal as the signal source of the brain-computer interface system to realize precise, synchronous, online control of hand movement, which will greatly facilitate the clinical translation of motor brain-computer interfaces, especially those for hand movement, thereby helping people with hand disabilities recover the grasping motor function.
  • the entire system is independent of the clinical system and does not affect the recordings of the clinical system.
  • the system design is simple, the task setting is simple and easy to understand, and does not impose an additional burden on the user's understanding and execution.
  • the system also considers portability and builds with as few devices as possible to facilitate clinical access and withdrawal at any time.
  • FIG. 1 is a schematic diagram of a brain-computer interface system of the present invention
  • FIG. 2 is a flow chart of an offline training phase of an application method of a brain-computer interface system according to the present invention
  • FIG. 3 is a flow chart of an online prediction stage of an application method of a brain-computer interface system according to the present invention.
  • FIG. 4 is a diagram of a PC control terminal interface of the present invention.
  • before using the brain-computer interface system, the user and the system require preparation, including: the user needs to undergo clinical medical cortical electroencephalography electrode implantation surgery and become familiar with the gesture motion control task.
  • the user needs to complete the task in a comfortable posture, with the line of sight level with the display screen, keeping all moving body parts other than the hand as still as possible.
  • the brain-computer interface system of the present invention for controlling robot movement with clinical cortical EEG includes: a PC-side control system, a hospital recording system, a splitter, a neural signal acquisition device, a display, an industrial camera, a robot, data gloves, and a speaker, in which the neural signal acquisition device is connected to the PC through a network cable, the industrial camera is connected to the PC through USB, the data gloves are connected to the PC through USB, and the PC control system controls the entire experimental process.
  • the filter parameter of the neural signal acquisition device is set to 0.3-500 Hz through the PC-side control system, the sampling rate is 1 KHz, and the path for the signal storage by the PC end is set at the same time.
  • the neural signal acquisition device, the display, the speaker, the industrial camera and the data glove are simultaneously turned on for testing.
  • the neural signal collecting device collects, preprocesses, and records the clinical EEG signal; the display synchronously shows the prompt pictures of the gesture categories; the industrial camera and data gloves synchronously record the user's hand movement, which is convenient for remotely recording and observing the user's hand movement state.
  • when the camera starts recording, a TTL high level is also sent to the neural signal acquisition device through the analog port for synchronization of the neural signals.
  • the speaker feeds back the correctness of the executed gesture to the user.
  • the motion task is based on a single grasp and the training is repeated until the end of the training sample collection.
  • the clinical EEG signals for gesture prediction are loaded and gesture prediction analysis is performed.
  • the predicted gesture category is converted into the corresponding gesture setting of the robot and sent to the robot through the serial port.
  • the robot prepares in real time in a static state during the entire task, and immediately performs gesture switching upon receiving the instruction sent by the serial port.
  • the brain-computer interface system is used to control the prosthesis to perform simple exercise, which is divided into two phases, the first phase is the offline training phase, and the second phase is the online prediction phase.
  • the offline training phase is specifically as follows:
  • Step 1 The clinical EEG signal is shunted by the splitter, and the clinical EEG signal is divided into two paths, one is input into the hospital recording system, and the other is input into the neural signal collecting instrument.
  • the built brain-computer interface system should be independent of the hospital recording system during use, which requires splitting the EEG signal.
  • the specific process of splitting is as follows: the medical clinical EEG signal enters the splitter through the clinical medical electrodes, and the splitter duplicates the incoming signal into two signals completely identical to it, one of which enters the hospital recording system and the other the neural signal processing system.
  • step 2 the input clinical electroencephalogram signal is amplified by a neural signal acquisition device, and bandpass filtering is performed to obtain an EEG signal in a specific frequency domain after filtering.
  • the bandpass filter uses hardware filtering, the bandpass range is 0.3-500Hz, and the working notch is 50Hz. Observe the original signal of each channel with the naked eye to remove the channel that is more disturbed by noise.
  • Step 3 Estimating the power spectral density at the time-frequency of the filtered clinical EEG signal using the multi-window spectrum.
  • a sliding window with a length of 300 ms is used to move in steps of 100 ms, and the filtered EEG signal in the specific frequency domain is calculated by the multi-window spectrum estimation algorithm to calculate its energy in the frequency domain.
  • step 4 the power spectral density is normalized to obtain the time-frequency characteristics of the clinical EEG signals on each channel.
  • the resting-state EEG signals of the 10 windows preceding the visual cue in the current grasping task are used to obtain the mean and standard deviation of the power spectral density in the resting state of the current grasping task, using the formulas S_baseline_ave = mean(S_1(t), S_2(t), …, S_10(t)) and S_baseline_std = std(S_1(t), S_2(t), …, S_10(t)), where S_1(t), S_2(t), …, S_10(t) are the EEG signals of the 10 time windows before the visual cue, mean(·) is the mean function, std(·) is the standard deviation function, S_baseline_ave is the mean of the power spectral density in the resting state of the grasping task, and S_baseline_std is the standard deviation of the power spectral density in the resting state of the grasping task.
  • with S_i(t) the power spectral density value of each time window after the start of the movement, the power spectral density in each time window is normalized in the frequency domain as (S_i(t) − S_baseline_ave) / S_baseline_std.
  • Step 5 According to the time-frequency characteristics of each channel, select the channel related to the motion function, the activation time of the clinical EEG signal, and the frequency band to obtain the channel feature quantity.
  • the selected movement-related channels are channels whose activity increases with movement at low frequencies (0.3-15 Hz) and high frequencies (70-135 Hz) and decreases with movement at intermediate frequencies (15-35 Hz).
  • the 10 windows after the prompt are used as the activation time of the clinical EEG signal.
  • the channel feature quantity is a 1*n vector, where n is the product of the number of channels, the frequency domain dimension with 5 Hz frequency resolution, and the activation time of clinical EEG signals.
  • step 6 the corresponding gesture category is obtained through the PC.
  • step 7: the channel feature quantities and the corresponding gesture categories are input into the SVM classifier, the optimal SVM model is obtained by training with the cross-validation method, and the resulting prediction model is used as the decoding model of the online prediction stage.
  • the online prediction phase is specifically as follows:
  • Step 1 using a splitter to split the clinical EEG signal, and amplifying and bandpass filtering the clinical EEG signal through a neural signal acquisition device to obtain an EEG signal in a specific frequency domain after filtering;
  • Step 2 After receiving the task-start prompt from the PC control end, the EEG feature extraction and decoding module acquires the clinical EEG signal from the buffer of the neural signal acquisition device, calculates the power spectral density over the frequency bands of the movement-related channels, and normalizes it;
  • Step 3 The robot control module classifies the normalized features using the trained SVM classifier and sends the class label to the robot through the PC serial port to complete the gesture movement, while the peripheral module supervises and provides feedback on the tasks performed by the robot.
  • Figure 4 is the main program interface diagram of the PC.
  • the interface is written in C. After the system paths are connected according to Figure 1, the neural signal acquisition device is opened first, the neural signal acquisition device (Neuroport) connection and the port settings are made in turn, then the channels and frequency bands of the decoding model are selected according to the time-frequency characteristics of the offline neural signals, and finally the serial port connected to the robot and the templates of the three gesture movements of the data gloves are set.
  • the gesture template of the data glove can be obtained by repeatedly performing gesture training while the user wears the data glove.
  • during a 3-second preparation phase the user is required to keep the palm up and the hand relaxed, and at the beginning a voice prompt asks the user to be prepared, that is, to maintain concentration.
  • a gesture photo then appears on the display at random with equal probability.
  • the user needs to imagine and execute the gesture immediately after the visual prompt.
  • the hand should hold the gesture until the prompt for the end of the task appears.
  • the screen prompts the gesture to relax and wait for the start of the next task.
  • the system is configured so that the robot can perform the three gestures "rock", "scissors", and "paper", and the user performs the gesture movement following the instructions shown on the display placed in front of the bed.
  • the user's hands and arms remain stationary until the exercise begins.
  • a single gesture gripping motion begins with an audible prompt "prepare.”
  • a red plus sign will appear on the screen, prompting the user to pay attention to the plus sign and keep the palms up and relaxed.
  • the following period is a resting state that lasts a random 2-2.5 seconds. After the resting state ends, the red plus sign is replaced by a gesture picture, which randomly shows any one of the three gestures with equal probability.
  • the user then makes the corresponding gesture and holds it until a red dot is finally displayed, prompting that the hand may relax.
  • the entire gesture phase lasts 2-3.5 seconds.
  • the user can then relax the hand and switch to a resting state, and a voice prompt gives the user feedback on the correctness of this trial.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurosurgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Cardiology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Transplantation (AREA)
  • Neurology (AREA)
  • Vascular Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • User Interface Of Digital Computer (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

A brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals and an application method thereof. The system comprises a signal acquisition module, an EEG feature extraction and decoding module, a manipulator control module, and a peripheral module. The signal acquisition module preprocesses the acquired clinical EEG signals and feeds them into the EEG feature extraction and decoding module; the EEG feature extraction and decoding module extracts features of the preprocessed EEG signals; the manipulator control module classifies the features of the preprocessed EEG signals and sends the class label to the manipulator to complete the gesture movement; and the peripheral module supervises and provides feedback on the tasks performed by the manipulator. Using clinical cortical EEG signals, which have high spatiotemporal resolution and a low degree of invasiveness, the system and method can achieve high-precision online control of manipulator gestures.

Description

Brain-computer interface system for controlling manipulator motion based on clinical EEG signals and application thereof
Technical Field
The invention belongs to the technical field of brain-computer interfaces, and in particular relates to a brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals.
Background Art
A brain-computer interface is a new type of technology in which a computing system alone parses brain activity signals and converts them into control commands, allowing a user to control effectors (muscles, a mouse, a keyboard, etc.) directly and in real time. Clinical application of this technology can greatly help paralyzed patients or people with limb disabilities rebuild motor function. According to statistics from the China Disabled Persons' Federation, as of 2010 there were 24.72 million people with physical disabilities in China, most of whom had upper-limb dysfunction or finger amputation or loss. Applying brain-computer interface technology in the clinic will therefore greatly improve the quality of life of disabled people.
At present, brain-computer interfaces can be divided into implanted and non-implanted brain-computer interfaces according to the degree to which the electrodes invade the brain when acquiring EEG signals. Non-implanted brain-computer interfaces use scalp electrodes or external sensors to observe brain neural activity and carry no risk of craniotomy, but their spatiotemporal resolution is low, they require large training samples, and their robustness under changing environmental conditions is poor; so far they cannot be used for complex brain-computer interface control of the hand. Implanted brain-computer interfaces use multi-channel electrodes to record intracranial neuronal signals; they have high spatiotemporal resolution, are not easily disturbed by noise, and can provide relatively precise EEG information, but because the degree of invasion is greatest, the surgical and prognostic risks are large. Moreover, the recording electrodes are needle-array electrodes that, after long-term implantation, are susceptible to biocompatibility problems, rejection reactions, and electrode detachment, and the signal attenuates easily, which is unfavorable for long-term clinical use. How to balance EEG signal quality against invasiveness has always been a difficulty in translating brain-computer interfaces from experimental non-human animal research to the clinic.
In recent years, cortical EEG signals have attracted wide attention because they are acquired with patch electrodes placed under the dura without invading the cerebral cortex, while offering high spatiotemporal resolution and long-term stability. Clinically, cortical EEG signals have long been used to localize refractory epilepsy foci, with mature electrode implantation and postoperative intervention techniques, yet related applications in the field of brain-computer interfaces remain few.
Summary of the Invention
The purpose of the present invention is to use medical clinical cortical EEG signals as the signal source of a brain-computer interface and to provide a brain-computer interface system for reconstructing grasping motor function, helping patients with limb disabilities in the clinic control an external prosthesis through EEG to perform simple grasping behavior.
To achieve the above purpose, the present invention proposes a brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals, comprising a signal acquisition module, an EEG feature extraction and decoding module, a manipulator control module, and a peripheral module. The signal acquisition module preprocesses the acquired clinical EEG signals and feeds them into the EEG feature extraction and decoding module; the EEG feature extraction and decoding module extracts features of the preprocessed EEG signals; the manipulator control module classifies the features of the preprocessed EEG signals and sends the class label to the manipulator to complete the gesture movement; and the peripheral module supervises and provides feedback on the tasks performed by the manipulator.
The signal acquisition module is used to process the clinical EEG signals and to obtain the start time of the motion task and the motion gesture category.
The preprocessing of the clinical EEG signals by the signal acquisition module includes:
First, the clinical EEG signal is split by a splitter into two paths, one fed into the hospital recording system and the other fed into the neural signal acquisition device;
In order not to affect the records of the hospital recording system, the brain-computer interface system of the present invention should operate independently of the hospital recording system during use, so the EEG signal needs to be split. The specific splitting process is as follows: the medical clinical EEG signal enters the splitter through the clinical medical electrodes, and the splitter duplicates the incoming signal into two signals identical to it, one of which enters the hospital recording system and the other the neural signal acquisition device;
Then, the clinical EEG signal is amplified and band-pass filtered by the neural signal acquisition device;
The neural signal acquisition device contains an amplifier that amplifies the clinical EEG signal; the band-pass filtering is hardware filtering with a pass band of 0.3-500 Hz and a 50 Hz notch, and the raw signal of each channel is visually inspected on the display of the neural signal acquisition device to remove channels heavily contaminated by noise;
Finally, the filtered clinical EEG signal is stored on the PC control end at a sampling rate of 1 kHz.
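As an editorial illustration only (not part of the original disclosure), a software equivalent of the pre-processing chain described above can be sketched in Python; the patent performs this filtering in hardware inside the neural signal acquisition device, and everything here other than the stated parameters (0.3-500 Hz band pass, 50 Hz notch, 1 kHz sampling) is an assumption:

import numpy as np
from scipy import signal

FS = 1000.0  # sampling rate in Hz (signals are stored at 1 kHz on the PC control end)

def preprocess(ecog, fs=FS):
    """ecog: array of shape (n_channels, n_samples) holding the raw clinical EEG."""
    # 0.3-500 Hz band pass; at a 1 kHz sampling rate 500 Hz is the Nyquist limit,
    # so in software only the 0.3 Hz high-pass edge has an effect.
    b_hp, a_hp = signal.butter(4, 0.3 / (fs / 2), btype="highpass")
    x = signal.filtfilt(b_hp, a_hp, ecog, axis=1)
    # 50 Hz notch to suppress mains interference.
    b_n, a_n = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
    x = signal.filtfilt(b_n, a_n, x, axis=1)
    # In the patent, channels heavily contaminated by noise are removed by visual
    # inspection; a drop list would be applied to x here.
    return x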
The EEG feature extraction and decoding module is built into the PC control end and is used to extract features of the filtered EEG signal in the specific frequency domain and to decode the motion gesture in real time. The filtered clinical EEG signal is processed with the multi-taper (multi-window) spectral method to estimate the power spectral density over time and frequency, and then normalized, yielding the time-frequency features of the clinical EEG signal on each channel. Next, according to the time-frequency characteristics of each channel, the channels related to motor function, the activation time of the clinical EEG signal, and the frequency bands are selected. Finally, the selected channel feature quantities are used to train a multi-class Support Vector Machine (SVM) classifier for classifying multiple gestures.
The manipulator control module is built into the PC control end and sends commands to the manipulator through the PC serial port to control the manipulator to execute the corresponding motion gesture according to the command.
The peripheral module includes a voice module, a display module, data gloves, and a camera module. The display module is used to prompt the user with the motion gesture to be performed; the voice module is used to prompt the user that the task has started and to give real-time feedback on gesture execution; the data gloves are worn on the user's hands to record the user's hand movement in real time; and the camera module is used for recording and indirect observation of the user's hand movement.
Prosthesis movement using the brain-computer interface system is divided into two phases, an offline training phase and an online prediction phase. The offline training phase is used to build the optimal prediction model, including the selection of feature parameters and the optimization of classifier parameters; the online prediction phase uses the built brain-computer interface system to analyze the user's EEG signals online in real time, predict the gesture category, and then control the external manipulator to make the corresponding gesture.
The steps of the offline training phase are:
(1) The EEG acquisition module acquires the clinical EEG signals and preprocesses them to obtain the filtered EEG signals in the specific frequency domain;
(2) The EEG feature extraction and decoding module extracts features of the filtered clinical EEG signals in the specific frequency domain to obtain the channel feature quantities, and the corresponding gesture category is obtained through the PC end;
(3) The channel feature quantities and the corresponding gesture categories are input into the SVM classifier for training to obtain the prediction model.
The specific steps of step (1) are:
(1-1) The clinical EEG signal is split by the splitter into two paths, one fed into the hospital recording system and the other into the neural signal acquisition device;
(1-2) The input clinical EEG signal is amplified and band-pass filtered by the neural signal acquisition device to obtain the filtered EEG signal in the specific frequency domain.
The specific steps of step (2) are:
(2-1) The filtered clinical EEG signal is processed with the multi-taper spectral method to estimate its power spectral density over time and frequency;
(2-2) The power spectral density is normalized to obtain the time-frequency features of the clinical EEG signal on each channel;
(2-3) According to the time-frequency characteristics of each channel, the channels related to motor function, the activation time of the clinical EEG signal, and the frequency bands are selected to obtain the channel feature quantities.
In step (2-1), when extracting frequency-domain features, a sliding window of 300 ms length is moved in steps of 100 ms, and the energy in the frequency domain of each intercepted segment of the filtered EEG signal in the specific frequency domain is computed with the multi-taper spectral method.
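A minimal sketch of the multi-taper (multi-window) spectral estimate over 300 ms windows stepped by 100 ms is given below for illustration; the use of SciPy DPSS tapers, the taper count, and the time-bandwidth product are assumptions, since the patent does not specify its multi-taper implementation:

import numpy as np
from scipy.signal import windows

def multitaper_psd(segment, fs=1000.0, nw=3, n_tapers=5):
    """segment: one 300 ms window (300 samples at 1 kHz) from a single channel."""
    n = len(segment)
    tapers = windows.dpss(n, NW=nw, Kmax=n_tapers)            # (n_tapers, n)
    spectra = np.abs(np.fft.rfft(tapers * segment, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs                           # average over tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

def sliding_psd(channel, fs=1000.0, win_ms=300, step_ms=100):
    """Returns an array of shape (n_windows, n_freqs) for one channel."""
    win, step = int(win_ms * fs / 1000), int(step_ms * fs / 1000)
    starts = range(0, len(channel) - win + 1, step)
    return np.array([multitaper_psd(channel[s:s + win], fs)[1] for s in starts])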
In step (2-2), the steps for normalizing the power spectral density are:
(2-2-1) The resting-state EEG signals of the 1 second, i.e. 10 windows, preceding the visual cue in the current grasping task are used to compute the mean and standard deviation of the power spectral density in the resting state of the current grasping task, with the formulas:
S_baseline_ave = mean(S_1(t), S_2(t), …, S_10(t))
S_baseline_std = std(S_1(t), S_2(t), …, S_10(t))
where S_1(t), S_2(t), …, S_10(t) are the EEG signals of the 10 time windows preceding the visual cue, mean(·) is the mean function, std(·) is the standard deviation function, S_baseline_ave is the mean power spectral density in the resting state of the grasping task, and S_baseline_std is the standard deviation of the power spectral density in the resting state of the grasping task;
(2-2-2) The power spectral density of the EEG signal after movement onset is normalized with the formula:
S_i'(t) = (S_i(t) − S_baseline_ave) / S_baseline_std
where S_i(t) is the power spectral density value of each time window after movement onset; through the above formula, the power spectral density in each time window is normalized in the frequency domain.
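The baseline normalization above amounts to a per-frequency z-score against the 10 pre-cue windows; a short illustrative sketch (the array layout is an assumption) is:

import numpy as np

def normalize_psd(psd_windows, cue_index):
    """psd_windows: (n_windows, n_freqs) power spectral density of one channel in
    one trial; cue_index: index of the window in which the visual cue appears."""
    baseline = psd_windows[cue_index - 10:cue_index]      # 10 windows (1 s) before the cue
    s_ave = baseline.mean(axis=0)                         # S_baseline_ave
    s_std = baseline.std(axis=0)                          # S_baseline_std
    return (psd_windows[cue_index:] - s_ave) / s_std      # normalized post-onset PSD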
To reduce the dimensionality of the computation, the power spectral densities of the low-frequency and high-frequency EEG signals can be averaged at a frequency resolution of 5 Hz, with the baseline EEG mean then subtracted and the result divided by the baseline EEG standard deviation for normalization.
In step (2-3), the selected movement-related channels are those whose activity increases with movement at low frequencies (0.3-15 Hz) and high frequencies (70-135 Hz) and decreases with movement at intermediate frequencies (15-35 Hz). The 10 windows after the cue are taken as the activation time of the clinical EEG signal.
In step (2-3), the channel feature quantity is a 1*n vector, where n is the product of the number of channels, the frequency-domain dimension at a 5 Hz frequency resolution, and the activation time of the clinical EEG signal.
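For illustration, the 5 Hz band averaging and the assembly of the 1*n channel feature vector could look as follows; the band edges, the channel selection, and the helper names are assumptions consistent with the description above, not code from the patent:

import numpy as np

def band_average(norm_psd, freqs, band_width=5.0, fmax=135.0):
    """norm_psd: (n_windows, n_freqs) normalized PSD; returns (n_windows, n_bands)."""
    edges = np.arange(0.0, fmax + band_width, band_width)
    return np.stack([norm_psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                     for lo, hi in zip(edges[:-1], edges[1:])], axis=1)

def trial_feature_vector(selected_channels, freqs, n_windows=10):
    """selected_channels: list of (n_windows, n_freqs) normalized PSD arrays, one per
    movement-related channel; returns a 1*n vector with
    n = channels x 5 Hz frequency bins x activation windows."""
    feats = [band_average(p[:n_windows], freqs) for p in selected_channels]
    return np.concatenate([f.ravel() for f in feats])[None, :]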
In step (3), the channel feature quantities and the corresponding gesture categories are input into the SVM decoder, the optimal SVM model is obtained by training with cross-validation, and the resulting prediction model is used as the decoding model for the online prediction stage. In the MATLAB interface, the libsvm toolkit is used to implement multi-gesture classification.
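The patent trains the decoder with the libsvm toolkit in the MATLAB interface; purely as an illustrative stand-in, the same workflow in Python with scikit-learn (whose SVC class wraps libsvm) and cross-validated parameter selection might look as follows, where the kernel choice and parameter grid are assumptions the patent does not state:

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_decoder(X, y):
    """X: (n_trials, n_features) channel feature vectors; y: gesture labels,
    e.g. 0 = rock, 1 = scissors, 2 = paper."""
    grid = GridSearchCV(
        SVC(kernel="rbf", decision_function_shape="ovo"),
        param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
        cv=5,                                   # cross-validation for parameter selection
    )
    grid.fit(X, y)
    return grid.best_estimator_                 # decoding model for the online stage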
The steps of the online prediction phase are:
(a) The EEG acquisition module acquires the clinical EEG signals, splits them with the splitter, and then amplifies and band-pass filters them with the neural signal acquisition device to obtain the filtered EEG signals in the specific frequency domain;
(b) After receiving the task-start prompt from the PC control end, the EEG feature extraction and decoding module obtains the clinical EEG signal from the buffer of the neural signal acquisition device, computes the power spectral density over the frequency bands of the movement-related channels, and normalizes it;
(c) The manipulator control module classifies the normalized features with the trained prediction model and sends the class label to the manipulator through the PC serial port to complete the gesture movement; at the same time, the peripheral module supervises and provides feedback on the tasks performed by the manipulator.
In step (b), the clinical EEG signal is obtained from the neural signal acquisition device every 100 ms, and the power spectrum is computed from the preceding 200 ms of data.
In step (c), the features of the clinical EEG signal within the 600 ms activation time after the task prompt are used for gesture category recognition, and a command is sent to the external prosthesis through the PC serial port to control the prosthesis movement.
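A hedged sketch of this online loop (reading the acquisition buffer every 100 ms, classifying the 600 ms activation window, and writing the class label to the serial port) is shown below; read_buffer, extract_features, the port name, and the byte protocol of the prosthesis are placeholders, not an interface documented in the patent:

import time
import numpy as np
import serial                                   # pyserial

GESTURES = {0: b"ROCK\n", 1: b"SCISSORS\n", 2: b"PAPER\n"}   # hypothetical command bytes

def online_loop(decoder, read_buffer, extract_features, port="COM3"):
    link = serial.Serial(port, baudrate=115200, timeout=0.1)
    windows = []
    while True:
        chunk = read_buffer(ms=200)             # preceding 200 ms of signal from the buffer
        windows.append(extract_features(chunk)) # normalized band features of one window
        if len(windows) * 100 >= 600:           # 600 ms activation window collected
            x = np.concatenate(windows)[None, :]
            label = int(decoder.predict(x)[0])
            link.write(GESTURES[label])         # command the manipulator / prosthesis
            windows.clear()
        time.sleep(0.1)                         # poll every 100 ms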
All task-related commands during prosthesis movement with the brain-computer interface system are controlled by a main program written in C on the PC side; the main program also synchronizes external event timing information with the clinical cortical EEG signal. In one gesture-control experiment, the PC main program first prompts for configuration of the relevant parameters, then presents the specified gesture type on the display, and issues task instructions and task-completion feedback through the speaker.
The start time of the motion task is the system time at which the PC control end sends the task prompt minus the start time of the EEG signal recording.
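In other words, the task onset is simply the difference of two timestamps converted to a sample index; with hypothetical numbers:

fs = 1000                                  # Hz, sampling rate of the stored EEG
prompt_time_s = 125.30                     # system time when the PC sends the task prompt (hypothetical)
record_start_s = 3.10                      # system time when EEG recording started (hypothetical)
onset_sample = int((prompt_time_s - record_start_s) * fs)   # index of task onset in the recording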
The present invention uses clinical EEG signals as the signal source of a brain-computer interface system to achieve precise, synchronous, online control of hand movement, which will greatly facilitate the clinical translation of motor brain-computer interfaces, especially those for hand movement, thereby helping people with hand disabilities recover grasping motor function. The entire system is independent of the clinical system and does not affect the recordings of the clinical system. The system design is concise, the task setup is simple and easy to understand, and it imposes no additional burden on the user's comprehension or execution. The system also takes portability into account and is built with as few devices as possible, so that it can be connected to and withdrawn from the clinic at any time.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the brain-computer interface system of the present invention;
FIG. 2 is a flow chart of the offline training phase of the application method of the brain-computer interface system of the present invention;
FIG. 3 is a flow chart of the online prediction phase of the application method of the brain-computer interface system of the present invention;
FIG. 4 is a diagram of the PC control end interface of the present invention.
Detailed Description of the Embodiments
To describe the present invention more specifically, the technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Before using the brain-computer interface system of the present invention, the user and the system require preparation: the user must undergo implantation surgery for clinical medical cortical EEG electrodes and become familiar with the gesture motion control task. The user should complete the task in a comfortable posture, with the line of sight level with the display screen, keeping all moving body parts other than the hand as still as possible.
As shown in FIG. 1, the brain-computer interface system of the present invention for controlling manipulator motion with clinical cortical EEG includes: a PC-side control system, a hospital recording system, a splitter, a neural signal acquisition device, a display, an industrial camera, a manipulator, data gloves, and a speaker. The neural signal acquisition device is connected to the PC via a network cable, the industrial camera is connected to the PC via USB, and the data gloves are connected to the PC via USB; the PC-side control system controls the entire experimental procedure.
The testing procedure with this brain-computer interface system is as follows:
First, the filter parameters of the neural signal acquisition device are set to 0.3-500 Hz through the PC-side control system, the sampling rate is set to 1 kHz, and the path for signal storage on the PC is set at the same time. The neural signal acquisition device, the display, the speaker, the industrial camera, and the data gloves are then switched on simultaneously for the experiment. During the experiment, the neural signal acquisition device acquires, preprocesses, and records the clinical EEG signals; the display synchronously shows the prompt pictures of the gesture categories; the industrial camera and the data gloves synchronously record the user's hand movement, making it convenient to record and observe the user's hand movement state remotely; when the camera starts recording, a TTL high level is also sent to the neural signal acquisition device through the analog port for synchronization of the neural signals; in addition, the speaker feeds back the correctness of the executed gesture to the user. The motion task is based on single grasps, and training is repeated until collection of the training samples is finished. Finally, the clinical EEG signals used for gesture prediction are loaded and the gesture prediction analysis is performed. In the subsequent prediction stage, the predicted gesture category is converted into the corresponding gesture setting of the manipulator and sent to the manipulator through the serial port; the manipulator stays ready in a static state throughout the task and switches gesture immediately upon receiving the command sent over the serial port.
Using this brain-computer interface system to control the prosthesis to perform simple movements is divided into two phases: the first is the offline training phase and the second is the online prediction phase.
As shown in FIG. 2, the offline training phase is specifically as follows:
Step 1: The clinical EEG signal is split by the splitter into two paths, one fed into the hospital recording system and the other into the neural signal acquisition device.
Since the brain-computer interface system built here should operate independently of the hospital recording system during use, the EEG signal needs to be split. The specific splitting process is as follows: the medical clinical EEG signal enters the splitter through the clinical medical electrodes, and the splitter duplicates the incoming signal into two signals identical to it, one of which enters the hospital recording system and the other the neural signal processing system.
Step 2: The input clinical EEG signal is amplified and band-pass filtered by the neural signal acquisition device to obtain the filtered EEG signal in the specific frequency domain.
The band-pass filtering is hardware filtering with a pass band of 0.3-500 Hz and a working notch at 50 Hz. The raw signal of each channel is visually inspected and channels heavily contaminated by noise are removed.
Step 3: The power spectral density over time and frequency of the filtered clinical EEG signal is estimated with the multi-taper spectral method.
When extracting frequency-domain features, a sliding window of 300 ms length is moved in steps of 100 ms, and the energy in the frequency domain of each intercepted segment of the filtered EEG signal in the specific frequency domain is computed with the multi-taper spectral estimation algorithm.
Step 4: The power spectral density is normalized to obtain the time-frequency features of the clinical EEG signal on each channel.
First, the resting-state EEG signals of the 1 second, i.e. 10 windows, preceding the visual cue in the current grasping task are used to compute the mean and standard deviation of the power spectral density in the resting state of the current grasping task, with the formulas:
S_baseline_ave = mean(S_1(t), S_2(t), …, S_10(t))
S_baseline_std = std(S_1(t), S_2(t), …, S_10(t))
where S_1(t), S_2(t), …, S_10(t) are the EEG signals of the 10 time windows preceding the visual cue, mean(·) is the mean function, std(·) is the standard deviation function, S_baseline_ave is the mean power spectral density in the resting state of the grasping task, and S_baseline_std is the standard deviation of the power spectral density in the resting state of the grasping task;
Then, the power spectral density of the EEG signal after movement onset is normalized with the formula:
S_i'(t) = (S_i(t) − S_baseline_ave) / S_baseline_std
where S_i(t) is the power spectral density value of each time window after movement onset; through the above formula, the power spectral density in each time window is normalized in the frequency domain.
Step 5: According to the time-frequency characteristics of each channel, the channels related to motor function, the activation time of the clinical EEG signal, and the frequency bands are selected to obtain the channel feature quantities.
The selected movement-related channels are those whose activity increases with movement at low frequencies (0.3-15 Hz) and high frequencies (70-135 Hz) and decreases with movement at intermediate frequencies (15-35 Hz). The 10 windows after the cue are taken as the activation time of the clinical EEG signal. The channel feature quantity is a 1*n vector, where n is the product of the number of channels, the frequency-domain dimension at a 5 Hz frequency resolution, and the activation time of the clinical EEG signal.
Step 6: The corresponding gesture category is obtained through the PC.
Step 7: The channel feature quantities and the corresponding gesture categories are input into the SVM classifier, the optimal SVM model is obtained by training with cross-validation, and the resulting prediction model is used as the decoding model for the online prediction phase.
As shown in FIG. 3, the online prediction phase is specifically as follows:
Step 1: The clinical EEG signal is split with the splitter and amplified and band-pass filtered by the neural signal acquisition device to obtain the filtered EEG signal in the specific frequency domain;
Step 2: After receiving the task-start prompt from the PC control end, the EEG feature extraction and decoding module obtains the clinical EEG signal from the buffer of the neural signal acquisition device, computes the power spectral density over the frequency bands of the movement-related channels, and normalizes it;
Step 3: The manipulator control module classifies the normalized features with the trained SVM classifier and sends the class label to the manipulator through the PC serial port to complete the gesture movement; at the same time, the peripheral module supervises and provides feedback on the tasks performed by the manipulator.
FIG. 4 shows the interface of the PC main program, which is written in C. After the system paths are connected according to FIG. 1, the neural signal acquisition device is switched on first, the neural signal acquisition device (Neuroport) connection and the ports are set up in turn, then the channels and frequency bands of the decoding model are selected according to the time-frequency characteristics of the offline neural signals, and finally the serial port connected to the manipulator and the templates of the three gesture movements of the data gloves are set. The gesture templates of the data gloves are obtained by having the user repeatedly perform gesture training while wearing the data gloves.
After the above parameters have been set in turn and the user is ready to perform the task, clicking "Start experiment" starts the entire brain-computer interface system, including EEG signal acquisition, feature extraction and decoding, model training, and finally gesture prediction and manipulator control. Clicking "Stop experiment" stops the whole brain-computer interface experiment and pauses the storage of the neural signals, video signals, and so on.
The total length of each grasping task is kept within 10 seconds, including a 3-second preparation phase, a 4-second gesture execution phase, and a 3-second gesture relaxation phase. During the preparation phase the user is required to keep the palm facing up with the hand in a relaxed posture; at the start a voice prompt asks the user to get ready, that is, to stay focused. After the preparation phase, a gesture picture appears on the display at random with equal probability, and the user must imagine and execute the gesture immediately after the visual cue. The hand must hold the gesture until the prompt for the end of the task appears. After the task is completed, the screen prompts the user to relax the gesture and wait for the start of the next task.
The system is configured so that the manipulator can perform three gestures, "rock", "scissors", and "paper", and the user performs the gesture movements following the instructions shown on a display placed in front of the bed. Before the movement starts, the user's hand and arm remain still. A single gesture grasping movement starts with the audible prompt "prepare"; at the same time a red plus sign appears on the screen, prompting the user to attend to the plus sign and keep the palm up and relaxed. The following period is a resting state lasting a random 2-2.5 seconds. When the resting state ends, the red plus sign is replaced by a gesture picture, which shows any one of the three gestures at random with equal probability. The user then makes the corresponding gesture and holds it until a red dot is finally displayed, prompting that the hand may relax. The whole gesture phase lasts 2-3.5 seconds. Afterwards the user may relax the hand and return to the resting state, and a voice prompt gives the user feedback on the correctness of this trial.
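Purely as an illustration of the trial paradigm just described (timings in seconds, drawn uniformly from the stated ranges; this is not code from the patent), a single-trial schedule could be generated as:

import random

def make_trial():
    return {
        "audio_cue": "prepare",                   # voice prompt, red plus sign on screen
        "rest_s": random.uniform(2.0, 2.5),       # resting state, palm up and relaxed
        "gesture": random.choice(["rock", "scissors", "paper"]),
        "gesture_s": random.uniform(2.0, 3.5),    # hold the gesture until the red dot appears
        "relax_s": 3.0,                           # relax and wait for the next trial
    }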
The specific embodiments described above explain the technical solution and beneficial effects of the present invention in detail. It should be understood that the above is only the most preferred embodiment of the present invention and is not intended to limit the present invention; any modification, supplement, or equivalent replacement made within the scope of the principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

  1. A brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals, characterized by comprising a signal acquisition module, an EEG feature extraction and decoding module, a manipulator control module, and a peripheral module, wherein the signal acquisition module preprocesses the acquired clinical EEG signals and feeds them into the EEG feature extraction and decoding module; the EEG feature extraction and decoding module extracts features of the preprocessed EEG signals; the manipulator control module classifies the features of the preprocessed EEG signals and sends the class label to the manipulator to complete the gesture movement; and the peripheral module supervises and provides feedback on the tasks performed by the manipulator.
  2. The brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals according to claim 1, characterized in that the preprocessing of the clinical EEG signals by the signal acquisition module includes:
    first, splitting the clinical EEG signal with a splitter into two paths, one fed into the hospital recording system and the other into the neural signal acquisition device;
    then, amplifying and band-pass filtering the clinical EEG signal with the neural signal acquisition device;
    finally, storing the filtered clinical EEG signal on the PC control end.
  3. The brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals according to claim 1, characterized in that the EEG feature extraction and decoding module is built into the PC control end, and the process of extracting features of the preprocessed EEG signals includes:
    first, estimating the power spectral density over time and frequency of the filtered clinical EEG signal with a multi-taper spectral method;
    then performing normalization to obtain the time-frequency features of the clinical EEG signal on each channel;
    next, according to the time-frequency characteristics of each channel, selecting the channels related to motor function, the activation time of the clinical EEG signal, and the frequency bands.
  4. The brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals according to claim 3, characterized in that the selected movement-related channels have the property that their activity increases with movement at low frequencies in the range 0.3-15 Hz and at high frequencies in the range 70-135 Hz, and decreases with movement at intermediate frequencies in the range 15-35 Hz.
  5. The brain-computer interface system for controlling manipulator motion based on clinical cortical EEG signals according to claim 1, characterized in that the peripheral module includes a voice module, a display module, data gloves, and a camera module; the display module is used to prompt the user with the motion gesture to be performed; the voice module is used to prompt the user that the task has started and to give real-time feedback on gesture execution; the data gloves are worn on the user's hands to record the user's hand movement in real time; and the camera module is used to record the user's hand movement.
  6. An application method of the brain-computer interface system according to any one of claims 1 to 5, characterized by being divided into an offline training phase and an online prediction phase, wherein the offline training phase is used to build the optimal prediction model, and the online prediction phase uses the built brain-computer interface system to analyze the user's EEG signals online in real time, predict the gesture category, and then control the external manipulator to make the corresponding gesture.
  7. The application method according to claim 6, characterized in that the steps of the offline training phase are:
    (1) the EEG acquisition module acquires the clinical EEG signals and preprocesses them to obtain the filtered EEG signals in the specific frequency domain;
    (2) the EEG feature extraction and decoding module extracts features of the filtered clinical EEG signals in the specific frequency domain to obtain the channel feature quantities, and the corresponding gesture category is obtained through the PC end;
    (3) the channel feature quantities and the corresponding gesture categories are input into the SVM classifier for training to obtain the prediction model.
  8. The application method according to claim 7, characterized in that the specific steps of step (1) are:
    (1-1) splitting the clinical EEG signal with the splitter into two paths, one fed into the hospital recording system and the other into the neural signal acquisition device;
    (1-2) amplifying and band-pass filtering the input clinical EEG signal with the neural signal acquisition device to obtain the filtered EEG signal in the specific frequency domain.
  9. The application method according to claim 7, characterized in that the specific steps of step (2) are:
    (2-1) estimating the filtered clinical EEG signal with the multi-taper spectral method to obtain the power spectral density over time and frequency of the clinical EEG signal;
    (2-2) normalizing the power spectral density to obtain the time-frequency features of the clinical EEG signal on each channel;
    (2-3) according to the time-frequency characteristics of each channel, selecting the channels related to motor function, the activation time of the clinical EEG signal, and the frequency bands to obtain the channel feature quantities.
  10. The application method according to claim 6, characterized in that the steps of the online prediction phase are:
    (a) the EEG acquisition module acquires the clinical EEG signals, splits them with the splitter, and then amplifies and band-pass filters them with the neural signal acquisition device to obtain the filtered EEG signals in the specific frequency domain;
    (b) after receiving the task-start prompt from the PC control end, the EEG feature extraction and decoding module obtains the clinical EEG signal from the buffer of the neural signal acquisition device, computes the power spectral density over the frequency bands of the movement-related channels, and normalizes it;
    (c) the manipulator control module classifies the normalized features with the trained prediction model and sends the class label to the manipulator through the PC serial port to complete the gesture movement; at the same time, the peripheral module supervises and provides feedback on the tasks performed by the manipulator.
PCT/CN2016/107436 2016-11-24 2016-11-28 Brain-computer interface system for controlling manipulator motion based on clinical EEG signals and application thereof WO2018094720A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611052250.5 2016-11-24
CN201611052250.5A CN106726030B (zh) 2016-11-24 2016-11-24 Brain-computer interface system for controlling manipulator motion based on clinical EEG signals and application thereof

Publications (1)

Publication Number Publication Date
WO2018094720A1 true WO2018094720A1 (zh) 2018-05-31

Family

ID=58912403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/107436 WO2018094720A1 (zh) 2016-11-24 2016-11-28 Brain-computer interface system for controlling manipulator motion based on clinical EEG signals and application thereof

Country Status (2)

Country Link
CN (1) CN106726030B (zh)
WO (1) WO2018094720A1 (zh)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657560A (zh) * 2018-11-24 2019-04-19 天津大学 机械手臂控制在线脑-机接口系统及实现方法
CN111012339A (zh) * 2020-01-07 2020-04-17 南京邮电大学 一种基于脑电信号和生物阻抗数据生理状况监控设备
CN111317468A (zh) * 2020-02-27 2020-06-23 腾讯科技(深圳)有限公司 脑电信号分类方法、装置、计算机设备和存储介质
CN111522445A (zh) * 2020-04-27 2020-08-11 兰州交通大学 智能控制方法
CN111543983A (zh) * 2020-04-02 2020-08-18 天津大学 一种基于神经网络的脑电信号通道选择方法
CN111736690A (zh) * 2020-05-25 2020-10-02 内蒙古工业大学 基于贝叶斯网络结构辨识的运动想象脑机接口
CN112070141A (zh) * 2020-09-01 2020-12-11 燕山大学 一种融合注意力检测的ssvep异步分类方法
CN112631173A (zh) * 2020-12-11 2021-04-09 中国人民解放军国防科技大学 脑控无人平台协同控制系统
CN114027855A (zh) * 2021-12-13 2022-02-11 北京航空航天大学 一种识别头部运动意图的脑电信号解码方法及系统
US20220061742A1 (en) * 2020-08-28 2022-03-03 Covidien Lp Determining composite signals from at least three electrodes
CN114237385A (zh) * 2021-11-22 2022-03-25 中国人民解放军军事科学院军事医学研究院 一种基于非侵入脑电信号的有人机脑控交互系统
CN115153983A (zh) * 2022-06-15 2022-10-11 哈尔滨工业大学 基于机器视觉和眼动追踪的灵巧假手控制系统、设备、方法及存储介质
CN117137498A (zh) * 2023-09-15 2023-12-01 北京理工大学 基于注意力定向和运动意图脑电的紧急状况检测方法

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107315478B (zh) * 2017-07-05 2019-09-24 中国人民解放军第三军医大学 一种运动想象上肢智能康复机器人系统及其训练方法
CN112022175A (zh) * 2020-09-09 2020-12-04 东南大学 一种手部自然动作脑电测量装置
CN112515685B (zh) * 2020-11-10 2023-03-24 上海大学 基于时频共融的多通道脑电信号通道选择方法
CN113208618A (zh) * 2021-04-06 2021-08-06 北京脑陆科技有限公司 一种基于eeg信号的大小便排泄预警方法、系统
CN113288180A (zh) * 2021-05-14 2021-08-24 南昌大学 基于非侵入式脑机接口的脑控系统及其实现方法
CN113655884A (zh) * 2021-08-17 2021-11-16 河北师范大学 设备控制方法、终端及系统
CN114138111B (zh) * 2021-11-11 2022-09-23 深圳市心流科技有限公司 一种肌电智能仿生手的全系统控制交互方法
CN114936574A (zh) * 2022-04-27 2022-08-23 昆明理工大学 一种基于bci的高灵活度机械手系统及其实现方法
CN118778797A (zh) * 2023-04-04 2024-10-15 曹庆恒 一种脑机信号安全系统及其使用方法
CN117130490B (zh) * 2023-10-26 2024-01-26 天津大学 一种脑机接口控制系统及其控制方法和实现方法

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050131311A1 (en) * 2003-12-12 2005-06-16 Washington University Brain computer interface
CN101947152A (zh) * 2010-09-11 2011-01-19 山东科技大学 仿人形义肢的脑电-语音控制系统及工作方法
US20110295338A1 (en) * 2010-05-27 2011-12-01 Albert-Ludwigs-Universitat Bci apparatus for stroke rehabilitation
CN202223388U (zh) * 2011-08-30 2012-05-23 西安交通大学苏州研究院 一种可穿戴的脑控智能假肢
US20140277582A1 (en) * 2013-03-15 2014-09-18 Neurolutions, Inc. Brain-controlled body movement assistance devices and methods
US20140330394A1 (en) * 2007-06-05 2014-11-06 Washington University Methods and systems for controlling body parts and devices using ipsilateral motor cortex and motor related cortex
CN105578954A (zh) * 2013-09-25 2016-05-11 迈恩德玛泽股份有限公司 生理参数测量和反馈系统
CN105943207A (zh) * 2016-06-24 2016-09-21 吉林大学 一种基于意念控制的智能假肢运动系统及其控制方法

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1744073A (zh) * 2005-09-26 2006-03-08 天津大学 利用小波神经网络提取想象动作电位的方法
CN100571617C (zh) * 2007-12-25 2009-12-23 天津大学 站起想象动作脑电的信号采集和特征提取方法
CN101488189B (zh) * 2009-02-04 2012-01-18 天津大学 基于独立分量自动聚类处理的脑电信号处理方法
CN102096468A (zh) * 2011-01-20 2011-06-15 中山大学 一种基于脑-机接口的家电遥控装置及方法
CN103268149B (zh) * 2013-04-19 2016-06-15 杭州电子科技大学 一种基于脑机接口的实时主动式系统控制方法
CN103425249A (zh) * 2013-09-06 2013-12-04 西安电子科技大学 基于正则化csp和src的脑电信号分类识别方法及其遥控系统
CN103885445B (zh) * 2014-03-20 2016-05-11 浙江大学 一种脑控动物机器人系统以及动物机器人的脑控方法
CN104571504B (zh) * 2014-12-24 2017-07-07 天津大学 一种基于想象动作的在线脑‑机接口方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050131311A1 (en) * 2003-12-12 2005-06-16 Washington University Brain computer interface
US20140330394A1 (en) * 2007-06-05 2014-11-06 Washington University Methods and systems for controlling body parts and devices using ipsilateral motor cortex and motor related cortex
US20110295338A1 (en) * 2010-05-27 2011-12-01 Albert-Ludwigs-Universitat Bci apparatus for stroke rehabilitation
CN101947152A (zh) * 2010-09-11 2011-01-19 山东科技大学 仿人形义肢的脑电-语音控制系统及工作方法
CN202223388U (zh) * 2011-08-30 2012-05-23 西安交通大学苏州研究院 一种可穿戴的脑控智能假肢
US20140277582A1 (en) * 2013-03-15 2014-09-18 Neurolutions, Inc. Brain-controlled body movement assistance devices and methods
CN105578954A (zh) * 2013-09-25 2016-05-11 迈恩德玛泽股份有限公司 生理参数测量和反馈系统
CN105943207A (zh) * 2016-06-24 2016-09-21 吉林大学 一种基于意念控制的智能假肢运动系统及其控制方法

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657560A (zh) * 2018-11-24 2019-04-19 天津大学 机械手臂控制在线脑-机接口系统及实现方法
CN111012339A (zh) * 2020-01-07 2020-04-17 南京邮电大学 一种基于脑电信号和生物阻抗数据生理状况监控设备
CN111317468A (zh) * 2020-02-27 2020-06-23 腾讯科技(深圳)有限公司 脑电信号分类方法、装置、计算机设备和存储介质
CN111317468B (zh) * 2020-02-27 2024-04-19 腾讯科技(深圳)有限公司 脑电信号分类方法、装置、计算机设备和存储介质
CN111543983A (zh) * 2020-04-02 2020-08-18 天津大学 一种基于神经网络的脑电信号通道选择方法
CN111543983B (zh) * 2020-04-02 2023-04-18 天津大学 一种基于神经网络的脑电信号通道选择方法
CN111522445A (zh) * 2020-04-27 2020-08-11 兰州交通大学 智能控制方法
CN111736690A (zh) * 2020-05-25 2020-10-02 内蒙古工业大学 基于贝叶斯网络结构辨识的运动想象脑机接口
US20220061742A1 (en) * 2020-08-28 2022-03-03 Covidien Lp Determining composite signals from at least three electrodes
CN112070141B (zh) * 2020-09-01 2024-02-02 燕山大学 一种融合注意力检测的ssvep异步分类方法
CN112070141A (zh) * 2020-09-01 2020-12-11 燕山大学 一种融合注意力检测的ssvep异步分类方法
CN112631173A (zh) * 2020-12-11 2021-04-09 中国人民解放军国防科技大学 脑控无人平台协同控制系统
CN114237385B (zh) * 2021-11-22 2024-01-16 中国人民解放军军事科学院军事医学研究院 一种基于非侵入脑电信号的有人机脑控交互系统
CN114237385A (zh) * 2021-11-22 2022-03-25 中国人民解放军军事科学院军事医学研究院 一种基于非侵入脑电信号的有人机脑控交互系统
CN114027855A (zh) * 2021-12-13 2022-02-11 北京航空航天大学 一种识别头部运动意图的脑电信号解码方法及系统
CN114027855B (zh) * 2021-12-13 2022-09-23 北京航空航天大学 一种识别头部运动意图的脑电信号解码方法及系统
CN115153983A (zh) * 2022-06-15 2022-10-11 哈尔滨工业大学 基于机器视觉和眼动追踪的灵巧假手控制系统、设备、方法及存储介质
CN115153983B (zh) * 2022-06-15 2024-04-12 哈尔滨工业大学 基于机器视觉和眼动追踪的灵巧假手控制系统、设备、方法及存储介质
CN117137498A (zh) * 2023-09-15 2023-12-01 北京理工大学 基于注意力定向和运动意图脑电的紧急状况检测方法

Also Published As

Publication number Publication date
CN106726030A (zh) 2017-05-31
CN106726030B (zh) 2019-01-04

Similar Documents

Publication Publication Date Title
WO2018094720A1 (zh) Brain-computer interface system for controlling manipulator motion based on clinical EEG signals and application thereof
Guger et al. How many people are able to operate an EEG-based brain-computer interface (BCI)?
Alomari et al. Automated classification of L/R hand movement EEG signals using advanced feature extraction and machine learning
Rak et al. Brain-computer interface as measurement and control system the review paper
Rao et al. Brain-computer interfacing [in the spotlight]
CN105361880B (zh) 肌肉运动事件的识别系统及其方法
CN103699226B (zh) 一种基于多信息融合的三模态串行脑-机接口方法
Prashant et al. Brain computer interface: A review
CN103699217A (zh) 一种基于运动想象和稳态视觉诱发电位的二维光标运动控制系统及方法
Naveen et al. Brain computing interface for wheel chair control
Giudice et al. 1D Convolutional Neural Network approach to classify voluntary eye blinks in EEG signals for BCI applications
Huong et al. Classification of left/right hand movement EEG signals using event related potentials and advanced features
Zheng et al. A portable wireless eye movement-controlled human-computer interface for the disabled
CN113359991A (zh) 一种面向残疾人的智能脑控机械臂辅助进食系统及方法
CN115670481B (zh) 意识关联康复机器臂运动意愿提取及协同控制方法
Akhanda et al. Detection of cognitive state for brain-computer interfaces
Park et al. Application of EEG for multimodal human-machine interface
Xing et al. The development of EEG-based brain computer interfaces: potential and challenges
CN114098768A (zh) 基于动态阈值和EasyTL的跨个体表面肌电信号手势识别方法
Bandara et al. Differentiation of signals generated by eye blinks and mouth clenching in a portable brain computer interface system
Kæseler et al. Brain patterns generated while using a tongue control interface: a preliminary study with two individuals with ALS
Matsuno et al. Machine learning using brain computer interface system
Lokare et al. Comparing wearable devices with wet and textile electrodes for activity recognition
Radeva et al. Human-computer interaction system for communications and control
Rao et al. A reliable eye blink based home automation system using false free detection algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16922195

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16922195

Country of ref document: EP

Kind code of ref document: A1