CN114864044A - Hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology - Google Patents
- Publication number
- CN114864044A (application number CN202210524747.1A)
- Authority
- CN
- China
- Prior art keywords
- module
- electroencephalogram
- augmented reality
- electroencephalogram signals
- patient
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/398—Electrooculography [EOG], e.g. detecting nystagmus; Electroretinography [ERG]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
The invention discloses a hand function rehabilitation training system that fuses eye movement direction classification with augmented reality technology. The system comprises an electroencephalogram acquisition device, a data analysis module, a data preprocessing module, a convolutional neural network module, and a hand function rehabilitation module. The electroencephalogram acquisition device collects raw electroencephalogram signals from the channels related to the patient's eyeball movement; the data analysis module receives these signals and parses them into a processable data format; the data preprocessing module preprocesses the signals; and the convolutional neural network module receives the preprocessed signals and performs model training and testing. The resulting classification is then used as an instruction for interacting with the hand function rehabilitation module. By combining augmented reality with eye movement recognition, the invention offers stroke patients a more comfortable rehabilitation experience, meets their interaction needs, and increases their motivation to participate in rehabilitation.
Description
Technical Field
The invention relates to a hand function rehabilitation training system, in particular to a hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology.
Background
With the aging of China's population, the incidence of stroke is rising. Stroke severely affects patients' activities of daily living; its main clinical manifestation is difficulty completing basic hand motions such as grasping, gripping, and finger pointing. To promote hand function rehabilitation in stroke patients more effectively, rehabilitation training systems based on brain-computer interface (BCI) technology have been developed and are increasingly applied in clinical treatment.
However, patients in the early stage of rehabilitation are mostly confined to a lying position and must pass through a long adaptation and familiarization stage before formally entering a rehabilitation course, during which their attention is often hard to hold, affecting both the rehabilitation process and its outcome. A key reason is that traditional computer displays do little to motivate patients to participate actively in rehabilitation training. Virtual reality can provide a degree of immersion, but it brings problems of motion sickness and fatigue that ultimately reduce the comfort of hand rehabilitation training. Augmented reality is a newer display technology developed on the basis of virtual reality: it avoids the vertigo that virtual reality can cause in users, stimulates multiple sense organs to help the higher centers of the brain integrate information, establishes new channels for receiving and processing information, and, combined with a bilateral cooperative rehabilitation training method, activates the coupling mechanism between the two cerebral hemispheres, which is more conducive to hand function rehabilitation.
In addition, patients in the early stage of rehabilitation have everyday interaction needs that conventional hand function rehabilitation systems cannot meet. Such patients find it difficult to complete rehabilitation training tasks independently, so a rehabilitation therapist must perform a large amount of auxiliary work at a very high labor cost. Expressing intention through bioelectrical signals is a natural and convenient mode of human-computer interaction, and eyeball movement is one of its main channels, making eye movement a suitable control instruction. Indeed, before entering a BCI rehabilitation course, the patient can already achieve human-computer interaction through eye movement, meeting these interaction needs.
Eye movement can be captured in several ways, but the traditional methods have low applicability for patients in the early stage of rehabilitation. First, measuring the electrooculogram with patch electrodes requires electrodes pasted around the eyes, which narrows the visual field, easily causes facial muscle discomfort after long wear, and degrades signal quality, making it unsuitable for the rehabilitation process. Second, an eye tracker captures eye movement by computer imaging, but it adds both equipment cost and a learning cost for the rehabilitation therapist.
Disclosure of Invention
In view of the above situation, and to overcome the defects of the prior art, the present invention provides a hand function rehabilitation training system combining eye movement direction classification and augmented reality technology.
The invention solves the technical problems through the following technical scheme: a hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology, comprising an electroencephalogram acquisition device, a data analysis module, a data preprocessing module, a convolutional neural network module, and a hand function rehabilitation module. The electroencephalogram acquisition device collects raw electroencephalogram signals from the channels related to the patient's eyeball movement; the data analysis module receives the signals and parses them into a processable data format; the data preprocessing module preprocesses the signals; the convolutional neural network module receives the preprocessed signals and performs model training and testing; and the resulting classification is used as an instruction for interacting with the hand function rehabilitation module.
Preferably, the patient wears the electroencephalogram acquisition device and an augmented reality device on the head, and a pneumatic glove on the affected hand.
Preferably, the patient selects an action to be rehabilitated at a system interface in the augmented reality device, thereby entering a corresponding rehabilitation scenario.
Preferably, the electroencephalogram acquisition device uses the international 10-20 electrode placement system.
Preferably, the data preprocessing module employs the following algorithms: a baseline-removal algorithm, a normalization algorithm, and a least-squares fitting algorithm.
Preferably, the convolutional neural network module comprises a convolutional neural network model, and the convolutional neural network model comprises the following seven layers: an input layer, a batch normalization layer, a convolutional layer, a max pooling layer, a flattening layer, a fully connected layer, and an output layer.
The positive effects of the invention are as follows. To reduce rehabilitation cost, avoid the vertigo associated with virtual reality technology, meet patients' interaction needs, and increase their motivation to participate in rehabilitation, the invention combines augmented reality with eye movement recognition technology to bring stroke patients a more comfortable rehabilitation experience. The equipment is light and user-friendly, convenient for patients in the early stage of stroke rehabilitation to use in a lying position at any time and place. The display adopts augmented reality technology, superimposing a digital interactive rehabilitation scene while the surrounding real environment remains visible, which makes training more engaging and better guided and further arouses the patient's enthusiasm for rehabilitation training. Building on this technical background of stroke rehabilitation training systems, the invention obtains eye movement data from patients in the early stage of rehabilitation without additional equipment, more accurately and more stably. The invention also uses eyeball motion as a control instruction to interact with the augmented reality scene, which promotes the patient's attention while meeting the interaction needs of early-stage patients to a certain extent. The eye movement direction recognition algorithm adopted by the invention achieves high accuracy, verifying the feasibility of the method, ensuring the accuracy of the eye movement control commands, and bringing patients a better rehabilitation experience with great practicability.
Drawings
Fig. 1 is a block diagram of a hand function rehabilitation training system incorporating eye movement direction classification and augmented reality technology according to the present invention.
Fig. 2 is a schematic diagram of a hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
In the description of the present invention, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
As shown in Fig. 1, the hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology comprises an electroencephalogram acquisition device 1, a data analysis module 2, a data preprocessing module 3, a convolutional neural network module 4, and a hand function rehabilitation module 5. The electroencephalogram acquisition device collects raw electroencephalogram signals from the channels related to the patient's eyeball movement; the data analysis module receives the signals and parses them into a processable data format; the data preprocessing module preprocesses the signals; the convolutional neural network module receives the preprocessed signals and performs model training and testing; and the resulting classification is used as an instruction for interacting with the hand function rehabilitation module.
As shown in Fig. 2, the patient wears the electroencephalogram acquisition device and an augmented reality device on the head and a pneumatic glove on the affected hand. The patient selects an action to rehabilitate at the system interface in the augmented reality device, thereby entering the corresponding rehabilitation scene. In the scene, a progress bar together with visual and auditory prompts tells the patient the specified time in which to complete the required training action. For example, when the task is to grasp a strawberry at a given position in the rehabilitation scene, the patient grasps it with the healthy hand; on completion of this action, the pneumatic glove is activated and drives the affected hand through the same grasping motion. Visual feedback then indicates whether the action was completed correctly. Any one of four background music tracks can play automatically in the scene; the patient switches tracks by moving the eyeballs up, down, left, or right, each direction corresponding to one track.
The electroencephalogram acquisition device adopts the international 10-20 electrode placement system, acquiring electroencephalogram data from four channels at the frontal and temporal positions F7, T3, F8, and T4 at a sampling rate of 300 Hz.
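For concreteness, the relation between the stated sampling rate and the network's input window can be checked in a few lines of Python. Note that the ~3 s trial length is an inference from the CNN input dimensions [900 × 4 × 1] described below, not a figure stated in the patent:

```python
# The 300 Hz sampling rate and the four channels are stated in the patent;
# the 900-sample window (and hence the ~3 s trial length) is inferred from
# the CNN input layer dimensions [900 x 4 x 1].
SAMPLING_RATE_HZ = 300
SAMPLES_PER_WINDOW = 900
N_CHANNELS = 4  # F7, T3, F8, T4

window_seconds = SAMPLES_PER_WINDOW / SAMPLING_RATE_HZ
print(window_seconds)  # 3.0
```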
The data preprocessing module adopts the following algorithms:

First, the baseline-removal algorithm. The EEG signal from each channel of the respective electrodes can be represented as in equation (1):

S(m, n) = [s(m, 1), s(m, 2), …, s(m, n−1), s(m, n)] ……(1)

where S is the EEG signal, m is the serial number of the EEG channel, n indexes the nth sample point in the discrete time series, and s(m, n) is the amplitude of the EEG signal of channel m at the nth sample point.

Subtracting the mean amplitude over all sample points in a channel from the amplitude at each sample point removes the baseline, as in equations (2) and (3):

X_i = S_i − μ(S) ……(2)

μ(S) = (1/n) · Σ_{j=1}^{n} s(m, j) ……(3)

where X_i is the baseline-removed value of each channel signal, S_i is the original value of each channel signal, and μ(S) is the mean value of each channel signal.
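A minimal NumPy sketch of the baseline-removal step of equation (2); the (channels × samples) array layout is our assumption, not specified in the patent:

```python
import numpy as np

def remove_baseline(eeg):
    """Subtract each channel's mean amplitude from every sample of that
    channel, per equation (2): X_i = S_i - mu(S)."""
    eeg = np.asarray(eeg, dtype=float)
    return eeg - eeg.mean(axis=1, keepdims=True)

# Toy example: 2 channels, 4 sample points each.
x = np.array([[1.0, 2.0, 3.0, 4.0],
              [10.0, 10.0, 10.0, 10.0]])
y = remove_baseline(x)
# After removal, every channel has zero mean; the flat channel becomes all zeros.
```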
Second, the normalization algorithm. To make the model more accurate and to accelerate the convergence of the learning algorithm in the subsequent model training stage, the baseline-removed data must be feature-scaled, i.e., normalized. Since EEG data contains both positive and negative polarities, normalization is performed with equation (4) so as to preserve them:

X_normalized = 2 · (X_i − X_min) / (X_max − X_min) − 1 ……(4)

where X_normalized is the normalized value; X_max is the maximum absolute value of X_i before normalization, i.e., +Max(abs(X_i)); and X_min is the negative of that maximum absolute value, i.e., −Max(abs(X_i)). Substituting X_max and X_min into equation (4) yields equation (5):

X_normalized = X_i / Max(abs(X_i)) ……(5)
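The polarity-preserving normalization of equation (5) reduces to dividing by the channel's maximum absolute value; a short sketch (the guard against an all-zero signal is our addition):

```python
import numpy as np

def normalize_polarity_preserving(x):
    """Scale a baseline-removed channel into [-1, 1] by dividing by its
    maximum absolute value, per equation (5): X_norm = X_i / max|X_i|."""
    x = np.asarray(x, dtype=float)
    m = np.max(np.abs(x))
    return x / m if m > 0 else x  # avoid dividing an all-zero channel by 0

v = normalize_polarity_preserving([-2.0, 1.0, 0.5])
# v == [-1.0, 0.5, 0.25]; the signs (polarities) of the samples are preserved.
```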
Third, the least-squares fitting algorithm. Finally, the best-matching function of the data is found by minimizing the sum of squared errors, fitting the original data with a polynomial. The problem is: given a set of points (x_i, y_i), i = 0, 1, 2, …, n, find the polynomial of degree k (k < n), as in equation (6), that minimizes the total error Q of equation (7):

p(x) = Σ_{j=0}^{k} a_j · x^j ……(6)

Q = Σ_{i=0}^{n} (p(x_i) − y_i)² = Σ_{i=0}^{n} (Σ_{j=0}^{k} a_j · x_i^j − y_i)² ……(7)

where the a_j are the polynomial coefficients to be determined and j indexes the terms in the summation. The total error Q is regarded as a function of the a_j (j = 0, 1, 2, …, k), so the above problem is equivalent to a multivariate function extremum problem: setting the partial derivatives of Q to zero gives equation (8):

∂Q/∂a_j = 2 · Σ_{i=0}^{n} x_i^j · (Σ_{t=0}^{k} a_t · x_i^t − y_i) = 0, j = 0, 1, 2, …, k ……(8)

Solving equation (8) yields the coefficients a_j (j = 0, 1, 2, …, k) of the fitting polynomial p(x).
The convolutional neural network module 4 includes a convolutional neural network model comprising seven layers: an input layer, a batch normalization layer, a convolutional layer, a max pooling layer, a flattening layer, a fully connected layer, and an output layer.

Input layer: the input is the preprocessed EEG data in matrix form with dimensions [900 × 4 × 1], where 4 is the number of channels selected above, 900 is the number of sample points per channel, and 1 is the data depth.

Convolutional layer: a 2D convolution kernel spanning time and channels is set with size [10 × 2], convolution stride [2 × 1], and 20 filter outputs; the input data is zero-padded ("same" padding), and the activation function is ELU. The output shape is [450 × 4 × 20].

Batch normalization layer: does not change the output size of the previous layer; input and output are both [450 × 4 × 20]. The batch normalization parameters are all default configurations.

Max pooling layer: a max pooling window of size [10 × 2] with stride [2 × 1]. The output shape is [221 × 3 × 20].

Flattening layer: vectorizes the input, stacking it into one large one-dimensional vector with shape [13260].

Fully connected layer: the goal is four-class classification, so the output space dimension is set to 4 with a Softmax activation function; the output shape is [4].

Output layer: outputs the four-class classification result.
In this system, eye movement serves as the control instruction: through the augmented reality interface, the patient switches music tracks in the scene by controlling eyeball movement and performs hand function rehabilitation training with the pneumatic glove at the device terminal, making the system especially suitable for the initial rehabilitation training of stroke patients. The invention uses the eyeball-movement electrical components within the electroencephalogram signal to recognize and classify eye movements in the up, down, left, and right directions through preprocessing and a convolutional neural network. The invention provides the patient with a visual augmented reality interface and rehabilitation scenes, in each of which the patient can switch music through eyeball movement in the four directions.
While there have been shown and described what are at present considered the fundamental principles and essential features of the invention and its advantages, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing exemplary embodiments, but is capable of other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.
Claims (6)
1. A hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology is characterized by comprising electroencephalogram acquisition equipment, a data analysis module, a data preprocessing module, a convolutional neural network module and a hand function rehabilitation module, wherein the electroencephalogram acquisition equipment acquires original electroencephalogram signals of channels related to eyeball movement of a patient, the data analysis module receives the electroencephalogram signals and analyzes the electroencephalogram signals into electroencephalogram signals with a data format capable of being processed, the data preprocessing module preprocesses the electroencephalogram signals, the convolutional neural network module receives the preprocessed electroencephalogram signals and conducts model training and testing, and after classification results are obtained, the classification results are used as instructions interacting with the hand function rehabilitation module.
2. The system of claim 1, wherein the patient wears an electroencephalogram acquisition device and an augmented reality device on the head and pneumatic gloves on the affected hand.
3. The system of claim 2, wherein the patient selects an action to be rehabilitated at a system interface of the augmented reality device to enter a corresponding rehabilitation scenario.
4. The eye movement direction classification and augmented reality fused hand function rehabilitation training system according to claim 1, wherein the electroencephalogram acquisition device uses an international 10-20 lead distribution mode.
5. The system of claim 1, wherein the data preprocessing module employs the following algorithms: a baseline-removal algorithm, a normalization algorithm, and a least-squares fitting algorithm.
6. The system of claim 1, wherein the convolutional neural network module comprises a convolutional neural network model, and wherein the convolutional neural network model comprises the following seven layers: an input layer, a batch normalization layer, a convolutional layer, a max pooling layer, a flattening layer, a fully connected layer, and an output layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210524747.1A CN114864044A (en) | 2022-05-13 | 2022-05-13 | Hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210524747.1A CN114864044A (en) | 2022-05-13 | 2022-05-13 | Hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114864044A true CN114864044A (en) | 2022-08-05 |
Family
ID=82636434
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210524747.1A Pending CN114864044A (en) | 2022-05-13 | 2022-05-13 | Hand function rehabilitation training system integrating eye movement direction classification and augmented reality technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114864044A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113274032A (en) * | 2021-04-29 | 2021-08-20 | 上海大学 | Cerebral apoplexy rehabilitation training system and method based on SSVEP + MI brain-computer interface |
Non-Patent Citations (1)
Title |
---|
HAODONG ZHUANG et al., "EEG Based Eye Movements Multi-Classification Using Convolutional Neural Network", Proceedings of the 40th Chinese Control Conference, 31 July 2021, pages 7191-7195, XP033981130, DOI: 10.23919/CCC52363.2021.9550039
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |