CN114504468A - Hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology - Google Patents

Hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology

Info

Publication number
CN114504468A
Authority
CN
China
Prior art keywords
rehabilitation
finger
module
layer
user
Prior art date
Legal status
Granted
Application number
CN202210114284.1A
Other languages
Chinese (zh)
Other versions
CN114504468B (en)
Inventor
高忠科
孙新林
马超
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202210114284.1A
Publication of CN114504468A
Application granted
Publication of CN114504468B
Status: Active

Classifications

    • A61H 1/0288: Stretching or bending apparatus for exercising the upper limbs; hand; fingers
    • A61B 5/369: Electroencephalography [EEG]
    • A61B 5/7203: Signal processing specially adapted for physiological signals, for noise prevention, reduction or removal
    • A61B 5/725: Details of waveform analysis using specific filters, e.g. Kalman or adaptive filters
    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification techniques based on parametric or probabilistic models
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans, relating to physical therapies or activities
    • A61H 2201/0157: Constructive details; portable
    • A61H 2201/1238: Driving means with hydraulic or pneumatic drive
    • A61H 2201/5046: Control means; interfaces to the user; touch screens
    • A61H 2230/105: Electroencephalographic signals used as a control parameter for the apparatus
    • G06F 2218/12: Pattern recognition specially adapted for signal processing; classification; matching
    • Y02P 90/30: Computing systems specially adapted for manufacturing


Abstract

A hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology comprises, connected in sequence: a portable electroencephalogram (EEG) acquisition device, a human-computer interaction interface, an EEG intelligent decoding module and an intelligent rehabilitation hand device. The system has two working modes, a rehabilitation training mode and a rehabilitation effect evaluation mode. In the rehabilitation training mode, the rehabilitation action is shown to the user, the user performs motor imagery of the corresponding action, and, according to the imagery, an air pump drives the corresponding finger joints to complete the rehabilitation action. In the rehabilitation effect evaluation mode, the user selects a rehabilitation action, performs the corresponding finger movement, and the rehabilitation effect is evaluated from the user's finger movements. By recognizing the user's movement intention, the system drives the intelligent rehabilitation hand device to perform the corresponding action and assists the user in completing hand all-finger rehabilitation training. The user can also evaluate the hand rehabilitation effect, closing the rehabilitation training loop and promoting rehabilitation training more efficiently.

Description

Hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology
Technical Field
The invention relates to a hand rehabilitation device, and in particular to a hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology.
Background
Surveys show that stroke has become the leading cause of adult disability in China. Stroke is characterized by high incidence and a high disability rate; the population of Chinese stroke patients is large, treatment periods are long, and recovery outcomes are poor. A stroke damages part of the patient's brain and thereby causes loss of control over part of the limbs. The hands, as important organs of the human body, play an extremely important role in daily activities, so restoring hand function in stroke patients is essential. Rehabilitation training is likewise required for patients recovering hand function after hand surgery. Existing research shows that, compared with traditional passive rehabilitation, active rehabilitation, which requires the patient to cooperate actively with the rehabilitation process, can provide a better rehabilitation effect. The key to active rehabilitation lies in restoring the natural synchronization between brain intention and hand movement, so the rehabilitation movement of the hand must be matched to the intention signals of the brain. The electroencephalogram (EEG) is an overall reflection of the activity of cortical neurons; it contains a wealth of physiological and pathological information and represents the activity and thinking state of the human brain. An active rehabilitation scheme combined with EEG detection technology can therefore recognize the patient's intended rehabilitation action in time, drive the rehabilitation equipment to assist the patient in completing the action, and accelerate recovery.
In recent years, portable EEG acquisition devices have attracted more and more attention. Compared with traditional EEG acquisition equipment, they are smaller and lighter; portability and usability are greatly improved without reducing the quality of the acquired signal, while cost and power consumption are further reduced. For patients who need treatment at home, portable EEG acquisition equipment makes home EEG monitoring possible.
EEG data are nonlinear, complex in their features and of low signal-to-noise ratio. Traditional visual inspection requires experienced experts to examine the EEG, and traditional machine-learning algorithms require EEG features to be extracted manually for analysis; both approaches depend on manual operation and inspection, are unsuited to long-term monitoring, and may miss important EEG features because of subjective factors. Deep learning, as the most advanced branch of machine learning, has strong advantages in processing large volumes of data and has been widely applied to EEG research. It is an end-to-end method that extracts and learns deep intrinsic features directly from the input signal and classifies them. To date, many deep-learning architectures have been proposed and applied in fields such as EEG analysis and rehabilitation training.
Disclosure of Invention
The invention aims to overcome the deficiencies of the prior art by providing a hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology that can effectively recognize and correctly classify motor-imagery EEG signals of the fingers and promote recovery of finger motor function.
The technical solution adopted by the invention is as follows: a hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology, comprising, connected in sequence: a portable electroencephalogram acquisition device, a human-computer interaction interface, an EEG intelligent decoding module and an intelligent rehabilitation hand device, wherein the system has two working modes, a rehabilitation training mode and a rehabilitation effect evaluation mode;
in the rehabilitation training mode, the user selects a rehabilitation action through the human-computer interaction interface and performs motor imagery of the corresponding action following the on-screen prompt; the portable EEG acquisition device acquires the EEG signal from the user's brain; the EEG intelligent decoding module decodes the acquired EEG signal with deep-learning techniques, judges whether the user performed motor imagery of the corresponding action, and sends the judgment to the intelligent rehabilitation hand device; based on the judgment of the EEG intelligent decoding module, the intelligent rehabilitation hand device drives the corresponding finger joints through the air pump to complete the corresponding rehabilitation action;
in the rehabilitation effect evaluation mode, the user first selects a rehabilitation action through the human-computer interaction interface and then performs the corresponding finger movement; the intelligent rehabilitation hand device wirelessly sends the finger-movement signals to the human-computer interaction interface for rehabilitation effect evaluation.
Both the rehabilitation training mode and the rehabilitation effect evaluation mode support 12 finger actions: bending the thumb, bending the index finger, bending the middle finger, bending the ring finger, bending the little finger, bending thumb + index finger, bending thumb + middle finger, bending thumb + ring finger, bending thumb + little finger, bending all fingers, and extending all fingers.
The portable electroencephalogram acquisition device comprises: an electrode cap with its connection lines for acquiring the EEG signal, a physiological electrical-signal acquisition-conversion module for amplifying and converting the EEG signal, an integrated Wi-Fi module for controlling that module, and a power circuit. The electrode cap contacts the user's scalp directly through its electrodes and picks up the EEG signal from the scalp surface; it is connected to the physiological electrical-signal acquisition-conversion module through a Y2 interface on the connection lines, which transmit the EEG signal. The integrated Wi-Fi module reads data from the acquisition-conversion module and sends it to the EEG intelligent decoding module.
The electrode cap and its connection lines acquire, through the electrodes, the EEG signals of the sixteen electrodes FP1, FP2, F3, Fz, F4, FCz, T3, C3, Cz, C4, T4, P3, Pz, P4, Oz and A1; the electrode placement of the cap follows the international 10/20 standard lead system.
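For reference, the montage can be written down directly; a minimal sketch in Python follows (the electrode names are from the list above, while the index order is an assumption for illustration):

```python
# Illustrative channel map for the 16-electrode 10/20 montage listed above.
# The index order is an assumption; the patent does not specify it.
EEG_CHANNELS = [
    "FP1", "FP2", "F3", "Fz", "F4", "FCz", "T3", "C3",
    "Cz", "C4", "T4", "P3", "Pz", "P4", "Oz", "A1",
]
# name -> row index in the g x L signal array
CHANNEL_INDEX = {name: i for i, name in enumerate(EEG_CHANNELS)}
```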
The power circuit is powered by a 4.2 V rechargeable lithium battery and switches between a charging mode and a working mode via a switch: when the switch is off, the power circuit enters the charging mode and the user charges the device from a 5 V supply interface through a USB cable; in the charging mode, an integrated transformer module electrically isolates the circuit board from the external supply, preventing circuit damage from over-voltage; when the switch is on, the power circuit enters the working mode and charging is disabled; in the working mode, several low-noise linear voltage regulators with different output voltages meet the supply requirements of the various devices on the circuit board.
The human-computer interaction interface comprises: a touch display screen, a Wi-Fi module, a Bluetooth module, a voice prompt module, and an MCU (microcontroller unit) processor connected to each of them. The MCU processor runs an embedded operating system and handles driving the touch display screen, sending and receiving data through the Wi-Fi module and the Bluetooth module, and playing voice prompts through the voice prompt module. The user selects actions and sets parameters through the touch display screen and carries out rehabilitation training or rehabilitation effect evaluation following its on-screen prompts. The MCU processor communicates with the EEG intelligent decoding module through the Wi-Fi module, reads the sensor information of the intelligent rehabilitation hand device through the Bluetooth module, and transmits action-selection and parameter-setting information to the intelligent rehabilitation hand device.
The EEG intelligent decoding module analyzes and processes the EEG signal with a deep-learning method, specifically comprising the following steps:
1) preprocessing of the EEG signal, comprising a filtering/denoising stage and a data-augmentation stage;
in the filtering and denoising stage, a notch filter first removes the 50 Hz power-line interference, and a bank of Butterworth band-pass filters then splits the notch-filtered EEG signal into 4 frequency bands, theta (4-7 Hz), alpha (8-12 Hz), beta (13-30 Hz) and gamma (31-50 Hz), giving the preprocessed EEG signals $X^{c} \in \mathbb{R}^{g \times L}$, where $c$ denotes the frequency band, $L$ the data length and $g$ the number of channels;
in the data-augmentation stage, the preprocessed EEG signal $X^{c}$ of each of the 4 bands is segmented by sliding windows of length $l$ with sliding step $b$, the windows not overlapping one another; the data of the $j$-th window form one sample $X^{c}_{j} = \big[x^{c,g}_{p}\big] \in \mathbb{R}^{g \times l}$, where $x^{c,g}_{p}$ is the $p$-th data point in the $g$-th channel of the $c$-th band; each sample is given a label stating whether the user performed motor imagery of the corresponding action during the $j$-th window;
2) building a deep convolutional neural network model based on an attention mechanism, feeding all samples with their labels into the model, and training it with full supervision: first, samples from all users are used to train a pre-trained model; then, starting from the pre-trained model, each user's own samples are used for fine-tuning, yielding a fine-tuned model matched to that user.
The attention-based deep convolutional neural network model comprises 4 branches, each branch comprising, connected in series:
(2.1) a data input layer, whose input is the user's preprocessed EEG sample $X^{c}_{j}$ for the theta, alpha, beta or gamma band;
(2.2) a first convolution layer, with kernel size 1 × l, 32 kernels and a regularization coefficient of 0.01, where l is the sliding-window length;
(2.3) a second convolution layer, with kernel size 1 × 16 and 32 kernels;
(2.4) a first concatenation layer, which splices the outputs of the first and second convolution layers along the last dimension;
(2.5) a third convolution layer, with kernel size g × 1, depth multiplier 1 and a regularization coefficient of 0.01, where g is the number of channels;
(2.6) a first batch-normalization layer, to accelerate model training and reduce overfitting;
(2.7) a first activation layer, using the ELU activation function;
(2.8) a first average-pooling layer, with pooling kernel size 1 × 4;
(2.9) a first Dropout layer, with dropout probability 0.5;
(2.10) a fourth convolution layer, with kernel size 1 × 16, depth multiplier 1 and a regularization coefficient of 0.01;
(2.11) a second batch-normalization layer, to accelerate model training and reduce overfitting;
(2.12) a second activation layer, using the ELU activation function;
(2.13) a second average-pooling layer, with pooling kernel size 1 × 8;
(2.14) a second Dropout layer, with dropout probability 0.5;
(2.15) an attention mechanism module, comprising:
(2.15.1) a Reshape layer, which converts the output of the second Dropout layer to size 8 × 64;
(2.15.2) a first fully connected layer, with 64 neurons and tanh activation;
(2.15.3) a first permutation layer, which swaps the 1st and 2nd dimensions of its input;
(2.15.4) a second fully connected layer, with 1 neuron and softmax activation;
(2.15.5) a second permutation layer, which swaps the 1st and 2nd dimensions of its input;
(2.15.6) a multiplication layer, which multiplies the output of the second Dropout layer element-wise with the output of the second permutation layer;
(2.15.7) a custom addition layer, which sums the output of the multiplication layer along the 2nd dimension;
(2.15.8) a flattening layer, which expands the output of the custom addition layer into a one-dimensional sequence;
the flattening-layer outputs of the 4 branches are spliced by a second concatenation layer, whose output is connected to a third fully connected layer; the third fully connected layer uses softmax as its activation function, has 2 neurons, and outputs the judgment result for the specified action.
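For concreteness, a minimal Keras sketch of the attention mechanism module as listed above (assuming, per item (2.15.1), that the second Dropout output reshapes to 8 × 64; the summation axis follows one possible reading of "the 2nd dimension"):

```python
import tensorflow as tf
from tensorflow.keras import layers

def attention_module(x):
    """Attention mechanism module per (2.15); x is the second Dropout output."""
    x = layers.Reshape((8, 64))(x)                      # (2.15.1) resize to 8 x 64
    a = layers.Dense(64, activation="tanh")(x)          # (2.15.2) first fully connected layer
    a = layers.Permute((2, 1))(a)                       # (2.15.3) swap 1st and 2nd dimensions
    a = layers.Dense(1, activation="softmax")(a)        # (2.15.4) second fully connected layer
    a = layers.Permute((2, 1))(a)                       # (2.15.5) swap back
    x = layers.Lambda(lambda t: t[0] * t[1])([x, a])    # (2.15.6) element-wise weighting
    x = layers.Lambda(lambda t: tf.reduce_sum(t, axis=1))(x)  # (2.15.7) custom addition layer
    return layers.Flatten()(x)                          # (2.15.8) flatten to a 1-D sequence
```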
The intelligent rehabilitation hand device comprises a pneumatic glove, a control module mounted on the back of the glove, a curvature sensor mounted on the back of each glove finger and wired to the control module, a pressure sensor mounted at the tip of each glove finger, and an air pump connected to each glove finger through an air path. The pneumatic glove uses the air pump to bend or extend each finger independently and is made from a soft elastic glove, making it comfortable and easy for the user to wear. Each curvature sensor outputs a voltage that varies with the current bending angle of its finger, and each pressure sensor outputs a voltage that varies with the force the finger currently exerts. The control module is built around an integrated Bluetooth module, which polls the voltage outputs of all curvature and pressure sensors through a multiplexing switch and sends them to the MCU processor in the human-computer interaction interface; at the same time, the control module receives the action-selection and parameter-setting information sent by the MCU processor and decodes it into a pumping/inflation control signal, an air-path selection signal and a speed control signal.
The air pump receives the pumping/inflation control signal, the air-path selection signal and the speed control signal from the control module, and the air paths control the motion of each finger of the pneumatic glove.
Before use, the user selects the system mode, i.e. either the rehabilitation training mode or the rehabilitation effect evaluation mode; wherein,
(1) the rehabilitation training mode comprises the following steps:
(1.1) the user selects the actions requiring rehabilitation training and sets the training parameters;
(1.2) the user performs motor imagery of the corresponding action following the on-screen prompt;
(1.3) the EEG intelligent decoding module decodes the user's EEG signal, judges whether motor imagery of the corresponding action was performed, and transmits the result via Wi-Fi to the MCU processor in the human-computer interaction interface;
(1.4) the MCU processor determines from the classification result whether to drive the air pump of the corresponding finger, thereby driving that finger to move;
(2) the rehabilitation effect evaluation mode comprises the following steps:
(2.1) the user selects the actions requiring rehabilitation effect evaluation and sets the evaluation parameters;
(2.2) the user performs the corresponding finger action following the on-screen prompt;
(2.3) the control module in the intelligent rehabilitation hand device reads the real-time values of the curvature sensor and pressure sensor of the corresponding finger and sends them to the MCU processor in the human-computer interaction interface;
(2.4) the MCU processor gives a rehabilitation effect evaluation grade according to the received information.
The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology can accurately acquire, effectively recognize and correctly classify EEG signals, and, by recognizing the user's movement intention, drives the intelligent rehabilitation hand device to perform the corresponding action, assisting the user in completing hand all-finger rehabilitation training. At the same time, the user can evaluate the hand rehabilitation effect with the system, closing the rehabilitation training loop and promoting rehabilitation training more efficiently.
Drawings
FIG. 1 is a block diagram of a hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to the present invention;
FIG. 2 is a block diagram showing the rehabilitation effect evaluation mode according to the present invention;
FIG. 3 is a block diagram of the portable electroencephalogram acquisition device in the present invention;
FIG. 4 is a block diagram showing the construction of a human-computer interface according to the present invention;
FIG. 5 is a diagram of action selection of the human-computer interface of the present invention;
FIG. 6 is an analysis flow chart of the electroencephalogram intelligent decoding module in the invention;
FIG. 7 is a block diagram of a deep convolutional neural network model based on an attention mechanism according to the present invention;
FIG. 8 is a block diagram of the attention mechanism module of the present invention;
FIG. 9 is a schematic structural diagram of the intelligent rehabilitation hand device of the present invention;
FIG. 10 is a schematic view showing the connection of the air pump according to the present invention;
fig. 11 is a schematic diagram of a hand all-finger rehabilitation training mode and a rehabilitation effect evaluation mode according to the present invention.
Detailed Description
The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology is described in detail below with reference to embodiments and the accompanying drawings.
As shown in fig. 1, the hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology of the present invention comprises: a portable electroencephalogram acquisition device 1, a human-computer interaction interface 2, an EEG intelligent decoding module 3 and an intelligent rehabilitation hand device 4, and has two working modes, a rehabilitation training mode and a rehabilitation effect evaluation mode;
in the rehabilitation training mode, the user selects a rehabilitation action through the human-computer interaction interface 2 and performs motor imagery of the corresponding action following the on-screen prompt; the portable electroencephalogram acquisition device 1 acquires the EEG signal from the user's brain; the EEG intelligent decoding module 3 decodes the acquired EEG signal with deep-learning techniques, judges whether the user performed motor imagery of the corresponding action, and sends the judgment to the intelligent rehabilitation hand device 4; based on the judgment of the EEG intelligent decoding module 3, the intelligent rehabilitation hand device 4 drives the corresponding finger joints through the air pump to complete the corresponding rehabilitation action;
as shown in fig. 2, in the rehabilitation effect evaluation mode, the user first selects a rehabilitation action through the human-computer interaction interface 2 and then performs the corresponding finger movement; the intelligent rehabilitation hand device 4 wirelessly sends the finger-movement signals to the MCU processor 25 in the human-computer interaction interface 2, and the MCU processor 25 then evaluates the rehabilitation effect from the sensor data.
As shown in fig. 5, both the rehabilitation training mode and the rehabilitation effect evaluation mode support 12 finger actions: bending the thumb, bending the index finger, bending the middle finger, bending the ring finger, bending the little finger, bending thumb + index finger, bending thumb + middle finger, bending thumb + ring finger, bending thumb + little finger, bending all fingers, and extending all fingers.
As shown in fig. 3, the portable electroencephalogram acquisition device 1 comprises: an electrode cap with its connection lines 11 for acquiring the EEG signal, a physiological electrical-signal acquisition-conversion module 12 for amplifying and converting the EEG signal, an integrated Wi-Fi module 13 for controlling the acquisition-conversion module 12, and a power circuit 14 connected to both the acquisition-conversion module 12 and the integrated Wi-Fi module 13. The electrode cap in item 11 contacts the user's scalp directly through its electrodes and picks up the EEG signal from the scalp surface; it is connected to the physiological electrical-signal acquisition-conversion module 12 through a Y2 interface on the connection lines, which transmit the EEG signal. The integrated Wi-Fi module 13 reads data from the acquisition-conversion module 12 and sends it to the EEG intelligent decoding module 3.
The electrode cap and its connection lines 11 acquire, through the electrodes, the EEG signals of the sixteen electrodes FP1, FP2, F3, Fz, F4, FCz, T3, C3, Cz, C4, T4, P3, Pz, P4, Oz and A1; the electrode placement of the cap follows the international 10/20 standard lead system.
The physiological electrical-signal acquisition-conversion module 12 is built around a bioelectric-signal acquisition chip that integrates several analog input channels with a high common-mode rejection ratio for receiving the EEG signals picked up by the electrode cap, low-noise programmable-gain amplifiers (PGA) for amplifying the EEG signals, and high-resolution, synchronously sampling analog-to-digital converters (ADC) for converting the analog signals into digital signals.
The integrated Wi-Fi module 13 is an ESP-12F module with IEEE 802.11b/g/n radio-frequency wireless communication and the common IIC, UART, SPI, ADC and GPIO interfaces; it is used to set the PGA gain and the ADC sampling rate of the physiological electrical-signal acquisition-conversion module 12, and it reads the acquired EEG signal from module 12 through the SPI interface. The integrated Wi-Fi module 13 works in AP mode, establishes a wireless connection with the EEG intelligent decoding module 3, and transmits the EEG signal.
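As an illustration only, a client on the decoding side might read the streamed samples as follows; the disclosure does not specify the transport framing, so the host address, port and frame layout in this sketch are assumptions:

```python
import socket
import numpy as np

HOST, PORT = "192.168.4.1", 5000   # common ESP AP default address; port and framing assumed
N_CH, FRAME = 16, 64               # 16 EEG channels; 64 samples per frame (assumption)

def read_frames():
    """Yield (N_CH, FRAME) float32 arrays streamed by the acquisition device."""
    with socket.create_connection((HOST, PORT)) as sock:
        nbytes = N_CH * FRAME * 4  # assume little-endian float32 samples
        buf = b""
        while True:
            while len(buf) < nbytes:
                chunk = sock.recv(4096)
                if not chunk:
                    return
                buf += chunk
            frame, buf = buf[:nbytes], buf[nbytes:]
            yield np.frombuffer(frame, dtype="<f4").reshape(N_CH, FRAME)
```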
The power circuit 14 is powered by a 4.2 V rechargeable lithium battery and switches between a charging mode and a working mode via a switch: when the switch is off, the power circuit enters the charging mode and the user charges the device from a 5 V supply interface through a USB cable; in the charging mode, an integrated transformer module electrically isolates the circuit board from the external supply, preventing circuit damage from over-voltage; when the switch is on, the power circuit enters the working mode and charging is disabled; in the working mode, several low-noise linear voltage regulators with different output voltages meet the supply requirements of the various devices on the circuit board.
As shown in fig. 4, the human-computer interaction interface 2 comprises: a touch display screen 21, a Wi-Fi module 22, a Bluetooth module 23, a voice prompt module 24, and an MCU processor 25 connected to each of them. The MCU processor 25 runs an embedded operating system and handles driving the touch display screen 21, sending and receiving data through the Wi-Fi module 22 and the Bluetooth module 23, and playing voice prompts through the voice prompt module 24. The user selects actions and sets parameters through the touch display screen 21 and carries out rehabilitation training or rehabilitation effect evaluation following its on-screen prompts. The MCU processor 25 communicates with the EEG intelligent decoding module 3 through the Wi-Fi module 22, reads the sensor information of the intelligent rehabilitation hand device 4 through the Bluetooth module 23, and transmits action-selection and parameter-setting information to the intelligent rehabilitation hand device 4.
In the rehabilitation training mode, the user first selects an action to train on the touch display screen 21 of the human-computer interaction interface 2 and adds it to a training queue. The training queue stores the combination of actions the user requires and holds at most 10 actions. The user then enters, through the touch display screen 21, the number of times each action is to be trained; the default is 10 repetitions per action and the maximum is 30. Once training starts, the touch display screen 21 shows a picture of the current training action and the user performs motor imagery of that action; the portable electroencephalogram acquisition device 1 acquires the user's EEG signal in real time, the EEG intelligent decoding module 3 analyzes it and judges whether the user performed motor imagery of the corresponding action, the classification result is sent to the Wi-Fi module 22 in the human-computer interaction interface 2, and the MCU processor 25 then decides from the classification result whether to drive the air pump motor of the corresponding finger.
In the rehabilitation effect evaluation mode, the user first selects an action to evaluate on the touch display screen 21 of the human-computer interaction interface 2 and adds it to a training queue. The training queue stores the combination of actions the user requires and holds at most 10 actions; the number of evaluations per action is fixed at 5 and cannot be adjusted by the user. Once evaluation starts, the touch display screen 21 shows a picture of the current evaluation action and the user performs the actual finger movement; the intelligent rehabilitation hand device 4 reads the curvature-sensor and pressure-sensor signals fitted to each finger and transmits the sensor data via Bluetooth to the MCU processor 25 in the human-computer interaction interface 2, which then derives the rehabilitation effect evaluation result from the sensor data.
As shown in fig. 6, the EEG intelligent decoding module 3 analyzes and processes the EEG signal with a deep-learning method, specifically comprising the following steps:
1) preprocessing of the EEG signal, comprising a filtering/denoising stage and a data-augmentation stage;
in the filtering and denoising stage, a notch filter first removes the 50 Hz power-line interference, and a bank of Butterworth band-pass filters then splits the notch-filtered EEG signal into 4 frequency bands, theta (4-7 Hz), alpha (8-12 Hz), beta (13-30 Hz) and gamma (31-50 Hz), giving the preprocessed EEG signals $X^{c} \in \mathbb{R}^{g \times L}$, where $c$ denotes the frequency band, $L$ the data length and $g$ the number of channels;
in the data-augmentation stage, the preprocessed EEG signal $X^{c}$ of each of the 4 bands is segmented by sliding windows of length $l$ with sliding step $b$, the windows not overlapping one another; the data of the $j$-th window form one sample $X^{c}_{j} = \big[x^{c,g}_{p}\big] \in \mathbb{R}^{g \times l}$, where $x^{c,g}_{p}$ is the $p$-th data point in the $g$-th channel of the $c$-th band. Each sample is given a label stating whether the user performed motor imagery of the corresponding action during the $j$-th window.
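By way of illustration, this preprocessing stage can be sketched as follows (Python with NumPy/SciPy; the sampling rate FS, the 4th filter order, and the window parameters l and b are assumptions, since the disclosure leaves them as parameters):

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 250  # sampling rate in Hz (assumption)
BANDS = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30), "gamma": (31, 50)}

def preprocess(eeg):
    """eeg: raw (g, L) array -> dict mapping band name to a filtered (g, L) array."""
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=FS)  # 50 Hz power-line notch
    x = filtfilt(b_n, a_n, eeg, axis=-1)
    out = {}
    for name, (lo, hi) in BANDS.items():
        b_b, a_b = butter(4, [lo, hi], btype="bandpass", fs=FS)  # order 4 is an assumption
        out[name] = filtfilt(b_b, a_b, x, axis=-1)
    return out

def segment(x, l=256, b=256):
    """Non-overlapping sliding windows of length l with step b: (g, L) -> (n, g, l)."""
    g, L = x.shape
    starts = range(0, L - l + 1, b)
    return np.stack([x[:, s:s + l] for s in starts])
```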
2) Building a deep convolutional neural network model based on an attention mechanism, feeding all samples with their labels into the model, and training it with full supervision: first, samples from all users are used to train a pre-trained model; then, starting from the pre-trained model, each user's own samples are used for fine-tuning, yielding a fine-tuned model matched to that user. Pre-training uses a batch size of 256, a learning rate of 0.001 and 500 training epochs; fine-tuning uses a batch size of 16, a learning rate of 0.0001 and 200 training epochs. The structure of the pre-trained model is exactly the same as that of the fine-tuned model.
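A minimal sketch of this pre-train/fine-tune scheme in Keras (the batch sizes, learning rates and epoch counts are taken from the text; the Adam optimizer and loss function are assumptions, and build_model, X_all/y_all and X_user/y_user are hypothetical names, where each X is the list of four band-specific sample arrays expected by the 4-branch model below):

```python
import tensorflow as tf

def pretrain_and_finetune(build_model, X_all, y_all, X_user, y_user):
    # Stage 1: pre-train on samples pooled from all users.
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),  # Adam is an assumption
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X_all, y_all, batch_size=256, epochs=500)

    # Stage 2: fine-tune the pre-trained weights on one user's own samples.
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X_user, y_user, batch_size=16, epochs=200)
    return model
```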
As shown in fig. 7, the attention-based deep convolutional neural network model comprises 4 branches, each branch taking as input the samples of one of the theta, alpha, beta and gamma frequency bands and comprising, connected in series:
(2.1) a data input layer, whose input is the user's preprocessed EEG sample $X^{c}_{j}$ for the theta, alpha, beta or gamma band;
(2.2) a first convolution layer, with kernel size 1 × l, 32 kernels and a regularization coefficient of 0.01, where l is the sliding-window length;
(2.3) a second convolution layer, with kernel size 1 × 16 and 32 kernels;
(2.4) a first concatenation layer, which splices the outputs of the first and second convolution layers along the last dimension;
(2.5) a third convolution layer, with kernel size g × 1, depth multiplier 1 and a regularization coefficient of 0.01, where g is the number of channels;
(2.6) a first batch-normalization layer, to accelerate model training and reduce overfitting;
(2.7) a first activation layer, using the ELU activation function;
(2.8) a first average-pooling layer, with pooling kernel size 1 × 4;
(2.9) a first Dropout layer, with dropout probability 0.5;
(2.10) a fourth convolution layer, with kernel size 1 × 16, depth multiplier 1 and a regularization coefficient of 0.01;
(2.11) a second batch-normalization layer, to accelerate model training and reduce overfitting;
(2.12) a second activation layer, using the ELU activation function;
(2.13) a second average-pooling layer, with pooling kernel size 1 × 8;
(2.14) a second Dropout layer, with dropout probability 0.5;
(2.15) an attention mechanism module, which, as shown in fig. 8, comprises:
(2.15.1) a Reshape layer, which converts the output of the second Dropout layer to size 8 × 64;
(2.15.2) a first fully connected layer, with 64 neurons and tanh activation;
(2.15.3) a first permutation layer, which swaps the 1st and 2nd dimensions of its input;
(2.15.4) a second fully connected layer, with 1 neuron and softmax activation;
(2.15.5) a second permutation layer, which swaps the 1st and 2nd dimensions of its input;
(2.15.6) a multiplication layer, which multiplies the output of the second Dropout layer element-wise with the output of the second permutation layer;
(2.15.7) a custom addition layer, which sums the output of the multiplication layer along the 2nd dimension;
(2.15.8) a flattening layer, which expands the output of the custom addition layer into a one-dimensional sequence;
the flattening-layer outputs of the 4 branches are spliced by a second concatenation layer, whose output is connected to a third fully connected layer; the third fully connected layer uses softmax as its activation function, has 2 neurons, and outputs the judgment result for the specified action.
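Putting the pieces together, one possible reading of the whole network can be sketched in Keras as follows. The channel count g = 16, window length l = 256, "same" padding, and the depthwise realization of the third and fourth convolution layers are all assumptions, chosen so that the sizes are consistent with the 8 × 64 reshape in (2.15.1):

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

G, L_WIN = 16, 256  # channel count g and window length l; concrete values are assumptions

def branch(inp):
    """One frequency-band branch, following items (2.1)-(2.15)."""
    c1 = layers.Conv2D(32, (1, L_WIN), padding="same",
                       kernel_regularizer=regularizers.l2(0.01))(inp)           # (2.2)
    c2 = layers.Conv2D(32, (1, 16), padding="same")(inp)                        # (2.3)
    x = layers.Concatenate(axis=-1)([c1, c2])                                   # (2.4)
    x = layers.DepthwiseConv2D((G, 1), depth_multiplier=1,
                               depthwise_regularizer=regularizers.l2(0.01))(x)  # (2.5)
    x = layers.BatchNormalization()(x)                                          # (2.6)
    x = layers.Activation("elu")(x)                                             # (2.7)
    x = layers.AveragePooling2D((1, 4))(x)                                      # (2.8)
    x = layers.Dropout(0.5)(x)                                                  # (2.9)
    x = layers.DepthwiseConv2D((1, 16), padding="same", depth_multiplier=1,
                               depthwise_regularizer=regularizers.l2(0.01))(x)  # (2.10)
    x = layers.BatchNormalization()(x)                                          # (2.11)
    x = layers.Activation("elu")(x)                                             # (2.12)
    x = layers.AveragePooling2D((1, 8))(x)                                      # (2.13)
    x = layers.Dropout(0.5)(x)                                                  # (2.14) -> (1, 8, 64)
    # (2.15) attention mechanism module, as in the earlier sketch
    x = layers.Reshape((8, 64))(x)
    a = layers.Dense(64, activation="tanh")(x)
    a = layers.Permute((2, 1))(a)
    a = layers.Dense(1, activation="softmax")(a)
    a = layers.Permute((2, 1))(a)
    x = layers.Lambda(lambda t: t[0] * t[1])([x, a])
    x = layers.Lambda(lambda t: tf.reduce_sum(t, axis=1))(x)
    return layers.Flatten()(x)

def build_model():
    inputs = [layers.Input((G, L_WIN, 1)) for _ in range(4)]    # theta/alpha/beta/gamma samples
    merged = layers.Concatenate()([branch(i) for i in inputs])  # second concatenation layer
    out = layers.Dense(2, activation="softmax")(merged)         # third fully connected layer
    return tf.keras.Model(inputs, out)
```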
As shown in fig. 9, the intelligent rehabilitation hand device 4 comprises a pneumatic glove 41, a control module 44 mounted on the back of the glove, a curvature sensor 42 mounted on the back of each glove finger and wired to the control module 44, a pressure sensor 43 mounted at the tip of each glove finger, and an air pump 45 connected to each finger of the pneumatic glove 41 through an air path. The pneumatic glove 41 uses the air pump 45 to bend or extend each finger independently and is made from a soft elastic glove, making it comfortable and easy for the user to wear. Each curvature sensor 42 outputs a voltage that varies with the current bending angle of its finger, reflecting the bending angles of the individual fingers in real time; each pressure sensor 43 outputs a voltage that varies with the force the finger currently exerts, reflecting the exertion of the individual fingers in real time. The control module 44 is built around an integrated Bluetooth module of model NRF52832, which polls the voltage outputs of all curvature sensors 42 and pressure sensors 43 through a multiplexing switch and sends them to the MCU processor 25 in the human-computer interaction interface 2. At the same time, the control module 44 receives the action-selection and parameter-setting information sent by the MCU processor 25 and decodes it into a pumping/inflation control signal, an air-path selection signal and a speed control signal. The MCU processor 25 gives a rehabilitation effect evaluation grade according to the sensor information.
As shown in fig. 10, the air pump 45 receives the pumping/inflation control signal, the air-path selection signal and the speed control signal from the control module 44 and controls the motion of each finger of the pneumatic glove 41 through the air paths. The pumping/inflation control signal sets the air-flow direction of the pump and thereby whether a finger extends or bends; the air-path selection signal selects which finger extends or bends; the speed control signal sets the speed of extension or flexion.
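Purely as an illustration of the three control signals, a hypothetical encoding follows; the disclosure does not define a wire format, so every field and the bend/extend mapping below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class PumpCommand:
    inflate: bool  # pumping/inflation direction; inflate-means-bend is an assumption
    finger: int    # air-path selection: 0..4 = thumb..little finger (assumed numbering)
    speed: int     # speed control: 0..255

    def encode(self) -> bytes:
        """Pack into a 3-byte frame (hypothetical layout: direction, path, speed)."""
        return bytes([1 if self.inflate else 0, self.finger & 0x0F, self.speed & 0xFF])

# e.g. bend the index finger at half speed:
cmd = PumpCommand(inflate=True, finger=1, speed=128).encode()
```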
Table 1. Rehabilitation effect evaluation grades (reproduced only as an image in the original document).
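Since Table 1 survives only as an image, its grade boundaries are unknown; the sketch below merely illustrates the kind of mapping the MCU processor could apply to the sensor readings (all thresholds and grade names are placeholders, not the values of Table 1):

```python
def evaluation_grade(bend_ratio: float, press_ratio: float) -> str:
    """Map achieved fractions of the target bend angle and grip force to a grade.

    bend_ratio / press_ratio: measured sensor value divided by the target value
    for the selected action. The 0.9/0.7/0.5 boundaries are placeholders, not
    the values of Table 1.
    """
    score = min(bend_ratio, press_ratio)
    if score >= 0.9:
        return "excellent"
    if score >= 0.7:
        return "good"
    if score >= 0.5:
        return "fair"
    return "poor"
```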
As shown in fig. 11, in the hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology of the present invention, the user selects the system mode before use, i.e. either the rehabilitation training mode or the rehabilitation effect evaluation mode; wherein,
(1) the rehabilitation training mode comprises the following steps:
(1.1) the user selects the actions requiring rehabilitation training and sets the training parameters;
(1.2) the user performs motor imagery of the corresponding action following the on-screen prompt;
(1.3) the EEG intelligent decoding module 3 decodes the user's EEG signal, judges whether motor imagery of the corresponding action was performed, and transmits the result via Wi-Fi to the MCU processor 25 in the human-computer interaction interface 2;
(1.4) the MCU processor 25 determines from the classification result whether to drive the air pump 45 of the corresponding finger, thereby driving that finger to move;
(2) the rehabilitation effect evaluation mode comprises the following steps:
(2.1) the user selects the actions requiring rehabilitation effect evaluation and sets the evaluation parameters;
(2.2) the user performs the corresponding finger action following the on-screen prompt;
(2.3) the control module 44 in the intelligent rehabilitation hand device 4 reads the real-time values of the curvature sensor 42 and pressure sensor 43 of the corresponding finger and sends them to the MCU processor 25 in the human-computer interaction interface 2;
(2.4) the MCU processor 25 gives a rehabilitation effect evaluation grade according to the received information.
It will be readily understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention thereto. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology, comprising, connected in sequence: a portable electroencephalogram acquisition device (1), a human-computer interaction interface (2), an EEG intelligent decoding module (3) and an intelligent rehabilitation hand device (4), characterized in that the system has two working modes, a rehabilitation training mode and a rehabilitation effect evaluation mode;
in the rehabilitation training mode, the user selects a rehabilitation action through the human-computer interaction interface (2) and performs motor imagery of the corresponding action following the on-screen prompt; the portable electroencephalogram acquisition device (1) acquires the EEG signal from the user's brain; the EEG intelligent decoding module (3) decodes the acquired EEG signal with deep-learning techniques, judges whether the user performed motor imagery of the corresponding action, and sends the judgment to the intelligent rehabilitation hand device (4); based on the judgment of the EEG intelligent decoding module (3), the intelligent rehabilitation hand device (4) drives the corresponding finger joints through the air pump to complete the corresponding rehabilitation action;
in the rehabilitation effect evaluation mode, the user first selects a rehabilitation action through the human-computer interaction interface (2) and then performs the corresponding finger movement; the intelligent rehabilitation hand device (4) wirelessly sends the finger-movement signals to the human-computer interaction interface (2) for rehabilitation effect evaluation.
2. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the rehabilitation training mode and the rehabilitation effect evaluation mode both support 12 finger actions: bending the thumb, bending the index finger, bending the middle finger, bending the ring finger, bending the little finger, bending thumb + index finger, bending thumb + middle finger, bending thumb + ring finger, bending thumb + little finger, bending all fingers, and extending all fingers.
3. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the portable electroencephalogram acquisition device (1) comprises: an electrode cap with its connection lines (11) for acquiring the EEG signal, a physiological electrical-signal acquisition-conversion module (12) for amplifying and converting the EEG signal, an integrated Wi-Fi module (13) for controlling the physiological electrical-signal acquisition-conversion module (12), and a power circuit (14) connected to the physiological electrical-signal acquisition-conversion module (12) and the integrated Wi-Fi module (13); the electrode cap in item (11) contacts the user's scalp directly through its electrodes, picks up the EEG signal from the scalp surface, and is connected to the physiological electrical-signal acquisition-conversion module (12) through a Y2 interface on the connection lines, which transmit the EEG signal; the integrated Wi-Fi module (13) reads data from the physiological electrical-signal acquisition-conversion module (12) and sends it to the EEG intelligent decoding module (3).
4. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 3, wherein the brain electrode cap and its connecting line (11) acquire, through the electrodes, the EEG signals of sixteen electrodes corresponding to the user's FP1, FP2, F3, Fz, F4, FCz, T3, C3, Cz, C4, T4, P3, Pz, P4, Oz and A1 positions; the electrode distribution of the brain electrode cap conforms to the 10/20 international standard lead system.
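As an illustration of the acquisition path in claims 3 and 4, the sketch below reads one block of 16-channel EEG from the integrated Wi-Fi module (13) over a TCP socket. The host address, port, sampling rate and float32 wire format are assumptions made for the example; the patent does not specify the transport protocol or frame layout.

```python
import socket
import numpy as np

# Channel order per claim 4 (10/20 international standard leads).
CHANNELS = ["FP1", "FP2", "F3", "Fz", "F4", "FCz", "T3", "C3",
            "Cz", "C4", "T4", "P3", "Pz", "P4", "Oz", "A1"]

def read_eeg_block(host="192.168.4.1", port=8080, fs=250, seconds=1.0):
    """Read one block of 16-channel EEG, assuming float32 samples."""
    n_samples = int(fs * seconds)
    n_bytes = len(CHANNELS) * n_samples * 4
    with socket.create_connection((host, port)) as sock:
        buf = b""
        while len(buf) < n_bytes:
            chunk = sock.recv(n_bytes - len(buf))
            if not chunk:
                raise ConnectionError("Wi-Fi EEG stream closed early")
            buf += chunk
    return np.frombuffer(buf, dtype=np.float32).reshape(len(CHANNELS), n_samples)
```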
5. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 3, wherein the power circuit (14) is powered by a 4.2 V rechargeable lithium battery, with the charging mode and the working mode controlled by a switch: when the switch is off, the power circuit enters the charging mode, and the user charges the device by connecting it to a 5 V power supply through a USB cable; in the charging mode, the power circuit uses an integrated transformer module to electrically isolate the circuit board from the external supply, preventing circuit damage from overvoltage; when the switch is on, the power circuit enters the working mode and the charging mode is disabled; in the working mode, the power circuit uses several low-noise linear voltage regulators with different output voltages to meet the power supply requirements of the different devices on the circuit board.
6. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the human-computer interaction interface (2) comprises: a touch display screen (21), a Wi-Fi module (22), a Bluetooth module (23), a voice prompt module (24), and an MCU processor (25) connected to each of them. The MCU processor (25) runs an embedded operating system and handles driving the touch display screen (21), data transmission and reception through the Wi-Fi module (22) and the Bluetooth module (23), and playing voice prompts through the voice prompt module (24); the user selects actions and sets parameters through the touch display screen (21) and carries out rehabilitation training or rehabilitation effect evaluation following the prompts on the touch display screen (21); the MCU processor (25) communicates with the electroencephalogram intelligent decoding module (3) through the Wi-Fi module (22), reads the sensor information of the intelligent rehabilitation hand device (4) through the Bluetooth module (23), and transmits the action selection and parameter setting information to the intelligent rehabilitation hand device (4).
7. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the electroencephalogram intelligent decoding module (3) analyzes and processes the EEG signals using a deep learning method, specifically comprising the following steps:
1) preprocessing the EEG signals, comprising two stages: filtering and noise reduction, and data enhancement;
in the filtering and noise-reduction stage, a Notch filter first removes the 50 Hz power-frequency interference; the notch-filtered EEG signals are then filtered by a Butterworth band-pass filter bank and divided into 4 frequency bands, theta (4-7 Hz), alpha (8-12 Hz), beta (13-30 Hz) and gamma (31-50 Hz), yielding the preprocessed EEG signals $X^{c} \in \mathbb{R}^{g \times L}$, where $c$ denotes the frequency band, $L$ the data length and $g$ the number of channels;

in the data enhancement stage, the preprocessed EEG signals $X^{c}$ of the 4 frequency bands are each segmented by sliding windows of length $l$; the sliding step of the windows is $b$, and the windows do not overlap one another. The data of the $j$-th sliding window are expressed as $X_{j}^{c} = \big[x_{p}^{c,g}\big] \in \mathbb{R}^{g \times l}$, where $x_{p}^{c,g}$ denotes the $p$-th data point in the $g$-th channel of the $c$-th frequency band, and $X_{j}^{c}$ constitutes one sample formed by the $j$-th sliding window; a label is set for each sample, namely whether the user performed motor imagery of the corresponding action within the time of the $j$-th sliding window;
2) building an attention-based deep convolutional neural network model, inputting all samples and their corresponding labels into the model, and training it under full supervision: first, the samples of all users are used for training to obtain a pre-trained model; then, based on the pre-trained model, the samples of each individual user are used for fine-tuning to obtain a fine-tuned model matched to that user.
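A minimal sketch of claim 7, step 1) is given below, using SciPy: a 50 Hz notch filter, a Butterworth band-pass bank splitting the signal into the theta/alpha/beta/gamma bands, and sliding-window segmentation. The sampling rate fs, the filter orders and the notch quality factor are assumptions; the claim fixes only the band edges and the window parameters l and b.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

BANDS = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30), "gamma": (31, 50)}

def preprocess(eeg, fs=250):
    """eeg: (g, L) array -> dict of band name to X^c, each of shape (g, L)."""
    b, a = iirnotch(50.0, Q=30.0, fs=fs)        # 50 Hz power-frequency notch
    eeg = filtfilt(b, a, eeg, axis=-1)
    bands = {}
    for name, (lo, hi) in BANDS.items():
        bb, ba = butter(4, [lo, hi], btype="bandpass", fs=fs)
        bands[name] = filtfilt(bb, ba, eeg, axis=-1)
    return bands

def sliding_windows(x, l, b):
    """Segment X^c of shape (g, L) into samples X_j^c of shape (g, l).

    The claim states the windows do not overlap, i.e. b >= l.
    """
    g, L = x.shape
    return np.stack([x[:, j:j + l] for j in range(0, L - l + 1, b)])
```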
8. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the attention-based deep convolutional neural network model comprises 4 branches, each branch comprising, in series:
(2.1) a data input layer for inputting the user's preprocessed EEG signals $X_{j}^{c}$ of the corresponding theta, alpha, beta or gamma frequency band;
(2.2) a first convolution layer with a convolution kernel size of 1 × l, 32 convolution kernels and a regularization coefficient of 0.01, where l is the length of the sliding window;
(2.3) a second convolution layer having a convolution kernel size of 1 × 16 and a convolution kernel number of 32;
(2.4) a first concatenation layer, which concatenates the outputs of the first convolution layer and the second convolution layer along the last dimension;
(2.5) a third convolution layer with a convolution kernel size of g × 1, a depth multiplier of 1 and a regularization coefficient of 0.01, where g is the number of channels;
(2.6) a first batch normalization layer for accelerating model training and reducing overfitting;
(2.7) a first activation function layer, using Elu activation functions;
(2.8) a first average pooling layer having a pooling core size of 1 x 4;
(2.9) a first Dropout layer, with a Dropout probability of 0.5;
(2.10) a fourth convolution layer with a convolution kernel size of 1 × 16, a depth multiplier of 1 and a regularization coefficient of 0.01;
(2.11) a second batch normalization layer for accelerating model training and reducing overfitting;
(2.12) a second activation function layer, using Elu activation functions;
(2.13) a second average pooling layer with a pooling kernel size of 1 x 8;
(2.14) a second Dropout layer, with a Dropout probability of 0.5;
(2.15) an attention mechanism module, the attention mechanism module comprising:
(2.15.1) a Reshape layer converting the output size of the second Dropout layer to 8 × 64;
(2.15.2) a first fully-connected layer with 64 neurons and a tanh activation function;
(2.15.3) a first rearrangement layer that swaps the 1st and 2nd dimensions of its input;
(2.15.4) a second fully-connected layer with 1 neuron and a softmax activation function;
(2.15.5) a second rearrangement layer that swaps the 1st and 2nd dimensions of its input;
(2.15.6) a multiplication layer that multiplies the output of the second Dropout layer element-wise by the output of the second rearrangement layer;
(2.15.7) a custom addition layer that sums the output of the multiplication layer along the 2nd dimension;
(2.15.8) a flattening layer that expands the output of the custom addition layer into a one-dimensional sequence;
the flattening-layer outputs of the 4 branches are concatenated by a second concatenation layer, whose output is connected to a third fully-connected layer; the third fully-connected layer uses softmax as its activation function, has 2 neurons, and outputs the judgment result for the specified action.
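Below is a hedged Keras sketch of one way to realize the branch architecture of claim 8. The channel count g = 16 and window length l = 256 are assumptions chosen so that the second average pooling yields the 8 × 64 shape expected by the Reshape layer (2.15.1); the depthwise convolutions stand in for the depth-multiplier-1 layers (2.5) and (2.10).

```python
import tensorflow as tf
from tensorflow.keras import Model, layers, regularizers

G, L = 16, 256  # assumed channel count and sliding-window length

def branch(inp):
    c1 = layers.Conv2D(32, (1, L), padding="same",
                       kernel_regularizer=regularizers.l2(0.01))(inp)      # (2.2)
    c2 = layers.Conv2D(32, (1, 16), padding="same")(inp)                   # (2.3)
    x = layers.Concatenate(axis=-1)([c1, c2])                              # (2.4)
    x = layers.DepthwiseConv2D((G, 1), depth_multiplier=1,
                               depthwise_regularizer=regularizers.l2(0.01))(x)  # (2.5)
    x = layers.BatchNormalization()(x)                                     # (2.6)
    x = layers.Activation("elu")(x)                                        # (2.7)
    x = layers.AveragePooling2D((1, 4))(x)                                 # (2.8)
    x = layers.Dropout(0.5)(x)                                             # (2.9)
    x = layers.DepthwiseConv2D((1, 16), depth_multiplier=1, padding="same",
                               depthwise_regularizer=regularizers.l2(0.01))(x)  # (2.10)
    x = layers.BatchNormalization()(x)                                     # (2.11)
    x = layers.Activation("elu")(x)                                        # (2.12)
    x = layers.AveragePooling2D((1, 8))(x)                                 # (2.13)
    x = layers.Dropout(0.5)(x)                                             # (2.14)
    feats = layers.Reshape((8, 64))(x)                                     # (2.15.1)
    a = layers.Dense(64, activation="tanh")(feats)                         # (2.15.2)
    a = layers.Permute((2, 1))(a)                                          # (2.15.3)
    a = layers.Dense(1, activation="softmax")(a)                           # (2.15.4)
    a = layers.Permute((2, 1))(a)                                          # (2.15.5)
    x = layers.Lambda(lambda t: t[0] * t[1])([feats, a])                   # (2.15.6)
    x = layers.Lambda(lambda t: tf.reduce_sum(t, axis=1))(x)               # (2.15.7)
    return layers.Flatten()(x)                                             # (2.15.8)

inputs = [layers.Input((G, L, 1)) for _ in range(4)]  # theta/alpha/beta/gamma
merged = layers.Concatenate()([branch(i) for i in inputs])
output = layers.Dense(2, activation="softmax")(merged)  # judgment result
model = Model(inputs, output)
```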
9. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the intelligent rehabilitation hand device (4) comprises a pneumatic glove (41); a control module (44) mounted on the back of the hand of the pneumatic glove; a curvature sensor (42) mounted on the back of each finger of the pneumatic glove and connected to the control module (44) through a lead; a pressure sensor (43) mounted at the front end of each finger of the pneumatic glove; and an air pump (45) connected to each finger of the pneumatic glove (41) through an air path. The pneumatic glove (41) uses the air pump (45) to bend or stretch each finger independently and is made of a soft, elastic material, so that it is easy to wear and comfortable for the user; the curvature sensor (42) outputs a voltage that varies with the current bending angle of the finger, and the pressure sensor (43) outputs a voltage that varies with the current force exerted by the finger; the control module (44) is built around an integrated Bluetooth module, which polls the voltage values output by each curvature sensor (42) and pressure sensor (43) through a multiplexing switch and sends them to the MCU processor (25) in the human-computer interaction interface (2); meanwhile, the control module (44) receives the action selection and parameter setting information sent by the MCU processor (25) and decodes it into a pumping/inflation control signal, an air-path selection signal and a speed control signal.
10. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 9, wherein the air pump (45) receives the pumping/inflation control signal, air-path selection signal and speed control signal from the control module (44) and controls the motion of each finger of the pneumatic glove (41) through the air paths.
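The control-module behavior of claims 9 and 10 can be pictured with the short sketch below: the Bluetooth MCU polls curvature and pressure voltages through a multiplexing switch, and decodes an incoming action selection into pump control signals. All hardware interfaces (select_mux_channel, read_adc, set_pump) and the command layout are hypothetical placeholders; the patent discloses no register map or command format.

```python
FINGERS = ["thumb", "index", "middle", "ring", "little"]

def poll_sensors(select_mux_channel, read_adc):
    """Poll one curvature and one pressure voltage per finger via the mux."""
    readings = {}
    for i, finger in enumerate(FINGERS):
        select_mux_channel(2 * i)       # assumed: even channel = curvature (42)
        curvature_v = read_adc()
        select_mux_channel(2 * i + 1)   # assumed: odd channel = pressure (43)
        pressure_v = read_adc()
        readings[finger] = (curvature_v, pressure_v)
    return readings

def decode_command(action, params, set_pump):
    """Decode an action selection into pump/air-path/speed control signals."""
    for finger in action["fingers"]:    # air-path selection
        set_pump(finger,
                 inflate=action["bend"],        # pumping/inflation signal
                 speed=params.get("speed", 1))  # speed control signal
```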
11. The hand all-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the user selects the system mode before use, namely the rehabilitation training mode or the rehabilitation effect evaluation mode; wherein:
(1) the rehabilitation training mode comprises the following steps:
(1.1) the user selects the actions requiring rehabilitation training and sets the training parameters;
(1.2) the user performs motor imagery of the corresponding actions according to the screen prompts;
(1.3) the electroencephalogram intelligent decoding module (3) decodes the user's EEG signals, judges whether motor imagery of the corresponding action was performed, and transmits the result via Wi-Fi to the MCU processor (25) in the human-computer interaction interface (2);
(1.4) the MCU processor (25) decides, according to the classification result, whether to drive the air pump (45) of the corresponding finger so as to move that finger;
(2) the rehabilitation effect evaluation mode comprises the following steps:
(2.1) the user selects the actions requiring rehabilitation effect evaluation and sets the evaluation parameters;
(2.2) the user performs the corresponding finger actions according to the screen prompts;
(2.3) the control module (44) in the intelligent rehabilitation hand device (4) reads the real-time values of the curvature sensor (42) and pressure sensor (43) of the corresponding finger and sends them to the MCU processor (25) in the human-computer interaction interface (2);
(2.4) the MCU processor (25) gives a rehabilitation effect evaluation grade according to the received information.
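For step (2.4), the patent does not disclose the grading formula. The sketch below shows one plausible rule, grading a bending trial by its peak curvature-sensor reading relative to a per-finger reference value; the thresholds and the reference-based completion ratio are illustrative assumptions only.

```python
def grade_rehab_effect(curvature_series, reference_peak):
    """Grade one finger-bending trial from curvature-sensor samples.

    curvature_series: voltages read during the trial (higher = more bend);
    reference_peak: the voltage expected from a fully completed bend.
    """
    completion = max(curvature_series) / reference_peak
    if completion >= 0.9:
        return "excellent"
    if completion >= 0.7:
        return "good"
    if completion >= 0.5:
        return "fair"
    return "poor"
```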
CN202210114284.1A 2022-01-30 2022-01-30 Hand full-fingered rehabilitation training and evaluation system based on artificial intelligence technology Active CN114504468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210114284.1A CN114504468B (en) 2022-01-30 2022-01-30 Hand full-fingered rehabilitation training and evaluation system based on artificial intelligence technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210114284.1A CN114504468B (en) 2022-01-30 2022-01-30 Hand full-fingered rehabilitation training and evaluation system based on artificial intelligence technology

Publications (2)

Publication Number Publication Date
CN114504468A true CN114504468A (en) 2022-05-17
CN114504468B CN114504468B (en) 2023-08-08

Family

ID=81551181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210114284.1A Active CN114504468B (en) 2022-01-30 2022-01-30 Hand full-fingered rehabilitation training and evaluation system based on artificial intelligence technology

Country Status (1)

Country Link
CN (1) CN114504468B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003000569A (en) * 2001-06-18 2003-01-07 Fumio Nogata Robot for aiding finger locomotion function recovery
CN1568170A (en) * 2001-09-10 2005-01-19 新纪元创新有限公司 Apparatus, method and computer program product to produce or direct movements in synergic timed correlation with physiological activity
CN102138860A (en) * 2011-01-10 2011-08-03 西安交通大学 Intelligentized rehabilitation training equipment for hand functions of patients suffering from cerebral injury
WO2018188480A1 (en) * 2017-04-14 2018-10-18 The Chinese University Of Hongkong Flexibly driven robotic hands
CN107157705A (en) * 2017-05-09 2017-09-15 京东方科技集团股份有限公司 rehabilitation system and method
CN111631907A (en) * 2020-05-31 2020-09-08 天津大学 Cerebral apoplexy patient hand rehabilitation system based on brain-computer interaction hybrid intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE ZHIRONG: "Research and Implementation of Key Technologies of Motor-Imagery-Based Brain-Computer Interfaces", Master's thesis, Chongqing University of Posts and Telecommunications *

Also Published As

Publication number Publication date
CN114504468B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN111616721B (en) Emotion recognition system based on deep learning and brain-computer interface and application
CN111631907B (en) Cerebral apoplexy patient hand rehabilitation system based on brain-computer interaction hybrid intelligence
CN100594858C (en) Electric artificial hand combined controlled by brain electricity and muscle electricity and control method
CN101711709B (en) Method for controlling electrically powered artificial hands by utilizing electro-coulogram and electroencephalogram information
CN110765920A (en) Motor imagery classification method based on convolutional neural network
CN110059575A (en) A kind of augmentative communication system based on the identification of surface myoelectric lip reading
CN111513991B (en) Active hand full-finger rehabilitation equipment based on artificial intelligence technology
CN111544855B (en) Pure idea control intelligent rehabilitation method based on distillation learning and deep learning and application
CN102499797B (en) Artificial limb control method and system
CN111631848B (en) Ideation control artificial limb system based on brain-computer hybrid intelligence
CN110495893B (en) System and method for multi-level dynamic fusion recognition of continuous brain and muscle electricity of motor intention
CN107861628A (en) A kind of hand gestures identifying system based on human body surface myoelectric signal
CN104997581B (en) Artificial hand control method and apparatus for driving EEG signals on the basis of facial expressions
CN105022488A (en) Wireless BCI (Brain Computer Interface) input system based on SSVEP (Steady-State Visual Evoked Potentials) brain electric potential
CN113143676B (en) Control method of external limb finger based on brain-muscle-electricity cooperation
CN111544256A (en) Brain-controlled intelligent full limb rehabilitation method based on graph convolution and transfer learning
CN113871028A (en) Interactive rehabilitation system based on myoelectric intelligent wearing
CN116225222A (en) Brain-computer interaction intention recognition method and system based on lightweight gradient lifting decision tree
CN114504730A (en) Portable brain-controlled hand electrical stimulation rehabilitation system based on deep learning
CN113128353B (en) Emotion perception method and system oriented to natural man-machine interaction
CN114145745B (en) Graph-based multitasking self-supervision emotion recognition method
CN114647314A (en) Wearable limb movement intelligent sensing system based on myoelectricity
CN114504468B (en) Hand full-fingered rehabilitation training and evaluation system based on artificial intelligence technology
CN114504330A (en) Fatigue state monitoring system based on portable electroencephalogram acquisition head ring
CN106843509B (en) Brain-computer interface system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant