CN114504468B - Hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology - Google Patents


Info

Publication number: CN114504468B
Authority: CN (China)
Prior art keywords: rehabilitation, finger, module, layer, EEG
Legal status: Active
Application number: CN202210114284.1A
Other languages: Chinese (zh)
Other versions: CN114504468A (English)
Inventors: 高忠科 (Gao Zhongke), 孙新林 (Sun Xinlin), 马超 (Ma Chao)
Current Assignee: Tianjin University
Original Assignee: Tianjin University
Application filed by Tianjin University
Priority to CN202210114284.1A
Publication of CN114504468A
Application granted
Publication of CN114504468B


Classifications

    • A61H 1/0288: Apparatus for passive exercising; stretching or bending apparatus for exercising the upper limbs; hand; fingers
    • A61B 5/369: Electroencephalography [EEG]
    • A61B 5/7203: Signal processing specially adapted for physiological signals; noise prevention, reduction or removal
    • A61B 5/725: Details of waveform analysis using specific filters, e.g. Kalman or adaptive filters
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks
    • A61B 5/7267: Classification involving training the classification device
    • G06F 18/241: Classification techniques relating to the classification model
    • G06F 18/2415: Classification based on parametric or probabilistic models
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/048: Activation functions
    • G06N 3/08: Learning methods
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities
    • A61H 2201/0157: Constructive details; portable
    • A61H 2201/1238: Driving means with hydraulic or pneumatic drive
    • A61H 2201/5046: Interfaces to the user; touch screens
    • A61H 2230/105: Electroencephalographic signals used as a control parameter for the apparatus
    • G06F 2218/12: Pattern recognition for signal processing; classification; matching
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

A hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology comprises, connected in sequence, a portable EEG acquisition device, a human-computer interaction interface, an EEG intelligent decoding module, and an intelligent rehabilitation hand device, and has two working modes: a rehabilitation training mode and a rehabilitation effect evaluation mode. In the rehabilitation training mode, the user is shown a rehabilitation action, performs motor imagery of that action, and an air pump drives the corresponding finger joints to complete the action according to the decoded imagery. In the rehabilitation effect evaluation mode, the user selects a rehabilitation action and performs the corresponding finger movement, and the rehabilitation effect is evaluated from that movement. By recognizing the user's action intention, the system drives the intelligent rehabilitation hand device to perform the corresponding action, assisting the user in completing full-finger rehabilitation training. The user can also evaluate the hand rehabilitation effect, closing the rehabilitation training loop and promoting rehabilitation more efficiently.

Description

Hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology
Technical Field
The invention relates to hand rehabilitation equipment, and in particular to a hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology.
Background
Surveys show that stroke has become the leading cause of disability among adults in China. Stroke is characterized by high incidence and high disability rates; the stroke population in China is large, treatment cycles are long, and recovery outcomes are often poor. A stroke damages part of the patient's brain, which in turn causes loss of control over part of the limbs. The hands are essential organs of the human body and play a critical role in daily activities, so restoring hand function in stroke patients is of great importance. Rehabilitation training is likewise required for patients recovering from hand surgery. Existing research shows that, compared with traditional passive rehabilitation, active rehabilitation, in which the patient actively engages in the rehabilitation process, yields better outcomes. The key to active rehabilitation is restoring the natural synchronization of brain intention and hand movement, that is, combining the brain's intention signals with the rehabilitation movements of the hand. The electroencephalogram (EEG) is an overall reflection of the activity of cortical neurons; it contains a wealth of physiological and pathological information and reflects the activity state and thinking of the human brain. An active rehabilitation approach combined with EEG detection can therefore identify the patient's intended rehabilitation action in time and drive rehabilitation equipment to help the patient complete the action, accelerating recovery.
In recent years, portable EEG acquisition devices have attracted increasing attention. Compared with traditional EEG acquisition equipment, they are smaller and lighter; portability and usability are greatly improved without degrading signal quality, while cost and power consumption are reduced. For patients who need treatment at home, a portable EEG acquisition device makes at-home EEG monitoring possible.
EEG data are nonlinear, feature-rich, and have a low signal-to-noise ratio. Traditional visual inspection requires an experienced expert to examine the EEG, and traditional machine learning algorithms require manually extracted EEG features; both involve manual operation and review, are ill-suited to long-term monitoring, and may miss important EEG characteristics due to subjective factors. Deep learning, as a leading branch of machine learning, is highly effective at processing large volumes of data and has been widely applied to EEG research. It is an end-to-end method that extracts and learns deep intrinsic representations directly from the input signal and classifies them. To date, many deep learning architectures have been proposed and applied in fields such as EEG analysis and rehabilitation training.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art by providing a hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology that can effectively identify and correctly classify motor imagery EEG signals for individual fingers and promote recovery of finger motor function.
The technical solution adopted by the invention is as follows: a hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology comprises, connected in sequence, a portable EEG acquisition device, a human-computer interaction interface, an EEG intelligent decoding module, and an intelligent rehabilitation hand device; the system has two working modes, a rehabilitation training mode and a rehabilitation effect evaluation mode;
in the rehabilitation training mode, the user selects a rehabilitation action through the human-computer interaction interface and performs motor imagery of the corresponding action following the on-screen prompts; the portable EEG acquisition device acquires EEG signals from the user's brain; the EEG intelligent decoding module decodes the acquired EEG signals using deep learning, determines whether the user performed motor imagery of the corresponding action, and sends the result to the intelligent rehabilitation hand device; based on this result, the intelligent rehabilitation hand device drives the corresponding finger joints via the air pump to complete the rehabilitation action;
in the rehabilitation effect evaluation mode, the user first selects a rehabilitation action through the human-computer interaction interface and then performs the corresponding finger action; the intelligent rehabilitation hand device wirelessly transmits the finger-action signals to the human-computer interaction interface, where the rehabilitation effect is evaluated.
The rehabilitation training mode and the rehabilitation effect evaluation mode support 12 finger actions: bending the thumb, bending the index finger, bending the middle finger, bending the ring finger, bending the little finger, bending thumb + index finger, bending thumb + middle finger, bending thumb + ring finger, bending thumb + little finger, bending all fingers, and stretching all fingers.
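For reference, the supported actions can be enumerated in code as follows. This is a minimal sketch: the numeric codes and names are illustrative only, and the enumeration follows the list as translated above.

```python
from enum import IntEnum

class FingerAction(IntEnum):
    """Supported rehabilitation actions; codes are illustrative, not from the patent."""
    BEND_THUMB = 1
    BEND_INDEX = 2
    BEND_MIDDLE = 3
    BEND_RING = 4
    BEND_LITTLE = 5
    BEND_THUMB_INDEX = 6
    BEND_THUMB_MIDDLE = 7
    BEND_THUMB_RING = 8
    BEND_THUMB_LITTLE = 9
    BEND_ALL = 10
    STRETCH_ALL = 11
```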
The portable EEG acquisition device comprises: an EEG electrode cap with its connecting cable, connected in sequence, for acquiring EEG signals; a physiological-signal acquisition and conversion module for amplifying the EEG signals and converting them to digital form; an integrated Wi-Fi module for controlling the physiological-signal acquisition and conversion module; and a power supply circuit connected to both the physiological-signal acquisition and conversion module and the integrated Wi-Fi module. The electrodes of the EEG electrode cap are in direct contact with the user's scalp to pick up EEG signals from the scalp surface, and the cap is connected to the physiological-signal acquisition and conversion module through a Y2 interface on the connecting cable for transmitting the EEG signals; the integrated Wi-Fi module reads the data from the physiological-signal acquisition and conversion module and sends them to the EEG intelligent decoding module.
The EEG electrode cap and its connecting cable acquire EEG signals from sixteen electrodes, namely FP1, FP2, F3, Fz, F4, FCz, T3, C3, Cz, C4, T4, P3, Pz, P4, Oz and A1; the electrode placement conforms to the international 10/20 standard.
The power supply circuit is powered by a 4.2 V rechargeable lithium battery, with a switch selecting between charging mode and working mode: when the switch is off, the circuit enters charging mode and the user connects the device to a 5 V supply via a USB cable; in charging mode, an integrated transformer module electrically isolates the circuit board from the external supply, preventing overvoltage damage; when the switch is on, the circuit enters working mode and charging is disabled; in working mode, several low-noise linear regulators with different output voltages meet the supply requirements of the various devices on the board.
The human-computer interaction interface comprises: a touch display screen, a Wi-Fi module, a Bluetooth module, a voice prompt module, and an MCU processor connected to each of them. The MCU processor runs an embedded operating system and handles driving the touch display screen, data transmission and reception for the Wi-Fi and Bluetooth modules, and playing voice prompts through the voice prompt module. The user selects actions and sets parameters on the touch display screen and performs rehabilitation training or rehabilitation effect evaluation following its on-screen prompts. The MCU processor communicates with the EEG intelligent decoding module via the Wi-Fi module, reads the sensor data of the intelligent rehabilitation hand device via the Bluetooth module, and transmits action selections and parameter settings to the intelligent rehabilitation hand device.
The EEG intelligent decoding module analyzes and processes the EEG signals using deep learning, in the following steps:
1) Preprocessing the EEG signals, in two stages: filtering/noise reduction and data enhancement.
In the filtering/noise-reduction stage, a Notch filter first removes the 50 Hz power-line interference, and a Butterworth band-pass filter bank then splits the notch-filtered EEG signals into 4 frequency bands, theta (4-7 Hz), alpha (8-12 Hz), beta (13-30 Hz) and gamma (31-50 Hz), yielding the preprocessed EEG signals $X^c \in \mathbb{R}^{g \times L}$, where $c$ denotes the frequency band, $L$ the data length and $g$ the number of channels.
In the data-enhancement stage, the preprocessed EEG signals $X^c$ of the 4 frequency bands are segmented by sliding windows of length $l$ with sliding step $b$, the windows not overlapping one another. The $j$-th window yields the sample $X_j^c = \{x_p^{c,g}\} \in \mathbb{R}^{g \times l}$, where $x_p^{c,g}$ is the $p$-th data point in the $g$-th channel of the $c$-th band. Each sample is given a label indicating whether the user performed motor imagery of the corresponding action during the $j$-th window.
2) Building a deep convolutional neural network model based on an attention mechanism, feeding all samples and their labels into it, and training it with full supervision: the samples of all users are first used to train a pre-trained model, which is then fine-tuned with each individual user's samples to obtain a fine-tuned model matched to that user.
The attention-based deep convolutional neural network model comprises 4 branches, each branch comprising, in series:
(2.1) a data input layer for inputting the preprocessed EEG signal $X^c$ of the user's theta, alpha, beta or gamma band;
(2.2) a first convolution layer having a convolution kernel size of 1×l, a number of convolution kernels of 32, a regularization coefficient of 0.01, l being the length of the sliding window;
(2.3) a second convolution layer having a convolution kernel size of 1 x 16 and a number of convolution kernels of 32;
(2.4) a first concatenation layer, splicing the output of the first convolution layer and the output of the second convolution layer along the last dimension;
(2.5) a third convolution layer, with a convolution kernel size of g×1, a depth multiplier of 1 and a regularization coefficient of 0.01, g being the number of channels;
(2.6) a first batch normalization layer for accelerating model training and reducing overfitting;
(2.7) a first activation layer, using the ELU activation function;
(2.8) a first average pooling layer with a pooling kernel size of 1 x 4;
(2.9) a first Dropout layer having a Dropout probability of 0.5;
(2.10) a fourth convolution layer having a convolution kernel size of 1 x 16, a depth multiplier of 1, and a regularization coefficient of 0.01;
(2.11) a second batch normalization layer for accelerating model training and reducing overfitting;
(2.12) a second activation layer, using the ELU activation function;
(2.13) a second average pooling layer with a pooling kernel size of 1 x 8;
(2.14) a second Dropout layer having a Dropout probability of 0.5;
(2.15) an attention mechanism module, comprising:
(2.15.1) a Reshape layer converting the output size of the second Dropout layer to 8 x 64;
(2.15.2) a first fully connected layer having a neuron count of 64 and an activation function of tanh;
(2.15.3) a first rearrangement layer exchanging the 1 st dimension of the output with the 2 nd dimension;
(2.15.4) a second fully connected layer, with a neuron count of 1 and a softmax activation function;
(2.15.5) a second rearrangement layer, exchanging the 1st and 2nd dimensions of the output;
(2.15.6) a multiplication layer, multiplying the output of the second Dropout layer element-wise by the output of the second rearrangement layer;
(2.15.7) a custom addition layer, summing the output of the multiplication layer along the 2nd dimension;
(2.15.8) a flattening layer, flattening the output of the custom addition layer into a one-dimensional sequence;
The flattened outputs of the 4 branches are spliced by a second concatenation layer, whose output is connected to a third fully connected layer; the third fully connected layer uses softmax as its activation function, has 2 neurons, and outputs the decision for the specified action.
The intelligent rehabilitation hand device comprises a pneumatic glove, a control module mounted on the back of the glove, a bending sensor on the back of each finger of the glove connected to the control module by wires, a pressure sensor at the tip of each finger, and an air pump connected to each finger of the glove by air channels. The pneumatic glove uses the air pump to bend or stretch each finger independently and is made from a soft elastic glove for easy wearing and comfort. Each bending sensor outputs a voltage that varies with the current bending angle of its finger, and each pressure sensor outputs a voltage that varies with the force currently exerted by its finger. The control module is built around an integrated Bluetooth module, which polls the voltages of the bending and pressure sensors through a multiplexing switch and sends them to the MCU processor in the human-computer interaction interface; at the same time, the control module receives the action selection and parameter setting information sent by that MCU processor and decodes it into deflation/inflation control signals, air-channel selection signals, and speed control signals.
The air pump receives the deflation/inflation control signal, the air-channel selection signal and the speed control signal from the control module and controls the action of each finger of the pneumatic glove through the air channels.
Before use, the user selects a system mode, that is, the rehabilitation training mode or the rehabilitation effect evaluation mode; wherein:
(1) The rehabilitation training mode comprises the following using steps:
(1.1) a user selects actions required to perform rehabilitation training and sets training parameters;
(1.2) the user performs motor imagery of the corresponding action according to the screen prompt;
(1.3) the EEG intelligent decoding module decodes the user's EEG signals, determines whether motor imagery of the corresponding action was performed, and transmits the result via Wi-Fi to the MCU processor in the human-computer interaction interface;
(1.4) the MCU processor decides whether to drive the air pump of the corresponding finger according to the classification result, so as to drive the corresponding finger to move;
(2) The rehabilitation effect evaluation mode comprises the following using steps:
(2.1) the user selects the action required to evaluate the rehabilitation effect and sets the evaluation parameters;
(2.2) the user makes corresponding finger actions according to the screen prompt;
(2.3) a control module in the intelligent rehabilitation hand equipment reads real-time values of a bending sensor and a pressure sensor of the corresponding finger and sends the real-time values to an MCU processor in a man-machine interaction interface;
(2.4) the MCU processor gives a rehabilitation effect evaluation grade according to the received information.
The hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology can accurately acquire, effectively identify, and correctly classify EEG signals; by recognizing the user's action intention, it drives the intelligent rehabilitation hand device to perform the corresponding action, assisting the user in completing full-finger rehabilitation training. The user can also evaluate the hand rehabilitation effect with the system, closing the rehabilitation training loop and promoting rehabilitation more efficiently.
Drawings
FIG. 1 is a block diagram of a hand full-fingered rehabilitation training and assessment system based on artificial intelligence technology of the present invention;
FIG. 2 is a block diagram showing the constitution of the present invention in a rehabilitation effect evaluation mode;
FIG. 3 is a block diagram of a portable electroencephalogram acquisition apparatus according to the present invention;
FIG. 4 is a block diagram of a human-machine interface in accordance with the present invention;
FIG. 5 is a diagram of a selection of actions of a human-machine interface in the present invention;
FIG. 6 is a flow chart of analysis of the intelligent electroencephalogram decoding module in the present invention;
FIG. 7 is a block diagram of a deep convolutional neural network model based on the attention mechanism of the present invention;
FIG. 8 is a block diagram of the attention mechanism module of the present invention;
FIG. 9 is a schematic diagram of the structure of the intelligent rehabilitation hand device of the present invention;
FIG. 10 is a schematic diagram of the connection relationship of the air pump in the present invention;
FIG. 11 is a schematic diagram of a hand total finger rehabilitation training mode and rehabilitation effect evaluation mode according to the present invention.
Detailed Description
The following describes the hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology in detail with reference to the embodiments and the drawings.
As shown in FIG. 1, the hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology comprises, connected in sequence: the portable EEG acquisition device 1, the human-computer interaction interface 2, the EEG intelligent decoding module 3 and the intelligent rehabilitation hand device 4; the system has two working modes, a rehabilitation training mode and a rehabilitation effect evaluation mode;
in the rehabilitation training mode, the user selects a rehabilitation action through the human-computer interaction interface 2 and performs motor imagery of the corresponding action following the on-screen prompts; the portable EEG acquisition device 1 acquires EEG signals from the user's brain; the EEG intelligent decoding module 3 decodes the acquired EEG signals using deep learning, determines whether the user performed motor imagery of the corresponding action, and sends the result to the intelligent rehabilitation hand device 4; based on this result, the intelligent rehabilitation hand device 4 drives the corresponding finger joints via the air pump to complete the rehabilitation action;
as shown in fig. 2, in the rehabilitation effect evaluation mode, the user first performs rehabilitation action selection through the human-computer interaction interface 2, and then makes a finger action corresponding to the selected rehabilitation action; the intelligent rehabilitation hand device 4 sends the signals of the finger actions to the MCU processor 24 in the man-machine interaction interface 2 in a wireless mode, and then the MCU processor 24 evaluates the rehabilitation effect according to the sensor signal data.
As shown in fig. 5, the rehabilitation training mode and the rehabilitation effect evaluation mode support 12 finger actions: bending the thumb, bending the index finger, bending the middle finger, bending the ring finger, bending the little finger, bending thumb + index finger, bending thumb + middle finger, bending thumb + ring finger, bending thumb + little finger, bending all fingers, and stretching all fingers.
As shown in fig. 3, the portable EEG acquisition device 1 comprises: an EEG electrode cap with its connecting cable 11, connected in sequence, for acquiring EEG signals; a physiological-signal acquisition and conversion module 12 for amplifying the EEG signals and converting them to digital form; an integrated Wi-Fi module 13 for controlling the physiological-signal acquisition and conversion module 12; and a power supply circuit 14 connected to both the physiological-signal acquisition and conversion module 12 and the integrated Wi-Fi module 13. The electrodes of the EEG electrode cap are in direct contact with the user's scalp to pick up EEG signals from the scalp surface, and the cap is connected to the physiological-signal acquisition and conversion module 12 through a Y2 interface on the connecting cable 11 for transmitting the EEG signals; the integrated Wi-Fi module 13 reads the data from the physiological-signal acquisition and conversion module 12 and sends them to the EEG intelligent decoding module 3.
The EEG electrode cap and its connecting cable 11 acquire EEG signals from sixteen electrodes, namely FP1, FP2, F3, Fz, F4, FCz, T3, C3, Cz, C4, T4, P3, Pz, P4, Oz and A1; the electrode placement conforms to the international 10/20 standard.
The physiological-signal acquisition and conversion module 12 is built around a bioelectric-signal acquisition chip that integrates multiple analog input channels with a high common-mode rejection ratio for receiving the EEG signals from the electrode cap, a low-noise programmable-gain amplifier (PGA) for amplifying them, and a high-resolution synchronously sampling analog-to-digital converter (ADC) for digitizing them.
The integrated Wi-Fi module 13 uses an ESP-12F module with IEEE 802.11b/g/n radio-frequency wireless communication and common I2C, UART, SPI, ADC and GPIO interfaces; it adjusts the PGA gain and the ADC sampling rate of the physiological-signal acquisition and conversion module 12 and reads the acquired EEG signals through the SPI interface. The integrated Wi-Fi module 13 works in AP mode, establishing a wireless connection with the EEG intelligent decoding module 3 to transmit the EEG signals.
The power supply circuit 14 is powered by a 4.2 V rechargeable lithium battery, with a switch selecting between charging mode and working mode: when the switch is off, the circuit enters charging mode and the user connects the device to a 5 V supply via a USB cable; in charging mode, an integrated transformer module electrically isolates the circuit board from the external supply, preventing overvoltage damage; when the switch is on, the circuit enters working mode and charging is disabled; in working mode, several low-noise linear regulators with different output voltages meet the supply requirements of the various devices on the board.
As shown in fig. 4, the human-computer interaction interface 2 comprises: a touch display screen 21, a Wi-Fi module 22, a Bluetooth module 23, a voice prompt module 24, and an MCU processor 25 connected to each of them and controlling their operation. The MCU processor 25 runs an embedded operating system and handles driving the touch display screen 21, data transmission and reception for the Wi-Fi module 22 and the Bluetooth module 23, and playing voice prompts through the voice prompt module 24. The user selects actions and sets parameters on the touch display screen 21 and performs rehabilitation training or rehabilitation effect evaluation following its on-screen prompts. The MCU processor 25 communicates with the EEG intelligent decoding module 3 via the Wi-Fi module 22, reads the sensor data of the intelligent rehabilitation hand device 4 via the Bluetooth module 23, and transmits action selections and parameter settings to the intelligent rehabilitation hand device 4.
In the rehabilitation training mode, the user first selects the action to be trained on the touch display screen 21 of the human-computer interaction interface 2 and adds it to the training queue. The training queue stores the combination of actions the user needs, holding up to 10 actions. The user then enters the number of repetitions for each action on the touch display screen 21; the default is 10 repetitions per action and the maximum is 30. Once training starts, the touch display screen 21 shows a picture of the current training action and the user performs motor imagery of that action; the portable EEG acquisition device 1 acquires the user's EEG signals in real time, the EEG intelligent decoding module 3 analyzes them to determine whether the user performed the corresponding motor imagery and sends the classification result to the Wi-Fi module 22 in the human-computer interaction interface 2, and the MCU processor 25 then decides from the classification result whether to drive the air pump motor for the corresponding finger.
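As a small illustration of the queue limits just described (10 actions maximum, 10 repetitions by default, 30 at most), the bookkeeping could look like the following sketch; the class and method names are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass, field

MAX_QUEUE_ACTIONS = 10  # training queue holds at most 10 actions
DEFAULT_REPS = 10       # default repetitions per action
MAX_REPS = 30           # upper limit on repetitions per action

@dataclass
class TrainingQueue:
    """Holds the (action, repetitions) pairs selected on the touch screen."""
    items: list = field(default_factory=list)

    def add(self, action: str, reps: int = DEFAULT_REPS) -> None:
        if len(self.items) >= MAX_QUEUE_ACTIONS:
            raise ValueError("training queue full (10 actions max)")
        if not 1 <= reps <= MAX_REPS:
            raise ValueError("repetitions must be between 1 and 30")
        self.items.append((action, reps))

queue = TrainingQueue()
queue.add("bend thumb")             # 10 repetitions by default
queue.add("bend index finger", 25)  # explicit repetition count
```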
In the rehabilitation effect evaluation mode, the user first selects the action to be evaluated on the touch display screen 21 of the human-computer interaction interface 2 and adds it to the queue, which stores the combination of actions the user needs and holds up to 10 actions; the number of evaluations per action is fixed at 5 and cannot be adjusted by the user. Once evaluation starts, the touch display screen 21 shows a picture of the current action and the user actually moves the fingers accordingly; the intelligent rehabilitation hand device 4 reads the bending and pressure sensors on each finger and sends the sensor data to the MCU processor 25 in the human-computer interaction interface 2 via Bluetooth, and the MCU processor 25 then derives the rehabilitation effect evaluation result from the sensor data.
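The grading rules themselves are given in Table 1, which is not reproduced in this text, so the sketch below only illustrates the data flow from sensor readings to a grade; the targets, weighting and grade boundaries are placeholders, not the patent's values.

```python
def evaluate_action(bend_volts, press_volts, bend_target, press_target):
    """Score one evaluated action from bend/pressure sensor voltages.

    bend_volts / press_volts: readings over the 5 evaluation repetitions.
    The targets and the grade boundaries below are hypothetical; the
    patent's actual grading rules are in its (unreproduced) Table 1.
    """
    bend_ratio = sum(bend_volts) / (len(bend_volts) * bend_target)
    press_ratio = sum(press_volts) / (len(press_volts) * press_target)
    score = 0.5 * min(bend_ratio, 1.0) + 0.5 * min(press_ratio, 1.0)
    if score > 0.8:
        return "good"
    if score > 0.5:
        return "moderate"
    return "poor"
```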
As shown in fig. 6, the EEG intelligent decoding module 3 analyzes and processes the EEG signals using deep learning, in the following steps:
1) Preprocessing the EEG signals, in two stages: filtering/noise reduction and data enhancement.
In the filtering/noise-reduction stage, a Notch filter first removes the 50 Hz power-line interference, and a Butterworth band-pass filter bank then splits the notch-filtered EEG signals into 4 frequency bands, theta (4-7 Hz), alpha (8-12 Hz), beta (13-30 Hz) and gamma (31-50 Hz), yielding the preprocessed EEG signals $X^c \in \mathbb{R}^{g \times L}$, where $c$ denotes the frequency band, $L$ the data length and $g$ the number of channels.
In the data-enhancement stage, the preprocessed EEG signals $X^c$ of the 4 frequency bands are segmented by sliding windows of length $l$ with sliding step $b$, the windows not overlapping one another. The $j$-th window yields the sample $X_j^c = \{x_p^{c,g}\} \in \mathbb{R}^{g \times l}$, where $x_p^{c,g}$ is the $p$-th data point in the $g$-th channel of the $c$-th band. Each sample is given a label indicating whether the user performed motor imagery of the corresponding action during the $j$-th window.
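These two preprocessing stages map naturally onto scipy.signal. The sketch below is an assumed implementation: the patent does not fix the sampling rate, filter order, or window parameters, so fs, the Butterworth order, and the l and b arguments here are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

BANDS = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30), "gamma": (31, 50)}

def preprocess(eeg: np.ndarray, fs: float = 250.0) -> dict:
    """Split a (g, L) EEG array into the 4 preprocessed band signals X^c."""
    b0, a0 = iirnotch(w0=50.0, Q=30.0, fs=fs)  # 50 Hz power-line notch
    eeg = filtfilt(b0, a0, eeg, axis=-1)
    bank = {}
    for band, (lo, hi) in BANDS.items():       # Butterworth band-pass bank
        b, a = butter(N=4, Wn=[lo, hi], btype="bandpass", fs=fs)
        bank[band] = filtfilt(b, a, eeg, axis=-1)
    return bank

def segment(x: np.ndarray, l: int, b: int) -> np.ndarray:
    """Cut a (g, L) band signal into sliding-window samples X_j of shape (g, l)."""
    starts = range(0, x.shape[-1] - l + 1, b)  # step b; b = l gives no overlap
    return np.stack([x[:, s:s + l] for s in starts])  # (n_samples, g, l)
```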
2) Building the attention-based deep convolutional neural network model, feeding all samples and their labels into it, and training it with full supervision: the samples of all users are first used to train a pre-trained model, which is then fine-tuned with each individual user's samples to obtain a fine-tuned model matched to that user. Pre-training uses a batch size of 256, a learning rate of 0.001 and 500 training iterations; fine-tuning uses a batch size of 16, a learning rate of 0.0001 and 200 training iterations. The pre-trained model and the fine-tuned models share exactly the same structure.
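A minimal tf.keras sketch of this two-stage scheme, using the hyperparameters quoted above, might look as follows; build_model, the dataset tuples, and the reading of "training times" as epochs are assumptions.

```python
import tensorflow as tf

def pretrain_and_finetune(build_model, all_users_data, user_data):
    """Two-stage scheme: pretrain on every user's samples, then fine-tune.

    all_users_data / user_data: ([x_theta, x_alpha, x_beta, x_gamma], labels)
    tuples; build_model() returns the attention-based network sketched later.
    """
    # Stage 1: pre-trained model over the pooled samples of all users.
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(*all_users_data, batch_size=256, epochs=500)

    # Stage 2: fine-tune the same architecture per user at a lower rate.
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(*user_data, batch_size=16, epochs=200)
    return model
```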
As shown in fig. 7, the attention-based deep convolutional neural network model comprises 4 branches, one per sample input of the theta, alpha, beta or gamma band, each branch comprising, in series (a Keras-style sketch of the full model follows the layer list below):
(2.1) a data input layer for inputting the preprocessed EEG signal $X^c$ of the user's theta, alpha, beta or gamma band;
(2.2) a first convolution layer having a convolution kernel size of 1×l, a number of convolution kernels of 32, a regularization coefficient of 0.01, l being the length of the sliding window;
(2.3) a second convolution layer having a convolution kernel size of 1 x 16 and a number of convolution kernels of 32;
(2.4) a first concatenation layer, splicing the output of the first convolution layer and the output of the second convolution layer along the last dimension;
(2.5) a third convolution layer, with a convolution kernel size of g×1, a depth multiplier of 1 and a regularization coefficient of 0.01, g being the number of channels;
(2.6) a first batch normalization layer for accelerating model training and reducing overfitting;
(2.7) a first activation layer, using the ELU activation function;
(2.8) a first average pooling layer with a pooling kernel size of 1 x 4;
(2.9) a first Dropout layer having a Dropout probability of 0.5;
(2.10) a fourth convolution layer having a convolution kernel size of 1 x 16, a depth multiplier of 1, and a regularization coefficient of 0.01;
(2.11) a second batch normalization layer for accelerating model training and reducing overfitting;
(2.12) a second activation layer, using the ELU activation function;
(2.13) a second average pooling layer with a pooling kernel size of 1 x 8;
(2.14) a second Dropout layer having a Dropout probability of 0.5;
(2.15) an attention mechanism module, as shown in fig. 8, the attention mechanism module includes:
(2.15.1) a Reshape layer converting the output size of the second Dropout layer to 8 x 64;
(2.15.2) a first fully connected layer having a neuron count of 64 and an activation function of tanh;
(2.15.3) a first rearrangement layer exchanging the 1 st dimension of the output with the 2 nd dimension;
(2.15.4) a second fully connected layer, with a neuron count of 1 and a softmax activation function;
(2.15.5) a second rearrangement layer, exchanging the 1st and 2nd dimensions of the output;
(2.15.6) a multiplication layer, multiplying the output of the second Dropout layer element-wise by the output of the second rearrangement layer;
(2.15.7) a custom addition layer, summing the output of the multiplication layer along the 2nd dimension;
(2.15.8) a flattening layer, flattening the output of the custom addition layer into a one-dimensional sequence;
The flattened outputs of the 4 branches are spliced by a second concatenation layer, whose output is connected to a third fully connected layer; the third fully connected layer uses softmax as its activation function, has 2 neurons, and outputs the decision for the specified action.
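For concreteness, here is a minimal Keras sketch of the four-branch model following the layer list above. It is a reading of the text, not the patent's own code: it assumes g = 16 channels and a window length l = 256 (the value for which the final feature map reshapes cleanly to 8 × 64), uses depthwise convolutions for the layers described with a depth multiplier of 1, and applies Dense(1, activation="softmax") literally as written.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

G, L_WIN = 16, 256  # assumed channel count and window length; L_WIN = 256
                    # makes the final map reshape cleanly to 8 x 64

def make_branch(inp):
    """One frequency-band branch, following layers (2.2)-(2.15.8) above."""
    reg = regularizers.l2(0.01)
    c1 = layers.Conv2D(32, (1, L_WIN), padding="same",
                       kernel_regularizer=reg)(inp)           # first conv layer
    c2 = layers.Conv2D(32, (1, 16), padding="same")(inp)      # second conv layer
    x = layers.Concatenate(axis=-1)([c1, c2])                 # first concatenation
    x = layers.DepthwiseConv2D((G, 1), depth_multiplier=1,
                               depthwise_regularizer=reg)(x)  # third conv (g x 1)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 4))(x)
    x = layers.Dropout(0.5)(x)
    x = layers.DepthwiseConv2D((1, 16), depth_multiplier=1, padding="same",
                               depthwise_regularizer=reg)(x)  # fourth conv layer
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 8))(x)
    d = layers.Dropout(0.5)(x)
    # Attention module (2.15): reshape to 8 x 64 and weight the feature map.
    a = layers.Reshape((8, 64))(d)
    a = layers.Dense(64, activation="tanh")(a)                # first FC layer
    a = layers.Permute((2, 1))(a)                             # first rearrangement
    a = layers.Dense(1, activation="softmax")(a)              # second FC, as written
    a = layers.Permute((2, 1))(a)                             # second rearrangement
    x = layers.Multiply()([layers.Reshape((8, 64))(d), a])    # element-wise weighting
    x = layers.Lambda(lambda t: tf.reduce_sum(t, axis=2))(x)  # custom addition layer
    return layers.Flatten()(x)                                # flattening layer

inputs = [layers.Input(shape=(G, L_WIN, 1), name=band)
          for band in ("theta", "alpha", "beta", "gamma")]
merged = layers.Concatenate()([make_branch(i) for i in inputs])  # second concatenation
output = layers.Dense(2, activation="softmax")(merged)        # third FC: decision
model = models.Model(inputs, output)
```

Note that a softmax over a single neuron is constant as written; published attention variants usually apply the softmax across the sequence axis instead, so that line is a literal rather than charitable reading of the text.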
As shown in fig. 9, the intelligent rehabilitation hand device 4 comprises a pneumatic glove 41, a control module 44 mounted on the back of the glove, a bending sensor 42 on the back of each finger connected to the control module 44 by wires, a pressure sensor 43 at the tip of each finger, and an air pump 45 connected to each finger of the pneumatic glove 41 by air channels. The pneumatic glove 41 uses the air pump 45 to bend or stretch each finger independently and is made from a soft elastic glove for easy wearing and comfort. Each bending sensor 42 outputs a voltage that varies with the current bending angle of its finger, reflecting the bending angles of the fingers in real time; each pressure sensor 43 outputs a voltage that varies with the force exerted by its finger, reflecting the force levels in real time. The control module 44 is built around an nRF52832 integrated Bluetooth module, which polls the voltages of the bending sensors 42 and pressure sensors 43 through a multiplexing switch and sends them to the MCU processor 25 in the human-computer interaction interface 2; at the same time, the control module 44 receives the action selection and parameter setting information sent by the MCU processor 25 and decodes it into deflation/inflation control signals, air-channel selection signals, and speed control signals. The MCU processor 25 assigns a rehabilitation effect evaluation grade from the sensor information.
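A schematic sketch of the control module's polling loop described above follows; the three hardware-access callables and the polling rate are hypothetical stand-ins (the patent does not specify the nRF52832 firmware), with one bend and one pressure channel per finger on the multiplexing switch.

```python
import time

FINGERS = ("thumb", "index", "middle", "ring", "little")

def poll_sensors(select_mux_channel, read_adc, send_ble):
    """Poll each finger's bend and pressure sensor voltage through the
    multiplexing switch and forward the readings over Bluetooth.

    select_mux_channel, read_adc and send_ble are hypothetical helpers
    standing in for the firmware's hardware access.
    """
    while True:
        frame = {}
        for i, finger in enumerate(FINGERS):
            select_mux_channel(2 * i)        # bend sensor channel
            frame[f"{finger}_bend_v"] = read_adc()
            select_mux_channel(2 * i + 1)    # pressure sensor channel
            frame[f"{finger}_press_v"] = read_adc()
        send_ble(frame)                      # to the MCU processor 25
        time.sleep(0.02)                     # ~50 Hz polling, assumed rate
```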
As shown in fig. 10, the air pump 45 receives the deflation/inflation control signal, the air-channel selection signal and the speed control signal from the control module 44 and controls the action of each finger of the pneumatic glove 41 through the air channels. The deflation/inflation control signal sets the direction of airflow and thus whether a finger stretches or bends; the air-channel selection signal selects which finger stretches or bends; the speed control signal sets the speed at which the finger stretches or bends.
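Decoding an action selection into these three signals might look like the following sketch; the message fields, the bending-via-inflation convention, and the finger-to-air-channel mapping are illustrative assumptions, not specified by the patent.

```python
FINGER_CHANNEL = {"thumb": 0, "index": 1, "middle": 2, "ring": 3, "little": 4}

def decode_command(action: str, fingers: list, speed: int) -> dict:
    """Translate an action selection into the pump's three signals:
    airflow direction (inflate to bend, deflate to stretch), an
    air-channel mask, and a speed level. Encoding is illustrative."""
    inflate = action == "bend"           # assumed: inflation bends the finger
    channel_mask = 0
    for f in fingers:
        channel_mask |= 1 << FINGER_CHANNEL[f]
    return {"inflate": inflate, "channels": channel_mask, "speed": speed}

# e.g. "bend thumb + index finger" at speed level 2:
cmd = decode_command("bend", ["thumb", "index"], speed=2)
```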
Table 1. Rehabilitation effect evaluation grade table
As shown in FIG. 11, before using the hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology, the user selects a system mode, that is, the rehabilitation training mode or the rehabilitation effect evaluation mode; wherein:
(1) The rehabilitation training mode comprises the following using steps:
(1.1) a user selects actions required to perform rehabilitation training and sets training parameters;
(1.2) the user performs motor imagery of the corresponding action according to the screen prompt;
(1.3) the EEG intelligent decoding module (3) decodes the user's EEG signals, determines whether motor imagery of the corresponding action was performed, and transmits the result via Wi-Fi to the MCU processor (25) in the human-computer interaction interface (2);
(1.4) the MCU processor (25) decides from the classification result whether to drive the air pump (45) of the corresponding finger, thereby driving that finger to move;
(2) The rehabilitation effect evaluation mode comprises the following using steps:
(2.1) the user selects the action required to evaluate the rehabilitation effect and sets the evaluation parameters;
(2.2) the user makes corresponding finger actions according to the screen prompt;
(2.3) the control module (44) in the intelligent rehabilitation hand device (4) reads the real-time values of the bending sensor (42) and pressure sensor (43) of the corresponding finger and sends them to the MCU processor (25) in the human-computer interaction interface (2);
(2.4) the MCU processor (25) assigns a rehabilitation effect evaluation grade from the received information.
Those skilled in the art will readily appreciate that the foregoing is only a preferred embodiment of the invention and is not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention fall within its scope.

Claims (10)

1. A hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology, comprising, connected in sequence, a portable EEG acquisition device (1), a human-computer interaction interface (2), an EEG intelligent decoding module (3) and an intelligent rehabilitation hand device (4), the system being characterized by two working modes, a rehabilitation training mode and a rehabilitation effect evaluation mode;
in the rehabilitation training mode, the user selects a rehabilitation action through the human-computer interaction interface (2) and performs motor imagery of the corresponding action following the on-screen prompts; the portable EEG acquisition device (1) acquires EEG signals from the user's brain; the EEG intelligent decoding module (3) decodes the acquired EEG signals using deep learning, determines whether the user performed motor imagery of the corresponding action, and sends the result to the intelligent rehabilitation hand device (4); based on this result, the intelligent rehabilitation hand device (4) drives the corresponding finger joints via the air pump to complete the rehabilitation action;
the EEG intelligent decoding module (3) adopts a deep learning method to analyze and process EEG signals, and specifically needs the following steps:
1) Preprocessing the EEG signals, in two stages: filtering/noise reduction and data enhancement.
In the filtering/noise-reduction stage, a Notch filter first removes the 50 Hz power-line interference, and a Butterworth band-pass filter bank then splits the notch-filtered EEG signals into 4 frequency bands, theta (4-7 Hz), alpha (8-12 Hz), beta (13-30 Hz) and gamma (31-50 Hz), yielding the preprocessed EEG signals $X^c \in \mathbb{R}^{g \times L}$, where $c$ denotes the frequency band, $L$ the data length and $g$ the number of channels.
In the data-enhancement stage, the preprocessed EEG signals $X^c$ of the 4 frequency bands are segmented by sliding windows of length $l$ with sliding step $b$, the windows not overlapping one another. The $j$-th window yields the sample $X_j^c = \{x_p^{c,g}\} \in \mathbb{R}^{g \times l}$, where $x_p^{c,g}$ is the $p$-th data point in the $g$-th channel of the $c$-th band. Each sample is given a label indicating whether the user performed motor imagery of the corresponding action during the $j$-th window;
2) Building a deep convolutional neural network model based on an attention mechanism, feeding all samples and their labels into it, and training it with full supervision: the samples of all users are first used to train a pre-trained model, which is then fine-tuned with each individual user's samples to obtain a fine-tuned model matched to that user;
in the rehabilitation effect evaluation mode, the user first selects a rehabilitation action through the human-computer interaction interface (2) and then performs the corresponding finger action; the intelligent rehabilitation hand device (4) wirelessly transmits the finger-action signals to the human-computer interaction interface (2), where the rehabilitation effect is evaluated.
2. The hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the rehabilitation training mode and the rehabilitation effect evaluation mode support 12 finger actions: bending the thumb, bending the index finger, bending the middle finger, bending the ring finger, bending the little finger, bending thumb + index finger, bending thumb + middle finger, bending thumb + ring finger, bending thumb + little finger, bending all fingers, and stretching all fingers.
3. The hand full-finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the portable EEG acquisition device (1) comprises: an EEG electrode cap with its connecting cable (11), connected in sequence, for acquiring EEG signals; a physiological-signal acquisition and conversion module (12) for amplifying the EEG signals and converting them to digital form; an integrated Wi-Fi module (13) for controlling the physiological-signal acquisition and conversion module (12); and a power supply circuit (14) connected to both the physiological-signal acquisition and conversion module (12) and the integrated Wi-Fi module (13); the electrodes of the EEG electrode cap are in direct contact with the user's scalp to pick up EEG signals from the scalp surface, and the cap is connected to the physiological-signal acquisition and conversion module (12) through a Y2 interface on the connecting cable (11) for transmitting the EEG signals; the integrated Wi-Fi module (13) reads the data from the physiological-signal acquisition and conversion module (12) and sends them to the EEG intelligent decoding module (3).
4. The hand total finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 3, wherein the EEG electrode cap and its connecting wire (11) acquire, through the electrodes, the EEG signals of sixteen electrodes corresponding to the user's FP1, FP2, F3, Fz, F4, FCz, T3, C3, Cz, C4, T4, P3, Pz, P4, Oz, and A1 positions; the electrode distribution of the electrode cap conforms to the international 10/20 system.
5. The hand total finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 3, wherein the power supply circuit (14) is powered by a 4.2 V rechargeable lithium battery and switches between a charging mode and a working mode by means of a switch: when the switch is turned off, the power supply circuit enters the charging mode, and the user connects the device to a 5 V power supply interface with a USB cable for charging; in the charging mode, the power supply circuit uses an integrated transformer module to electrically isolate the circuit board from the external power supply, preventing overvoltage damage to the circuit; when the switch is turned on, the power supply circuit enters the working mode and the charging mode is disabled; in the working mode, the power supply circuit uses several low-noise linear regulators with different output voltages to meet the power supply requirements of the different devices on the circuit board.
6. The hand total finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the human-computer interaction interface (2) comprises: a touch display screen (21), a Wi-Fi module (22), a Bluetooth module (23), a voice prompt module (24), and an MCU processor (25) connected respectively to the touch display screen (21), the Wi-Fi module (22), the Bluetooth module (23), and the voice prompt module (24); the MCU processor (25) works under an embedded operating system and handles driving the touch display screen (21), data transceiving of the Wi-Fi module (22), data transceiving of the Bluetooth module (23), and playing voice prompts through the voice prompt module (24); the user selects actions and sets parameters through the touch display screen (21) and carries out rehabilitation training or rehabilitation effect evaluation following the prompts on the touch display screen (21); the MCU processor (25) communicates with the electroencephalogram intelligent decoding module (3) through the Wi-Fi module (22), reads the sensor information of the intelligent rehabilitation hand equipment (4) through the Bluetooth module (23), and transmits the action selection and parameter setting information to the intelligent rehabilitation hand equipment (4).
7. The hand total finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the deep convolutional neural network model based on the attention mechanism comprises 4 branches, each branch comprising the following layers in series:
(2.1) a data input layer, which inputs the preprocessed EEG signal data of the user's theta, alpha, beta, or gamma frequency band;
(2.2) a first convolution layer, with convolution kernel size 1×l, 32 convolution kernels, and regularization coefficient 0.01, l being the length of the sliding window;
(2.3) a second convolution layer, with convolution kernel size 1×16 and 32 convolution kernels;
(2.4) a first series layer, which splices the output of the first convolution layer and the output of the second convolution layer along the last dimension;
(2.5) a third convolution layer, with convolution kernel size g×1, depth multiplier 1, and regularization coefficient 0.01, g being the number of channels;
(2.6) a first batch normalization layer, for accelerating model training and reducing overfitting;
(2.7) a first activation function layer, using the ELU activation function;
(2.8) a first average pooling layer, with pooling kernel size 1×4;
(2.9) a first Dropout layer, with Dropout probability 0.5;
(2.10) a fourth convolution layer, with convolution kernel size 1×16, depth multiplier 1, and regularization coefficient 0.01;
(2.11) a second batch normalization layer, for accelerating model training and reducing overfitting;
(2.12) a second activation function layer, using the ELU activation function;
(2.13) a second average pooling layer, with pooling kernel size 1×8;
(2.14) a second Dropout layer, with Dropout probability 0.5;
(2.15) an attention mechanism module, comprising:
(2.15.1) a Reshape layer, which converts the output of the second Dropout layer to size 8×64;
(2.15.2) a first fully connected layer, with 64 neurons and the tanh activation function;
(2.15.3) a first rearrangement layer, which exchanges the 1st and 2nd dimensions of its input;
(2.15.4) a second fully connected layer, with 1 neuron and the softmax activation function;
(2.15.5) a second rearrangement layer, which exchanges the 1st and 2nd dimensions of its input;
(2.15.6) a multiplication layer, which multiplies the output of the second Dropout layer element-wise with the output of the second rearrangement layer;
(2.15.7) a custom addition layer, which sums the output of the multiplication layer along the 2nd dimension;
(2.15.8) a flattening layer, which flattens the output of the custom addition layer into a one-dimensional sequence;
the outputs of the flattening layers of the 4 branches are spliced by a second series layer, and the output of the second series layer is connected to a third fully connected layer; the third fully connected layer uses softmax as its activation function, has 2 neurons, and outputs the judgment result for the specified action (a sketch of this architecture follows this claim).
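For concreteness, here is a minimal tf.keras sketch of one plausible reading of this four-branch architecture. The channel count g = 16 and window length l = 256 are assumptions carried over from the sketches above; the 'same' padding on the two parallel convolutions (needed so their outputs can be concatenated in (2.4)) and the axis over which the softmax of (2.15.4) is normalized are likewise interpretive choices, not details fixed by the claim.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

g, l = 16, 256  # assumed: 16 EEG channels, 256-sample sliding window

def build_branch(band: str):
    inp = layers.Input(shape=(g, l, 1), name=f"{band}_input")            # (2.1)
    c1 = layers.Conv2D(32, (1, l), padding="same",
                       kernel_regularizer=regularizers.l2(0.01))(inp)    # (2.2) 1xl conv
    c2 = layers.Conv2D(32, (1, 16), padding="same")(inp)                 # (2.3) 1x16 conv
    x = layers.Concatenate(axis=-1)([c1, c2])                            # (2.4) first series layer
    x = layers.DepthwiseConv2D((g, 1), depth_multiplier=1,
                               depthwise_regularizer=regularizers.l2(0.01))(x)  # (2.5) gx1
    x = layers.BatchNormalization()(x)                                   # (2.6)
    x = layers.Activation("elu")(x)                                      # (2.7)
    x = layers.AveragePooling2D((1, 4))(x)                               # (2.8)
    x = layers.Dropout(0.5)(x)                                           # (2.9)
    x = layers.DepthwiseConv2D((1, 16), depth_multiplier=1, padding="same",
                               depthwise_regularizer=regularizers.l2(0.01))(x)  # (2.10)
    x = layers.BatchNormalization()(x)                                   # (2.11)
    x = layers.Activation("elu")(x)                                      # (2.12)
    x = layers.AveragePooling2D((1, 8))(x)                               # (2.13)
    x = layers.Dropout(0.5)(x)                                           # (2.14) -> (1, l/32, 64)
    # (2.15) attention mechanism module
    a = layers.Reshape((l // 32, 64))(x)                                 # (2.15.1) 8x64 for l = 256
    s = layers.Dense(64, activation="tanh")(a)                           # (2.15.2)
    s = layers.Permute((2, 1))(s)                                        # (2.15.3) -> (64, 8)
    s = layers.Dense(1)(s)                                               # (2.15.4) 1 neuron
    s = layers.Softmax(axis=1)(s)   # softmax over the 64 weights; the axis is our interpretation
    s = layers.Permute((2, 1))(s)                                        # (2.15.5) -> (1, 64)
    x = layers.Multiply()([a, s])                                        # (2.15.6) broadcast over time steps
    x = layers.Lambda(lambda t: tf.reduce_sum(t, axis=1))(x)             # (2.15.7) sum along 2nd dim
    return inp, layers.Flatten()(x)                                      # (2.15.8)

branches = [build_branch(b) for b in ("theta", "alpha", "beta", "gamma")]
merged = layers.Concatenate()([out for _, out in branches])              # second series layer
pred = layers.Dense(2, activation="softmax")(merged)                     # third fully connected layer
model = models.Model([i for i, _ in branches], pred)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```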
8. The hand total finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the intelligent rehabilitation hand equipment (4) comprises a pneumatic glove (41); a control module (44) arranged on the back of the hand of the pneumatic glove; bending sensors (42), each arranged on the back side of a finger of the pneumatic glove and connected to the control module (44) by wires; pressure sensors (43) arranged at the front end of each finger of the pneumatic glove; and an air pump (45) connected to each finger of the pneumatic glove (41) through an air path; the pneumatic glove (41) uses the air pump (45) to drive each finger to bend or stretch independently, and is made as a soft elastic glove, making it convenient to wear and comfortable for the user; each bending sensor (42) outputs a voltage that varies with the current bending angle of its finger, and each pressure sensor (43) outputs a voltage that varies with the force exerted by its finger; the control module (44) is built around an integrated Bluetooth module, which polls the voltage values output by the bending sensors (42) and the pressure sensors (43) through a multiplexing switch and sends them to the MCU processor (25) in the human-computer interaction interface (2); at the same time, the control module (44) receives the action selection and parameter setting information sent by the MCU processor (25) in the human-computer interaction interface (2) and decodes it into a pumping/inflation control signal, an air path selection signal, and a speed control signal (a hypothetical sketch of this polling and decoding follows claim 9).
9. The hand total finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 8, wherein the air pump (45) receives the pumping/inflation control signal, the air path selection signal, and the speed control signal from the control module (44), and controls the motion of each finger of the pneumatic glove (41) through the air paths.
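The control flow of claims 8 and 9 can be illustrated with a hypothetical sketch. Nothing below is disclosed firmware: the sensor/ADC stand-ins, the 10-channel multiplexer ordering, and the 3-byte command layout are all invented for illustration; only the overall structure (polling bending and pressure sensors through a multiplexing switch, framing readings for Bluetooth, and decoding commands into pump, air-path, and speed signals) follows the claims.

```python
import random
import struct

N_FINGERS = 5

def select_mux_channel(ch: int) -> None:
    """Stand-in for driving the multiplexing switch address lines."""
    pass  # real firmware would set GPIO pins here

def read_adc() -> float:
    """Stand-in for the ADC behind the multiplexer; returns volts."""
    return random.uniform(0.0, 3.3)

def poll_sensors() -> bytes:
    """Poll 5 bending + 5 pressure sensors and pack one frame for the
    Bluetooth link to the MCU processor (25)."""
    readings = []
    for ch in range(2 * N_FINGERS):      # assumed order: channels 0-4 bend, 5-9 pressure
        select_mux_channel(ch)
        readings.append(read_adc())
    return struct.pack("<10f", *readings)

def decode_command(frame: bytes) -> dict:
    """Decode an action-selection/parameter frame from the MCU into the
    pump control, air-path selection, and speed signals of claim 8.
    The frame layout (1 byte each: pump 0=pump out / 1=inflate, finger
    index, speed 0-255) is an assumption for illustration."""
    pump, path, speed = struct.unpack("<BBB", frame)
    return {"inflate": bool(pump), "air_path": path, "speed": speed}

if __name__ == "__main__":
    print(poll_sensors().hex())
    print(decode_command(bytes([1, 2, 128])))
```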
10. The hand total finger rehabilitation training and evaluation system based on artificial intelligence technology according to claim 1, wherein the user performs system mode selection before use, i.e., selects the rehabilitation training mode or the rehabilitation effect evaluation mode; wherein:
(1) The rehabilitation training mode comprises the following steps:
(1.1) the user selects the actions on which rehabilitation training is to be performed and sets the training parameters;
(1.2) the user performs motor imagery of the corresponding action according to the screen prompt;
(1.3) the electroencephalogram intelligent decoding module (3) decodes the user's EEG signals, judges whether motor imagery of the corresponding action was performed, and transmits the result to the MCU processor (25) in the human-computer interaction interface (2) through Wi-Fi;
(1.4) the MCU processor (25) decides, according to the classification result, whether to drive the air pump (45) for the corresponding finger, thereby driving that finger to move;
(2) The rehabilitation effect evaluation mode comprises the following steps:
(2.1) the user selects the actions for which the rehabilitation effect is to be evaluated and sets the evaluation parameters;
(2.2) the user makes the corresponding finger actions according to the screen prompt;
(2.3) the control module (44) in the intelligent rehabilitation hand equipment (4) reads the real-time values of the bending sensor (42) and the pressure sensor (43) of the corresponding finger and sends them to the MCU processor (25) in the human-computer interaction interface (2);
(2.4) the MCU processor (25) gives a rehabilitation effect evaluation grade according to the received information (a purely illustrative grading sketch follows this claim).
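Claim 10 does not disclose how the MCU processor (25) maps the received sensor information to a grade in step (2.4); the following is a purely illustrative Python sketch in which the grade is derived from how closely the measured bend angles track target angles. The completion-ratio metric and the thresholds are invented for illustration only.

```python
def assess_grade(achieved_bend, target_bend, tol=0.15):
    """Return a coarse grade from how closely the measured bend angles
    (degrees) track the prompted action's target angles."""
    ratios = [min(a / t, 1.0) for a, t in zip(achieved_bend, target_bend) if t > 0]
    if not ratios:
        return "no target"
    score = sum(ratios) / len(ratios)   # mean completion ratio in [0, 1]
    if score >= 1.0 - tol:
        return "good"
    if score >= 0.6:
        return "moderate"
    return "poor"

print(assess_grade([70, 75], [90, 90]))  # two prompted fingers -> 'moderate'
```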
CN202210114284.1A 2022-01-30 2022-01-30 Hand full-fingered rehabilitation training and evaluation system based on artificial intelligence technology Active CN114504468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210114284.1A CN114504468B (en) 2022-01-30 2022-01-30 Hand full-fingered rehabilitation training and evaluation system based on artificial intelligence technology

Publications (2)

Publication Number Publication Date
CN114504468A (en) 2022-05-17
CN114504468B (en) 2023-08-08

Family

ID=81551181

Country Status (1)

Country Link
CN (1) CN114504468B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003000569A (en) * 2001-06-18 2003-01-07 Fumio Nogata Robot for aiding finger locomotion function recovery
CN1568170A (en) * 2001-09-10 2005-01-19 新纪元创新有限公司 Apparatus, method and computer program product to produce or direct movements in synergic timed correlation with physiological activity
CN102138860A (en) * 2011-01-10 2011-08-03 西安交通大学 Intelligentized rehabilitation training equipment for hand functions of patients suffering from cerebral injury
WO2018188480A1 (en) * 2017-04-14 2018-10-18 The Chinese University Of Hongkong Flexibly driven robotic hands
CN107157705A (en) * 2017-05-09 2017-09-15 京东方科技集团股份有限公司 rehabilitation system and method
CN111631907A (en) * 2020-05-31 2020-09-08 天津大学 Cerebral apoplexy patient hand rehabilitation system based on brain-computer interaction hybrid intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Key Technologies for Motor-Imagery-Based Brain-Computer Interfaces; Xie Zhirong; Master's thesis, Chongqing University of Posts and Telecommunications; full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant