CN112086165B - Upper limb rehabilitation monitoring method and system based on deep learning - Google Patents


Info

Publication number
CN112086165B
CN112086165B (application CN202010934782.1A)
Authority
CN
China
Prior art keywords: layer, rehabilitation, data, phase, forgetting
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN202010934782.1A
Other languages
Chinese (zh)
Other versions
CN112086165A (en)
Inventor
肖甫
刘海猛
盛碧云
周剑
戴纪馨
程钲评
Current Assignee (the listed assignees may be inaccurate)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (an assumption, not a legal conclusion)
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202010934782.1A priority Critical patent/CN112086165B/en
Publication of CN112086165A publication Critical patent/CN112086165A/en
Application granted granted Critical
Publication of CN112086165B publication Critical patent/CN112086165B/en


Classifications

    • G06K17/0029 — Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00–G06K15/00, with arrangements for transferring data to distant stations, specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/08 — Neural network learning methods
    • G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G16H20/30 — ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising


Abstract

An upper limb rehabilitation monitoring method and system based on deep learning recognize upper limb rehabilitation actions with a pre-trained convolutional neural network (CNN), without requiring the user to bind any equipment or tag. Compared with action recognition systems that require carried equipment or tags, sensing through a passive tag matrix is non-intrusive, does not invade the user's privacy, and improves the user experience. Only a single reader, a single antenna, and 9 commercial passive RFID tags are used; the equipment is cheap and simple, and neither professional equipment nor professional operators are required. Because a convolutional neural network from deep learning performs the recognition, the latency of action recognition is greatly reduced. Experiments carried out in an indoor room, simulating a home rehabilitation exercise scene, show that the recognition accuracy of rehabilitation actions reaches 97%.

Description

Upper limb rehabilitation monitoring method and system based on deep learning
Technical Field
The invention relates to the technical field of deep learning and Internet of things, in particular to an upper limb rehabilitation monitoring method and system based on deep learning.
Background
At present there is much research on motion recognition. Among approaches based on wearable or dedicated devices, uWave uses a single triaxial acceleration sensor to recognize personalized gestures with high precision, FEMD uses a Kinect sensor to classify ten different gestures, and FEMO identifies the user's activity during physical exercise. In recent years, contactless gesture recognition based on Wi-Fi signals has attracted much research attention: WiGest detects basic raw gestures in a device-free manner, WiFinger detects fine-grained gestures from CSI changes, and WiTrack recognizes movements of the human body and simple gestures using USRP devices.
The above are the most popular gesture recognition technologies at present. Some methods are contact-based and require the user to wear or attach sensors or RFID tags; compared with contactless gesture recognition they are inconvenient to use, which limits their application scenarios, while some contactless methods require very expensive equipment. Because RFID offers small size, low cost, long service life, good penetrability, good reusability, robustness to contamination, and adaptability to various environments, it has become an important way of implementing action recognition.
Disclosure of Invention
The invention mainly aims to provide an upper limb rehabilitation monitoring method and system based on deep learning, which can efficiently identify the rehabilitation exercise actions of the upper limbs (shoulders, elbows and wrists) without the human body carrying any equipment or tag.
An upper limb rehabilitation monitoring method based on deep learning comprises the following steps:
Step 1: arranging a passive RFID tag matrix and collecting phase information data of rehabilitation actions;
Step 2: preprocessing the phase information acquired in step 1;
Step 3: acquiring phase information data related to rehabilitation actions from the preprocessed signals;
Step 4: making the phase information data into a rehabilitation action data set, and training on it with a deep learning convolutional neural network (CNN) to obtain a model suited to rehabilitation actions;
Step 5: the user starts rehabilitation training and is monitored in real time: the tag matrix collects phase information which, after the processing of steps 2 and 3, is sent to the CNN model obtained in step 4 to identify the user's rehabilitation action;
Step 6: according to the result of step 5, the user's series of rehabilitation actions is scored and rehabilitation advice is given, combining information such as the idle time between actions and the duration of each action.
Further, in step 1, the RFID tag matrix consists of 9 tags arranged in a staggered pattern of three rows and three columns, with a spacing of 12.5 cm between adjacent tags; the phase information of the tags in the matrix is obtained through communication between the reader and the tags. The rehabilitation actions comprise: elbow joint flexion/extension, wrist joint flexion/extension, shoulder joint flexion/extension, shoulder joint adduction/abduction, and shoulder joint internal/external rotation.
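As a rough sketch of this layout, the snippet below generates planar coordinates for a 3 × 3 tag matrix with 12.5 cm spacing. The exact staggered offsets are not specified in the text, so a plain grid is generated by default, and a hypothetical `stagger` parameter (not from the patent) shifts odd rows:

```python
SPACING_CM = 12.5  # tag spacing stated in the text

def tag_positions(rows=3, cols=3, spacing=SPACING_CM, stagger=0.0):
    """Return (tag_id, x_cm, y_cm) tuples for a rows x cols tag matrix."""
    positions = []
    tag_id = 0
    for r in range(rows):
        # optional horizontal shift of odd rows to approximate a stagger
        x_off = stagger if r % 2 == 1 else 0.0
        for c in range(cols):
            positions.append((tag_id, c * spacing + x_off, r * spacing))
            tag_id += 1
    return positions

layout = tag_positions()
```

With `stagger=SPACING_CM / 2`, odd rows shift by half the tag spacing, which is one plausible reading of "arranged in a staggered manner".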
Further, in the step 2, preprocessing the acquired phase information, specifically including the following steps:
Step 2-1: phase unwrapping. For the phase information obtained by the reader from the tags, let $P = \{P_1, P_2, \ldots, P_n\}$, where $n$ is the number of tags and $P_i = \{x^{(1)}, x^{(2)}, \ldots, x^{(m)}\}$, with $x^{(k)}$ the phase value at sample point $k$. First the phase differences are calculated:

$$\Delta_k = x^{(k+1)} - x^{(k)}, \quad k = 1, \ldots, m-1$$

Then the trip (jump) points are processed, each difference yielding a correction:

$$D_k = \begin{cases} -2\pi, & \Delta_k > \pi \\ +2\pi, & \Delta_k < -\pi \\ 0, & \text{otherwise} \end{cases}$$

The corrections are stored in $D(P_i) = \{D_1, D_2, \ldots, D_{m-1}\}$ and cumulatively summed, $S_i = \{0,\, D_1,\, D_1+D_2,\, \ldots,\, D_1+D_2+\cdots+D_{m-1}\}$, finally giving the phase-unwrapped data: $\mathrm{Fin\_Phase} = P_i + S_i$.
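The unwrapping procedure of step 2-1 can be sketched in pure Python. This is a hedged reconstruction: it assumes each correction stores the ±2π jump compensation, the interpretation under which Fin_Phase = P_i + S_i holds:

```python
import math

def unwrap_phase(phase):
    """Unwrap a wrapped phase sequence: detect jumps larger than pi
    between consecutive samples and accumulate +/-2*pi corrections."""
    corrections = []
    for k in range(len(phase) - 1):
        diff = phase[k + 1] - phase[k]
        if diff > math.pi:
            corrections.append(-2 * math.pi)
        elif diff < -math.pi:
            corrections.append(2 * math.pi)
        else:
            corrections.append(0.0)
    # S = {0, D_1, D_1+D_2, ...}: running sum of corrections
    s, total = [0.0], 0.0
    for d in corrections:
        total += d
        s.append(total)
    # Fin_Phase = P + S
    return [p + c for p, c in zip(phase, s)]
```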
Step 2-2: the phase curve is smoothed by moving-average filtering, using the matlab function smooth(Phase, 'moving') with the default window width of 5. With input y = Fin_Phase and output yy, the calculation is:

$$yy(1)=y(1),\quad yy(2)=\frac{y(1)+y(2)+y(3)}{3},\quad yy(k)=\frac{y(k-2)+y(k-1)+y(k)+y(k+1)+y(k+2)}{5}\ \ (3 \le k \le m-2),$$
$$yy(m-1)=\frac{y(m-2)+y(m-1)+y(m)}{3},\quad yy(m)=y(m)$$
Step 2-3: data normalization. The matlab function mapminmax(x, ymin, ymax) normalizes the data x = yy to the interval [ymin, ymax], where ymin and ymax are the interval minimum and maximum respectively (by default [-1, 1]):

$$y = \frac{(y_{\max}-y_{\min})(x-x_{\min})}{x_{\max}-x_{\min}} + y_{\min}$$
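A minimal pure-Python stand-in for the two MATLAB calls in steps 2-2 and 2-3 (smooth with the 'moving' method, and mapminmax). The end-point handling of smooth, where the averaging window shrinks symmetrically near the boundaries, follows MATLAB's documented behavior:

```python
def smooth_moving(y, width=5):
    """Centered moving average like MATLAB smooth(y, 'moving'):
    the window shrinks symmetrically near the sequence ends."""
    n = len(y)
    half = width // 2
    out = []
    for k in range(n):
        w = min(k, n - 1 - k, half)  # largest symmetric window that fits
        seg = y[k - w:k + w + 1]
        out.append(sum(seg) / len(seg))
    return out

def mapminmax(x, ymin=-1.0, ymax=1.0):
    """Min-max normalization to [ymin, ymax], like MATLAB mapminmax."""
    xmin, xmax = min(x), max(x)
    return [(ymax - ymin) * (v - xmin) / (xmax - xmin) + ymin for v in x]
```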
further, in the step 3, the data obtained in the step 2 is segmented, and phase information related to rehabilitation action is obtained; specifically, a characteristic waveform related to rehabilitation actions is obtained by waveform segmentation of a Savitzky-Golay filter, and the segmentation method comprises the following steps:
$$G_m = \frac{1}{F} \sum_{k=-n}^{n} A_k\, x_{m+k}$$

where $n$ and $x_k$ are the sliding half-window length and the $k$th sample point respectively, $m$ is the window index, and $A_k$ and $F$ are the convolution weights and normalization factor; points in $G_m$ below a certain threshold are filtered out, so that the phase information related to the motion is cut out.
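The segmentation of step 3 can be sketched as below. The patent does not give the filter's window length or weights, so the classic quadratic, window-5 Savitzky-Golay coefficients (-3, 12, 17, 12, -3)/35 are assumed, with border samples clamped:

```python
# Classic window-5, quadratic Savitzky-Golay convolution weights
SG_WEIGHTS = [-3.0, 12.0, 17.0, 12.0, -3.0]
SG_NORM = 35.0

def savgol_segment(x, threshold):
    """Smooth x with the Savitzky-Golay convolution, then keep the
    indices where |G_m| >= threshold (the action-related samples)."""
    n = len(x)
    g = []
    for m in range(n):
        acc = 0.0
        for k, a in enumerate(SG_WEIGHTS):
            idx = min(max(m + k - 2, 0), n - 1)  # clamp at the borders
            acc += a * x[idx]
        g.append(acc / SG_NORM)
    kept = [i for i, v in enumerate(g) if abs(v) >= threshold]
    return g, kept
```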
Further, the step 4 of preparing a data set from the phase information data related to the rehabilitation motion comprises the following steps:
Step 4-1: constructing a phase matrix: an m × n matrix P is generated, where m is the number of sampling points and n is the number of tags, that is:

$$P=\begin{pmatrix} x_1^{(1)} & x_2^{(1)} & \cdots & x_n^{(1)}\\ \vdots & \vdots & \ddots & \vdots\\ x_1^{(m)} & x_2^{(m)} & \cdots & x_n^{(m)} \end{pmatrix}$$

where $x_j^{(i)}$ is the phase of tag $j$ at sample point $i$.
Step 4-2: resampling. The data are resampled and the sub-matrix is resized, treating it as a 2-d image: the duration $t_{end} - t_{start}$ of every action is calculated, the intermediate value T is taken, and the matlab function resample(phase, T, length) is applied, where length is the original length of the matrix and T is the length after resampling.
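A simple stand-in for the resampling of step 4-2, using linear interpolation to bring every action's phase sequence to a common length T. MATLAB's resample uses polyphase filtering instead, so this is only an approximation of the idea:

```python
def resample_length(x, target_len):
    """Resample a sequence to target_len samples by linear interpolation,
    so that phase matrices of different actions share one dimension."""
    n = len(x)
    if target_len == 1:
        return [x[0]]
    out = []
    for i in range(target_len):
        pos = i * (n - 1) / (target_len - 1)  # fractional source index
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(x[lo] * (1 - frac) + x[hi] * frac)
    return out
```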
Further, in step 4, the CNN model includes: the device comprises an input layer, a convolution layer, a pooling layer, a forgetting layer, a flattening layer, a full-connection layer and a classification layer; the convolutional layer includes: a convolution layer one and a convolution layer two; the pooling layer comprises a pooling layer I and a pooling layer II; the forgetting layer comprises a forgetting layer I, a forgetting layer II and a forgetting layer III;
the front end of the CNN model is provided with an input layer, the input layer is connected with a first convolution layer, and the first convolution layer is connected with a first pooling layer; the first pooling layer is connected with the first forgetting layer; the forgetting layer I is connected with the convolution layer II; the convolution layer II is connected with the pooling layer II; the pooling layer II is connected with the forgetting layer II; the forgetting layer II is connected with the flattening layer; the flattening layer is connected with the full-connection layer; the full connection layer is connected with the forgetting layer III; the forgetting layer III is connected with the classification layer;
the input layer is used for finishing the processing of input data; the convolution layer uses convolution kernel to carry out feature extraction and feature mapping; the pooling layer is used for reducing the number of parameters and reducing the complexity of the network; the forgetting layer is used for preventing overfitting in training and accelerating the speed of training convergence; the flattening layer is used for compressing data into a one-dimensional array; the fully-connected layer maps the learned distributed feature representation to a sample label space; the classification layer is used for completing classification of rehabilitation actions.
Furthermore, in step 6, the user's series of rehabilitation actions is scored and rehabilitation suggestions are given. After a rehabilitation action is recognized, scoring judges whether the action is standard; if it is not, the score is computed from the recognized angle and position information of the action together with information such as the idle time between actions and the action duration, and professional rehabilitation suggestions are given.
An upper limb rehabilitation monitoring system based on deep learning, comprising:
the commercial passive RFID tag is used for collecting phase data information of a user;
the antenna is used for sending signals to the RFID label, receiving the signals from the RFID label and sending the signals to the reader;
a reader for communicating with the tag, modulating and demodulating the signal, and decoding the data packet;
the data processing unit is used for carrying out phase extraction, phase expansion, phase smoothing, normalization processing and Savitzky-Golay filter waveform segmentation processing on original data;
the data storage unit is used for storing the data processed by the data processing unit and making the data into a data set;
the CNN model unit is used for constructing and training a suitable CNN model based on a large amount of phase data of rehabilitation actions;
and the prompting unit is used for scoring the user's rehabilitation condition and giving rehabilitation suggestions informed by professional rehabilitation advice.
Compared with the prior art, the invention has the following beneficial effects:
the invention relates to an upper limb rehabilitation detection method and system based on deep learning, which can be used for identifying the rehabilitation action of an upper limb by utilizing a pre-trained CNN model in a neural network without binding any equipment or label by a user. Compared with the action recognition system which needs to carry equipment or labels, the action recognition system provided by the invention can not invade users by using the label matrix, can not invade the privacy of the users, and can improve the experience of the users. The invention only uses a single reader, a single antenna and 9 commercial passive RFID tags, and the equipment is cheap and simple and has no requirements of professional equipment and professional operators. At present, some systems mainly use template matching to distinguish different gestures, so that the waiting time of motion recognition is usually very high, and therefore the method adopts a convolutional neural network in deep learning to recognize the motion, so that the waiting time of motion recognition can be greatly reduced. The experimental verification is realized in an indoor room, the rehabilitation exercise scene is simulated at home, and the recognition precision of the rehabilitation action can reach 97%.
Drawings
Fig. 1 is a schematic structural diagram of the upper limb rehabilitation monitoring system in the embodiment of the present invention.
Fig. 2 is a tag matrix layout diagram of the upper limb rehabilitation monitoring system according to the embodiment of the invention.
Fig. 3 is a schematic diagram of a tag matrix layout of the upper limb rehabilitation monitoring system in the embodiment of the present invention.
Fig. 4 is a schematic diagram of the rehabilitation action monitored by the upper limb in the embodiment of the invention.
Fig. 5 is a schematic structural diagram of the CNN model according to the embodiment of the present invention.
Detailed Description
The technical scheme of the invention is explained below in further detail with reference to the drawings.
An upper limb rehabilitation monitoring method based on deep learning comprises the following steps:
Step 1: arranging a passive RFID tag matrix and acquiring phase information data of rehabilitation actions.
Step 2: preprocessing the phase information acquired in step 1.
Step 3: acquiring phase information data related to the rehabilitation activity from the preprocessed signals.
Step 4: making the phase information data into a rehabilitation action data set, and training on it with a deep learning convolutional neural network (CNN) to obtain a model suited to rehabilitation actions.
Step 5: the user starts rehabilitation training and is monitored in real time: the tag matrix collects phase information which, after the processing of steps 2 and 3, is sent to the CNN model obtained in step 4 to identify the user's rehabilitation action.
Step 6: according to the result of step 5, the user's series of rehabilitation actions is scored and rehabilitation suggestions are given, combining information such as the idle time between actions and the duration of each action.
Referring to fig. 1, an upper limb rehabilitation monitoring system based on deep learning includes: 9 passive RFID tags, an antenna, a reader, a data processing unit and a data storage unit. The passive RFID tag matrix collects phase data information of the rehabilitation action of the user, and the position layout of the tags is shown in figures 2 and 3; the phase information data of the rehabilitation motion is acquired to include standard rehabilitation motion and nonstandard rehabilitation motion, in this embodiment, motion data of the five standard rehabilitation motions, that is, motion data when the motion amplitude is 90 °, and motion data of the nonstandard rehabilitation motion are acquired, and in this embodiment, motion data of 30 °, 45 °, and 60 ° are acquired by taking elbow joint flexion/extension as an example. The angle schematic diagram of the acquisition position is shown in fig. 2; the rehabilitation movements in the examples include elbow joint flexion/extension, wrist joint flexion/extension, shoulder joint adduction/abduction, and shoulder joint rotation in/out, as shown in fig. 4. The antenna is used for sending signals to the passive RFID tags, receiving the signals from the passive RFID tags and sending the signals to the reader; a reader for modulating and demodulating the signal and decoding the data packet; the data processing unit is used for carrying out phase extraction, phase expansion, phase smoothing, normalization processing and Savitzky-Golay filter waveform segmentation processing on the original data; the data storage unit is used for storing the data processed by the data processing unit and making the data into a data set; the CNN model unit trains a suitable CNN model based on a large amount of phase data of rehabilitation movements, as shown in fig. 5. And the prompting unit is used for prompting whether the rehabilitation action of the user meets the requirement.
The operation performed by the data processing unit in the CPU control unit includes:
Phase unwrapping: the phase retrieved from the reader is wrapped, and therefore phase unwrapping is required. Let $P = \{P_1, P_2, \ldots, P_n\}$, where $n$ is the number of tags and $P_i = \{x^{(1)}, x^{(2)}, \ldots, x^{(m)}\}$, with $x^{(k)}$ the phase value at sample point $k$. The method comprises the following steps.

Calculating the phase differences:

$$\Delta_k = x^{(k+1)} - x^{(k)}, \quad k = 1, \ldots, m-1$$

Processing the trip points, each difference yielding a correction:

$$D_k = \begin{cases} -2\pi, & \Delta_k > \pi \\ +2\pi, & \Delta_k < -\pi \\ 0, & \text{otherwise} \end{cases}$$

The corrections are stored in $D(P_i) = \{D_1, D_2, \ldots, D_{m-1}\}$ and cumulatively summed, $S_i = \{0,\, D_1,\, D_1+D_2,\, \ldots,\, D_1+D_2+\cdots+D_{m-1}\}$, finally giving the phase-unwrapped data: $\mathrm{Fin\_Phase} = P_i + S_i$.
Moving average filtering smoothing: using matlab's own function smooth(Phase, 'moving') with the default window width of 5. With input y = Fin_Phase and output yy, the calculation is:

$$yy(1)=y(1),\quad yy(2)=\frac{y(1)+y(2)+y(3)}{3},\quad yy(k)=\frac{y(k-2)+y(k-1)+y(k)+y(k+1)+y(k+2)}{5}\ \ (3 \le k \le m-2),$$
$$yy(m-1)=\frac{y(m-2)+y(m-1)+y(m)}{3},\quad yy(m)=y(m)$$
data normalization processing: the matlab self-band function mapminmax (x, ymin, ymax) is used to normalize the data x ═ yy to the interval [ ymin, ymax ], and by default to the interval [ -1,1], where ymin, ymax represent the interval minimum and maximum values, respectively. The calculation method is as follows:
Figure BDA0002671560390000093
and 3, acquiring phase information related to the rehabilitation action, and performing waveform segmentation by using a Savitzky-Golay filter to obtain a characteristic waveform related to the rehabilitation action. The cutting method comprises the following steps:
$$G_m = \frac{1}{F} \sum_{k=-n}^{n} A_k\, x_{m+k}$$

where $n$ and $x_k$ are the sliding half-window length and the $k$th sample point respectively, $m$ is the window index, and $A_k$ and $F$ are the convolution weights and normalization factor; points in $G_m$ below a certain threshold are filtered out, thereby slicing out the data related to the action.
In step 4, a rehabilitation action data set is made, specifically:
Constructing a phase matrix: an m × n matrix P is generated, where m is the number of sampling points and n is the number of tags, that is:

$$P=\begin{pmatrix} x_1^{(1)} & x_2^{(1)} & \cdots & x_n^{(1)}\\ \vdots & \vdots & \ddots & \vdots\\ x_1^{(m)} & x_2^{(m)} & \cdots & x_n^{(m)} \end{pmatrix}$$

where $x_j^{(i)}$ is the phase of tag $j$ at sample point $i$.
Resampling: because the duration of each action differs, the data must be resampled in order to normalize the phase matrix dimensions. Similar to resizing in image processing, the sub-matrix, which can be considered a 2-d image, is resized. The calculation is as follows: the duration $t_{end} - t_{start}$ of every action is computed, the intermediate value T is taken, and the matlab function resample(phase, T, length) is applied, where length is the original length of the matrix and T is the resampled length.
The CNN model includes: input layer, convolution layer, pooling layer, forgetting layer, flattening layer, full connection layer, classification layer. The convolutional layer comprises: a first convolution layer and a second convolution layer; the pooling layer comprises a pooling layer I and a pooling layer II; the forgetting layer comprises a forgetting layer I, a forgetting layer II and a forgetting layer III.
The front end of the CNN model is provided with an input layer, the input layer is connected with a first convolution layer, and the first convolution layer is connected with a first pooling layer; the first pooling layer is connected with the first forgetting layer; the forgetting layer I is connected with the convolution layer II; the convolution layer II is connected with the pooling layer II; the pooling layer II is connected with the forgetting layer II; the forgetting layer II is connected with the flattening layer; the flattening layer is connected with the full connecting layer; the full connection layer is connected with the forgetting layer III; the forgetting layer III is connected with the classification layer.
The input layer completes the processing of input data; the convolution layers use convolution kernels for feature extraction and feature mapping; the pooling layers effectively reduce the number of parameters and hence the network complexity; the forgetting (dropout) layers prevent overfitting during training and accelerate training convergence; the flattening layer compresses the data into a one-dimensional array; the fully connected layer maps the learned distributed feature representation to the sample label space; the classification layer completes the classification of rehabilitation actions.
And feeding the processed training data set into the constructed CNN model, wherein the CNN model is continuously learned based on a large number of data samples in the training data set, and finally training a CNN model meeting requirements.
When a user performs rehabilitation exercise, the tag matrix collects the user's rehabilitation actions. The data pass through the data processing unit, comprising phase unwrapping, moving-average filtering and smoothing, data normalization, and Savitzky-Golay waveform segmentation; a rehabilitation action data set belonging to the user is then produced and stored in a database. The user's rehabilitation action data are next sent to the trained rehabilitation-action CNN model for action prediction, and the recognition result is output. The result is passed to the prompting unit, which evaluates the user's rehabilitation exercise by combining factors such as the angle, duration, and speed of the rehabilitation actions, and offers suggestions for the user's rehabilitation exercise informed by professional medical advice. Experimental verification in an indoor closed room, simulating the scene of a user performing rehabilitation exercise at home, achieved a rehabilitation action recognition accuracy of 97%, demonstrating the feasibility of the method.
The above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited to the above embodiment, but equivalent modifications or changes made by those skilled in the art according to the disclosure of the present invention should be included in the scope of the present invention as set forth in the appended claims.

Claims (5)

1. An upper limb rehabilitation monitoring method based on deep learning is characterized in that: the method comprises the following steps:
step 1: arranging a passive RFID tag matrix, and acquiring phase information data of rehabilitation actions;
Step 2: preprocessing the phase information acquired in step 1;
Step 3: acquiring phase information data related to rehabilitation actions from the preprocessed signals;
in the step 3, the data obtained in the step 2 are segmented, and phase information related to rehabilitation actions is obtained; specifically, a characteristic waveform related to rehabilitation action is obtained by waveform segmentation of a Savitzky-Golay filter, and the segmentation method comprises the following steps:
$$G_m = \frac{1}{F} \sum_{k=-n}^{n} A_k\, x_{m+k}$$

wherein $n$ and $x_k$ are the sliding half-window length and the $k$th sample point respectively, $m$ is the window index, and $A_k$ and $F$ are the convolution weights and normalization factor; points in $G_m$ below a certain threshold are filtered out, thereby slicing out the phase information related to the motion;
Step 4: making the phase information data into a rehabilitation action data set, and training the rehabilitation action data with a deep learning convolutional neural network (CNN) to obtain a model suited to rehabilitation actions;
the phase information data related to rehabilitation action obtained in the step 4 is made into a data set, and the method comprises the following steps:
Step 4-1: constructing a phase matrix to generate an m × n matrix P, where m is the number of sampling points and n is the number of labels, that is:

$$P=\begin{pmatrix} x_1^{(1)} & x_2^{(1)} & \cdots & x_n^{(1)}\\ \vdots & \vdots & \ddots & \vdots\\ x_1^{(m)} & x_2^{(m)} & \cdots & x_n^{(m)} \end{pmatrix}$$

Step 4-2: resampling: the data are resampled and the sub-matrix is resized, treating it as a 2-d image; the duration $t_{end} - t_{start}$ of all actions is calculated, the intermediate value T of the matrix is taken, and the matlab function resample(phase, T, length) is used, where length is the original length of the matrix and T is the length after resampling;
in step 4, the CNN model includes: an input layer, convolution layers, pooling layers, forgetting (dropout) layers, a flattening layer, a fully connected layer and a classification layer; the convolution layers comprise a first and a second convolution layer; the pooling layers comprise a first and a second pooling layer; the forgetting layers comprise a first, a second and a third forgetting layer;
the input layer sits at the front end of the CNN model and is connected to the first convolution layer; the first convolution layer is connected to the first pooling layer; the first pooling layer to the first forgetting layer; the first forgetting layer to the second convolution layer; the second convolution layer to the second pooling layer; the second pooling layer to the second forgetting layer; the second forgetting layer to the flattening layer; the flattening layer to the fully connected layer; the fully connected layer to the third forgetting layer; and the third forgetting layer to the classification layer;
the input layer pre-processes the input data; the convolution layers perform feature extraction and feature mapping with convolution kernels; the pooling layers reduce the number of parameters and the network complexity; the forgetting layers prevent overfitting during training and accelerate training convergence; the flattening layer compresses the data into a one-dimensional array; the fully connected layer maps the learned distributed feature representation to the sample label space; the classification layer completes the classification of the rehabilitation actions;
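For illustration only, the layer stack described above can be sketched as a Keras model. The filter counts, kernel sizes, dropout rates and input dimensions are not specified in the claim and are assumed here.

```python
# A minimal Keras sketch of the claimed layer stack:
# input -> conv1 -> pool1 -> dropout1 -> conv2 -> pool2 -> dropout2
# -> flatten -> dense -> dropout3 -> softmax.
# All hyperparameters below are illustrative assumptions.
from tensorflow.keras import layers, models

def build_cnn(T=60, n_tags=9, n_classes=5):
    return models.Sequential([
        layers.Input(shape=(T, n_tags, 1)),              # resampled phase "image"
        layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 1)),
        layers.Dropout(0.25),                            # "forgetting layer" = dropout
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 1)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),   # one class per action
    ])
```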
step 5: the user starts rehabilitation training; the user's rehabilitation actions are monitored in real time, the tag matrix collects phase information, and after the processing of steps 2 and 3 the resulting phase information data is fed to the CNN model obtained in step 4 to identify the user's rehabilitation action;
step 6: according to the result of step 5, the user's series of rehabilitation actions is scored and rehabilitation suggestions are given, combining the pause time between actions and the duration of each action.
2. The upper limb rehabilitation monitoring method based on deep learning of claim 1, characterized in that: in step 1, the RFID tag matrix consists of 9 tags arranged in sequence in a staggered pattern, the matrix has three rows and three columns, adjacent tags are spaced 12.5 cm apart, and the phase information of the tags in the matrix is obtained through communication between the reader and the tags; the rehabilitation actions comprise: flexion/extension of the elbow joint, flexion/extension of the wrist joint, flexion/extension of the shoulder joint, adduction/abduction of the shoulder joint, and internal/external rotation of the shoulder joint.
3. The upper limb rehabilitation monitoring method based on deep learning of claim 1, characterized in that: in the step 2, the collected phase information is preprocessed, and the method specifically comprises the following steps:
step 2-1: phase unwrapping: for phase information obtained by reading a tag from a reader, let P be { P ═ P 1 ,P 2 ,…,P n N is the number of labels, P i ={x (1) ,x (2) ,…,x (m) X is the phase value, x (m) Is the phase value at sample point m, then the phase difference is calculated:
Figure FDA0003735590760000031
jump (trip) points are then processed, each difference being replaced by the 2π correction it implies:

$$D_i \leftarrow \begin{cases} -2\pi, & D_i > \pi \\ +2\pi, & D_i < -\pi \\ 0, & \text{otherwise} \end{cases}$$
the corrected values are stored in D(Pᵢ) = {D₁, D₂, …, D_{m-1}} and then cumulatively summed, Sᵢ = {0, D₁, D₁ + D₂, …, D₁ + D₂ + ⋯ + D_{m-1}}; the unwrapped phase data is finally obtained as Fin_Phase = Pᵢ + Sᵢ.
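Step 2-1 above amounts to the standard phase-unwrapping procedure; a minimal NumPy sketch (equivalent in effect to `numpy.unwrap` for single wraps):

```python
import numpy as np

def unwrap_phase(phase):
    """Unwrap a 1-D phase sequence as in step 2-1: compute successive
    differences D_i, replace each jump larger than pi by the +/- 2*pi
    correction it implies, cumulatively sum the corrections (S_i), and
    add them back to the original samples (Fin_Phase = P_i + S_i)."""
    phase = np.asarray(phase, dtype=float)
    d = np.diff(phase)                          # D_i = x^(i+1) - x^(i)
    corr = np.where(d > np.pi, -2 * np.pi,      # wrap down
           np.where(d < -np.pi, 2 * np.pi,      # wrap up
                    0.0))
    s = np.concatenate(([0.0], np.cumsum(corr)))  # S_i
    return phase + s                              # Fin_Phase
```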
Step 2-2: the Phase curve is smoothed by moving average filtering, a matlab function smooth (Phase 'moving') is used, the default window width is 5, the calculation method is as follows, input y is Fin _ Phase, and output yy:
Figure FDA0003735590760000042
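A Python equivalent of the moving-average smoothing, mimicking MATLAB `smooth(y, 'moving')` including its shrinking end windows (a sketch, not the patented implementation):

```python
import numpy as np

def smooth_moving(y, span=5):
    """Centred moving average mimicking MATLAB smooth(y, 'moving'):
    a full 'span'-point window in the middle, and symmetrically
    shrunken windows (1, 3, 5, ... points) at both ends."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    half = span // 2
    yy = np.empty(n)
    for i in range(n):
        k = min(i, n - 1 - i, half)      # largest symmetric half-window
        yy[i] = y[i - k:i + k + 1].mean()
    return yy
```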
step 2-3: data normalization: the data x = yy is normalized to the interval [ymin, ymax] using the matlab function mapminmax(x, ymin, ymax), where ymin and ymax are the minimum and maximum of the target interval ([−1, 1] by default); the calculation formula is:

$$y = \frac{(y_{\max} - y_{\min})(x - x_{\min})}{x_{\max} - x_{\min}} + y_{\min}$$
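The `mapminmax` rescaling can be reproduced in a few lines of NumPy (a sketch of the formula above):

```python
import numpy as np

def mapminmax(x, ymin=-1.0, ymax=1.0):
    """Rescale x linearly to [ymin, ymax], as MATLAB's mapminmax:
    y = (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(), x.max()
    return (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin
```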
4. The upper limb rehabilitation monitoring method based on deep learning of claim 1, characterized in that: in step 6, the user's rehabilitation actions are scored and rehabilitation advice is given; after a rehabilitation action is identified, scoring judges whether the action is standard, and if it is not, a score is assigned according to the angle and position information of the identified action together with the pause time between actions and the duration of each action, and professional rehabilitation advice is given.
5. An upper limb rehabilitation monitoring system based on deep learning, characterized in that: the system comprises:
a commercial passive RFID tag for collecting the user's phase data information;
the antenna is used for sending signals to the RFID label, receiving the signals from the RFID label and sending the signals to the reader;
a reader for communicating with the tag, modulating and demodulating the signal, and decoding the data packet;
the data processing unit is used for carrying out phase extraction, phase expansion, phase smoothing, normalization processing and Savitzky-Golay filter waveform segmentation processing on the original data;
the segmented data yields the phase information related to the rehabilitation actions; specifically, a characteristic waveform related to the rehabilitation action is obtained by waveform segmentation with a Savitzky-Golay filter, the segmentation method being:

$$G_m = \frac{1}{F_1} \sum_{k=1}^{n} A_1 x_k$$

where n and $x_k$ are the sliding-window length and the kth sample point respectively, m is the number of windows, and $A_1$ and $F_1$ are weight values; points in $G_m$ below a certain threshold are filtered out, so as to separate the phase information related to the motion;
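A hedged sketch of Savitzky-Golay-based segmentation using SciPy's `savgol_filter`: the window length, polynomial order and threshold are assumed values, and using the magnitude of the first derivative as the thresholded activity measure is one plausible reading of the segmentation step, not the patent's exact method.

```python
import numpy as np
from scipy.signal import savgol_filter

def segment_motion(phase, window=11, poly=2, thresh=0.05):
    """Illustrative segmentation: Savitzky-Golay-smooth the phase
    stream, take the magnitude of its first derivative as an activity
    measure, and keep only samples where it exceeds a threshold."""
    g = savgol_filter(phase, window_length=window, polyorder=poly)
    activity = np.abs(np.gradient(g))
    mask = activity > thresh      # True where motion is present
    return g, mask
```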
The data storage unit is used for storing the data processed by the data processing unit and making the data into a data set;
the phase information data related to the rehabilitation actions is made into a data set by the following steps:
step 4-1: constructing a phase matrix: generate an m × n matrix P, where m is the number of sampling points and n is the number of tags, that is:

$$P = \begin{pmatrix} x_1^{(1)} & x_2^{(1)} & \cdots & x_n^{(1)} \\ x_1^{(2)} & x_2^{(2)} & \cdots & x_n^{(2)} \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{(m)} & x_2^{(m)} & \cdots & x_n^{(m)} \end{pmatrix}$$

where $x_j^{(i)}$ is the phase value of tag j at sampling point i;
step 4-2: resampling: resample the data and adjust the size of each sub-matrix so that it can be treated as a 2-D image; compute the duration $t_{end} - t_{start}$ of every action, take the middle (median) value T of these durations, and apply the matlab function resample(phase, T, length), where length is the original length of the matrix and T is the length after resampling;
the CNN model unit is used for constructing and training a suitable CNN model based on a large amount of phase data of rehabilitation actions;
the CNN model comprises: an input layer, convolution layers, pooling layers, forgetting (dropout) layers, a flattening layer, a fully connected layer and a classification layer; the convolution layers comprise a first and a second convolution layer; the pooling layers comprise a first and a second pooling layer; the forgetting layers comprise a first, a second and a third forgetting layer;
the input layer sits at the front end of the CNN model and is connected to the first convolution layer; the first convolution layer is connected to the first pooling layer; the first pooling layer to the first forgetting layer; the first forgetting layer to the second convolution layer; the second convolution layer to the second pooling layer; the second pooling layer to the second forgetting layer; the second forgetting layer to the flattening layer; the flattening layer to the fully connected layer; the fully connected layer to the third forgetting layer; and the third forgetting layer to the classification layer;
the input layer pre-processes the input data; the convolution layers perform feature extraction and feature mapping with convolution kernels; the pooling layers reduce the number of parameters and the network complexity; the forgetting layers prevent overfitting during training and accelerate training convergence; the flattening layer compresses the data into a one-dimensional array; the fully connected layer maps the learned distributed feature representation to the sample label space; the classification layer completes the classification of the rehabilitation actions;
and a prompting unit for scoring the user's rehabilitation condition and giving professional rehabilitation suggestions.
CN202010934782.1A 2020-09-08 2020-09-08 Upper limb rehabilitation monitoring method and system based on deep learning Active CN112086165B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010934782.1A CN112086165B (en) 2020-09-08 2020-09-08 Upper limb rehabilitation monitoring method and system based on deep learning


Publications (2)

Publication Number Publication Date
CN112086165A CN112086165A (en) 2020-12-15
CN112086165B true CN112086165B (en) 2022-08-19

Family

ID=73731608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010934782.1A Active CN112086165B (en) 2020-09-08 2020-09-08 Upper limb rehabilitation monitoring method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112086165B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315886B (en) * 2023-09-07 2024-04-12 安徽建筑大学 UWB radar-based method and device for detecting impending falling of personnel

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111584030A (en) * 2020-04-30 2020-08-25 天津大学 Idea control intelligent rehabilitation system based on deep learning and complex network and application


Also Published As

Publication number Publication date
CN112086165A (en) 2020-12-15

Similar Documents

Publication Publication Date Title
US10061389B2 (en) Gesture recognition system and gesture recognition method
Dang et al. Sensor-based and vision-based human activity recognition: A comprehensive survey
CN107153871B (en) Falling detection method based on convolutional neural network and mobile phone sensor data
CN111027487B (en) Behavior recognition system, method, medium and equipment based on multi-convolution kernel residual error network
Javeed et al. Wearable sensors based exertion recognition using statistical features and random forest for physical healthcare monitoring
CN106846729B (en) Tumble detection method and system based on convolutional neural network
CN105205436B (en) A kind of gesture recognition system based on forearm bioelectricity multisensor
CN108171278B (en) Motion pattern recognition method and system based on motion training data
Akhund et al. IoT based low-cost robotic agent design for disabled and Covid-19 virus affected people
CN110610158A (en) Human body posture identification method and system based on convolution and gated cyclic neural network
CN111199202B (en) Human body action recognition method and recognition device based on circulating attention network
CN112086165B (en) Upper limb rehabilitation monitoring method and system based on deep learning
Li et al. Convolutional neural networks (CNN) for indoor human activity recognition using Ubisense system
CN114384999B (en) User-independent myoelectric gesture recognition system based on self-adaptive learning
CN109711324A (en) Human posture recognition method based on Fourier transformation and convolutional neural networks
CN113300750A (en) Personnel identity authentication and handwritten letter identification method based on WIFI signal
Eyobu et al. A real-time sleeping position recognition system using IMU sensor motion data
CN114495265B (en) Human behavior recognition method based on activity graph weighting under multi-cross-domain scene
Avadut et al. A Deep Learning based IoT Framework for Assistive Healthcare using Gesture Based Interface
CN112801283B (en) Neural network model, action recognition method, device and storage medium
CN113242547B (en) Method and system for filtering user behavior privacy in wireless signal based on deep learning and wireless signal receiving and transmitting device
CN114764580A (en) Real-time human body gesture recognition method based on no-wearing equipment
CN117290773B (en) Amphibious personalized gesture recognition method and recognition system based on intelligent data glove
Zhang et al. An Improved Deep Convolutional LSTM for Human Activity Recognition Using Wearable Sensors
Sowmiya et al. Comparative Analysis of Various Hybrid Neural Network Models to Determine Human Activities using Inertial Measurement Units

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant