CN116458852B - Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot - Google Patents

Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot

Info

Publication number
CN116458852B
CN116458852B (application CN202310712324.7A)
Authority
CN
China
Prior art keywords
vector
association
matrix
state
time sequence
Prior art date
Legal status
Active
Application number
CN202310712324.7A
Other languages
Chinese (zh)
Other versions
CN116458852A (en)
Inventor
盛振文
王桂云
盛智
何静
王素琴
Current Assignee
Shandong Xiehe University
Original Assignee
Shandong Xiehe University
Priority date
Filing date
Publication date
Application filed by Shandong Xiehe University
Priority to CN202310712324.7A
Publication of CN116458852A
Application granted
Publication of CN116458852B
Legal status: Active


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 - Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/021 - Measuring pressure in heart or blood vessels
    • A61B5/103 - Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/22 - Ergometry; Measuring muscular strength or the force of a muscular blow
    • A61B5/48 - Other medical applications
    • A61B5/4836 - Diagnosis combined with treatment in closed-loop systems or methods
    • A61B5/4848 - Monitoring or testing the effects of treatment, e.g. of medication
    • A61B5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 - Details of waveform analysis
    • A61B5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 - Classification of physiological signals or data involving training the classification device
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 - INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S - SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 - Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 - Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Cardiology (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Vascular Medicine (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application relates to the field of intelligent rehabilitation training, and particularly discloses a rehabilitation training system and method based on a cloud platform and a lower limb rehabilitation robot. The method uploads joint angle values, moment values, heart rate values and blood pressure values at a plurality of predetermined time points to a cloud platform, arranges them respectively into a joint angle time sequence input vector, a moment time sequence input vector, a heart rate time sequence input vector and a blood pressure time sequence input vector, performs association coding on each pair of vectors, passes the resulting association matrices through a first convolutional neural network model and a second convolutional neural network model respectively to obtain an operation state feature matrix and a physiological state feature matrix, and finally passes the classification feature matrix obtained by fusing the operation state feature matrix and the physiological state feature matrix through a classifier to obtain a classification result, i.e., a label value representing the rehabilitation state. In this way, intelligent assessment and detection can be achieved.

Description

Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot
Technical Field
The application relates to the field of intelligent rehabilitation training, in particular to a rehabilitation training system and method based on a cloud platform and a lower limb rehabilitation robot.
Background
With the progress of global population aging, rehabilitation has become an important research direction. As an innovative rehabilitation device, the lower limb rehabilitation robot has been widely used in various rehabilitation therapies and plays a great role in improving patients' rehabilitation outcomes and quality of life.
However, traditional rehabilitation therapy generally relies on manual monitoring and evaluation of the patient's rehabilitation state, which is inefficient, yields data of poor precision, and makes it difficult to deliver personalized rehabilitation training according to the patient's actual needs, greatly limiting the optimization and popularization of rehabilitation therapy.
Accordingly, an optimized rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot is desired.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems. The embodiments of the application provide a rehabilitation training system and method based on a cloud platform and a lower limb rehabilitation robot. Joint angle values, moment values, heart rate values and blood pressure values at a plurality of predetermined time points are first uploaded to a cloud platform and respectively arranged into a joint angle time sequence input vector, a moment time sequence input vector, a heart rate time sequence input vector and a blood pressure time sequence input vector. Association coding is then performed to obtain an operation state association matrix and a physiological state association matrix. The operation state association matrix is passed through a first convolutional neural network model to obtain an operation state feature matrix, and the physiological state association matrix is passed through a second convolutional neural network model to obtain a physiological state feature matrix. Finally, the classification feature matrix obtained by fusing the operation state feature matrix and the physiological state feature matrix is passed through a classifier to obtain a classification result, i.e., a label value representing the rehabilitation state. In this way, intelligent assessment and detection can be achieved.
According to one aspect of the present application, there is provided a rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot, comprising:
the data acquisition module is used for acquiring joint angle values, moment values, heart rate values and blood pressure values at a plurality of preset time points in a preset time period acquired by a sensor group of the lower limb rehabilitation robot;
the data transmission module is used for uploading the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points to the cloud platform;
the data parameter time sequence arrangement module is used for arranging the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points into joint angle time sequence input vectors, moment time sequence input vectors, heart rate time sequence input vectors and blood pressure time sequence input vectors according to time dimensions respectively;
the association coding module is used for carrying out association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and carrying out association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix;
the running state bidirectional attention feature extraction module is used for passing the running state association matrix through a first convolution neural network model using a bidirectional attention mechanism to obtain a running state feature matrix;
the physiological state bidirectional attention feature extraction module is used for passing the physiological state association matrix through a second convolution neural network model using a bidirectional attention mechanism to obtain a physiological state feature matrix;
the feature fusion module is used for fusing the running state feature matrix and the physiological state feature matrix to obtain a classification feature matrix;
and the recovery state evaluation module is used for passing the classification feature matrix through a classifier to obtain a classification result, wherein the classification result is used for representing a label value of the recovery state.
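As a non-limiting illustration of the data acquisition, transmission and time-sequence arrangement modules above, the following sketch (Python with NumPy; all function and field names are hypothetical and not part of the patent) shows one way the per-time-point sensor readings collected over the predetermined time period could be arranged into the four time-series input vectors along the time dimension.

```python
import numpy as np

def arrange_time_series(samples):
    """Arrange per-time-point sensor readings into four time-series vectors.

    `samples` is a list of dicts, one per predetermined time point, e.g.
    {"joint_angle": 42.1, "torque": 11.3, "heart_rate": 88.0, "blood_pressure": 121.0}.
    Returns four 1-D arrays ordered along the time dimension.
    """
    joint_angle = np.array([s["joint_angle"] for s in samples], dtype=np.float64)
    torque      = np.array([s["torque"] for s in samples], dtype=np.float64)
    heart_rate  = np.array([s["heart_rate"] for s in samples], dtype=np.float64)
    blood_press = np.array([s["blood_pressure"] for s in samples], dtype=np.float64)
    return joint_angle, torque, heart_rate, blood_press
```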
In the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot, the association coding module comprises:
the first association coding unit is used for carrying out association coding on the joint angle time sequence input vector and the moment time sequence input vector by using a first association coding formula to obtain the running state association matrix;
wherein, the first association coding formula is:

$M_1 = X_1^{\top} \otimes X_2$

wherein $X_1$ represents the joint angle time sequence input vector, $X_1^{\top}$ represents the transposed vector of the joint angle time sequence input vector, $X_2$ represents the moment time sequence input vector, $M_1$ represents the running state association matrix, and $\otimes$ represents vector multiplication;
the second association coding unit is used for carrying out association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector by using a second association coding formula so as to obtain the physiological state association matrix;
wherein, the second association coding formula is:

$M_2 = X_3^{\top} \otimes X_4$

wherein $X_3$ represents the heart rate time sequence input vector, $X_3^{\top}$ represents the transposed vector of the heart rate time sequence input vector, $X_4$ represents the blood pressure time sequence input vector, $M_2$ represents the physiological state association matrix, and $\otimes$ represents vector multiplication.
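Interpreting the "vector multiplication" of a column vector with a row vector as an outer product, the association coding above can be sketched as follows (NumPy; the function name and the commented usage are illustrative assumptions, not the patent's wording):

```python
import numpy as np

def association_encode(v1: np.ndarray, v2: np.ndarray) -> np.ndarray:
    """Associate two equally long time-series vectors into a matrix.

    Entry (i, j) pairs the i-th value of v1 with the j-th value of v2,
    which is one way to realize M = v1^T (x) v2 for row vectors v1 and v2.
    """
    return np.outer(v1, v2)

# Example (assuming the four vectors returned by arrange_time_series above):
# M_run  = association_encode(joint_angle, torque)      # running state association matrix
# M_phys = association_encode(heart_rate, blood_press)  # physiological state association matrix
```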
In the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot, the running state bidirectional attention feature extraction module comprises:
the first bidirectional pooling unit is used for pooling the running state association matrix along the horizontal direction and the vertical direction respectively to obtain a first pooling vector and a second pooling vector;
the first pooling association coding unit is used for carrying out association coding on the first pooling vector and the second pooling vector to obtain a running state bidirectional association matrix;
the first activation unit is used for inputting the running state bidirectional association matrix into a Sigmoid activation function to obtain a running state attention matrix;
the first matrix expansion unit is used for expanding the running state association matrix and the running state attention matrix into feature vectors respectively to obtain a running state association vector and a running state attention vector;
the first optimization feature fusion unit is used for fusing the operation state association vector and the operation state attention vector to obtain an operation state fusion association vector;
and the first dimension reconstruction unit is used for carrying out dimension reconstruction on the operation state fusion association vector so as to obtain the operation state feature matrix.
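A minimal NumPy sketch of the bidirectional-attention steps just listed, under the assumption that "pooling along the horizontal and vertical directions" means row-wise and column-wise mean pooling; the convolutional layers of the first convolutional neural network model are omitted, and the final fusion step is a simple element-wise product standing in for the class-transformer fusion described below. All names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bidirectional_attention(assoc: np.ndarray) -> np.ndarray:
    """Re-weight an association matrix with a two-direction attention map."""
    # Pool along the horizontal and vertical directions (assumed: mean pooling).
    pool_h = assoc.mean(axis=1)          # one value per row
    pool_v = assoc.mean(axis=0)          # one value per column
    # Association-encode the two pooled vectors into a bidirectional matrix.
    bidir = np.outer(pool_h, pool_v)
    # Sigmoid activation yields the attention matrix.
    attn = sigmoid(bidir)
    # Expand both matrices into vectors, fuse them (placeholder: element-wise
    # product), then reconstruct the original dimensions.
    fused = assoc.reshape(-1) * attn.reshape(-1)
    return fused.reshape(assoc.shape)
```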
In the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot, the first optimization feature fusion unit is configured to:
fusing the running state association vector and the running state attention vector by adopting a class-transformer spatial migration permutation fusion mode according to the following first fusion formula to obtain the running state fusion association vector;
wherein, the first fusion formula is:

$V_f = (V_1 \oplus V_2) \odot \mathrm{softmax}\left(\dfrac{(V_1 \ominus V_2) \otimes \mathrm{mask}_{\tau}(D)}{d(V_1, V_2)}\right)$

wherein $V_1$ is the running state association vector, $V_2$ is the running state attention vector, $D$ is the distance matrix between the running state association vector and the running state attention vector, whose entries are computed from the $i$-th feature value $v_1^i$ of the running state association vector and the $j$-th feature value $v_2^j$ of the running state attention vector, $d(V_1, V_2)$ represents the Euclidean distance between the running state association vector and the running state attention vector, $\tau$ is a mask threshold hyperparameter, $\mathrm{mask}_{\tau}(\cdot)$ retains the entries of the distance matrix that do not exceed the threshold $\tau$, the vectors are all row vectors, $\oplus$, $\ominus$ and $\odot$ respectively denote position-wise addition, subtraction and multiplication of feature vectors, $\otimes$ denotes matrix multiplication, $\mathrm{softmax}(\cdot)$ denotes the Softmax function, and $V_f$ is the running state fusion association vector.
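The fusion formula is only partially recoverable from the symbol list above; the sketch below implements one masked, distance-gated reading of it (threshold mask over pairwise feature differences, Softmax re-weighting, position-wise combination) and should be read as an assumption rather than the patent's exact operator.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_migration_fusion(v1: np.ndarray, v2: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Fuse an association vector v1 and an attention vector v2 (row vectors)."""
    # Pairwise distance matrix D_ij = |v1_i - v2_j| (differential characterization).
    D = np.abs(v1[:, None] - v2[None, :])
    # Mask prediction: keep only feature-value pairs closer than the threshold tau.
    masked = np.where(D <= tau, D, 0.0)
    # Global Euclidean distance between the two vectors, used for scaling.
    d = np.linalg.norm(v1 - v2) + 1e-12
    # Distance-gated weights, then position-wise combination of the two vectors.
    weights = softmax((v1 - v2) @ masked / d)
    return (v1 + v2) * weights
```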
In the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot, the physiological state bidirectional attention feature extraction module comprises:
the second bidirectional pooling unit is used for pooling the physiological state association matrix along the horizontal direction and the vertical direction respectively to obtain a third pooling vector and a fourth pooling vector;
the second pooling association coding unit is used for carrying out association coding on the third pooling vector and the fourth pooling vector to obtain a physiological state bidirectional association matrix;
the second activation unit is used for inputting the physiological state bidirectional correlation matrix into a Sigmoid activation function to obtain a physiological state attention matrix;
the second matrix expansion unit is used for expanding the physiological state association matrix and the physiological state attention matrix into feature vectors respectively to obtain a physiological state association vector and a physiological state attention vector;
the second optimization feature fusion unit is used for fusing the physiological state association vector and the physiological state attention vector to obtain a physiological state fusion association vector;
and a second dimension reconstruction unit, configured to perform dimension reconstruction on the physiological state fusion correlation vector to obtain the physiological state feature matrix.
In the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot, the second optimization feature fusion unit is configured to:
fusing the physiological state association vector and the physiological state attention vector by adopting a class-transformer spatial migration permutation fusion mode according to the following second fusion formula to obtain the physiological state fusion association vector;
wherein, the second fusion formula is:

$V_f = (V_1 \oplus V_2) \odot \mathrm{softmax}\left(\dfrac{(V_1 \ominus V_2) \otimes \mathrm{mask}_{\tau}(D)}{d(V_1, V_2)}\right)$

wherein $V_1$ is the physiological state association vector, $V_2$ is the physiological state attention vector, $D$ is the distance matrix between the physiological state association vector and the physiological state attention vector, whose entries are computed from the $i$-th feature value $v_1^i$ of the physiological state association vector and the $j$-th feature value $v_2^j$ of the physiological state attention vector, $d(V_1, V_2)$ represents the Euclidean distance between the physiological state association vector and the physiological state attention vector, $\tau$ is a mask threshold hyperparameter, $\mathrm{mask}_{\tau}(\cdot)$ retains the entries of the distance matrix that do not exceed the threshold $\tau$, the vectors are all row vectors, $\oplus$, $\ominus$ and $\odot$ respectively denote position-wise addition, subtraction and multiplication of feature vectors, $\otimes$ denotes matrix multiplication, $\mathrm{softmax}(\cdot)$ denotes the Softmax function, and $V_f$ is the physiological state fusion association vector.
In the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot, the rehabilitation state evaluation module includes:
the unfolding unit is used for unfolding the classification characteristic matrix into a classification characteristic vector according to a row vector or a column vector;
the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a full-connection layer of the classifier so as to obtain coded classification characteristic vectors;
and the classification unit is used for inputting the coding classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
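A compact sketch of the classifier described above (unfolding, fully connected encoding, Softmax); the weights here are random placeholders, since the patent does not disclose trained parameters, and the number of rehabilitation-state labels is assumed to be two.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(feature_matrix: np.ndarray, num_classes: int = 2):
    """Unfold a classification feature matrix, fully connect, then Softmax."""
    x = feature_matrix.reshape(-1)                           # row/column-wise unfolding
    W = rng.normal(scale=0.01, size=(num_classes, x.size))   # placeholder FC weights
    b = np.zeros(num_classes)
    logits = W @ x + b                                       # full-connection coding
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                     # Softmax classification function
    return int(np.argmax(probs)), probs                      # label value and probabilities
```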
According to another aspect of the present application, there is provided a rehabilitation training method based on a cloud platform and a lower limb rehabilitation robot, including:
Acquiring joint angle values, moment values, heart rate values and blood pressure values of a plurality of preset time points in a preset time period acquired by a sensor group of the lower limb rehabilitation robot;
uploading the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points to a cloud platform;
in the cloud platform, the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points are respectively arranged into joint angle time sequence input vectors, moment time sequence input vectors, heart rate time sequence input vectors and blood pressure time sequence input vectors according to time dimensions;
performing association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix;
passing the operation state association matrix through a first convolution neural network model using a bidirectional attention mechanism to obtain an operation state feature matrix;
passing the physiological state association matrix through a second convolution neural network model using a bidirectional attention mechanism to obtain a physiological state feature matrix;
Fusing the running state feature matrix and the physiological state feature matrix to obtain a classification feature matrix;
and passing the classification feature matrix through a classifier to obtain a classification result, wherein the classification result is used for representing a label value of the rehabilitation state.
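Tying the method steps together, the following hypothetical end-to-end skeleton reuses the helper functions sketched earlier (arrange_time_series, association_encode, bidirectional_attention, classify, assumed to be in scope); the feature-fusion step is assumed to be a weighted element-wise sum, which the patent does not specify.

```python
def rehabilitation_state(samples, alpha: float = 0.5):
    """End-to-end sketch: sensor samples -> rehabilitation-state label."""
    joint_angle, torque, heart_rate, blood_press = arrange_time_series(samples)
    # Association coding of the two parameter pairs.
    m_run = association_encode(joint_angle, torque)
    m_phys = association_encode(heart_rate, blood_press)
    # Bidirectional-attention feature extraction (CNN backbone omitted here).
    f_run = bidirectional_attention(m_run)
    f_phys = bidirectional_attention(m_phys)
    # Fusion of the two feature matrices (assumed: weighted element-wise sum).
    f_cls = alpha * f_run + (1.0 - alpha) * f_phys
    # Classification into a rehabilitation-state label value.
    label, probs = classify(f_cls)
    return label, probs
```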
In the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, performing association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix, comprises:
performing association coding on the joint angle time sequence input vector and the moment time sequence input vector by using a first association coding formula to obtain the running state association matrix;
wherein, the first association coding formula is:

$M_1 = X_1^{\top} \otimes X_2$

wherein $X_1$ represents the joint angle time sequence input vector, $X_1^{\top}$ represents the transposed vector of the joint angle time sequence input vector, $X_2$ represents the moment time sequence input vector, $M_1$ represents the operation state association matrix, and $\otimes$ represents vector multiplication;
performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector by using a second association coding formula to obtain the physiological state association matrix;
wherein, the second association coding formula is:

$M_2 = X_3^{\top} \otimes X_4$

wherein $X_3$ represents the heart rate time sequence input vector, $X_3^{\top}$ represents the transposed vector of the heart rate time sequence input vector, $X_4$ represents the blood pressure time sequence input vector, $M_2$ represents the physiological state association matrix, and $\otimes$ represents vector multiplication.
In the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, passing the operation state association matrix through the first convolutional neural network model using the bidirectional attention mechanism to obtain the operation state feature matrix comprises:
pooling the operation state association matrix along the horizontal direction and the vertical direction respectively to obtain a first pooling vector and a second pooling vector;
performing association coding on the first pooling vector and the second pooling vector to obtain a running state bidirectional association matrix;
inputting the running state bidirectional association matrix into a Sigmoid activation function to obtain a running state attention matrix;
respectively expanding the operation state association matrix and the operation state attention matrix into feature vectors to obtain an operation state association vector and an operation state attention vector;
fusing the operation state association vector and the operation state attention vector to obtain an operation state fusion association vector;
And carrying out dimension reconstruction on the operation state fusion association vector to obtain the operation state feature matrix.
Compared with the prior art, in the rehabilitation training system and method based on the cloud platform and the lower limb rehabilitation robot provided by the application, the joint angle values, moment values, heart rate values and blood pressure values at a plurality of predetermined time points are uploaded to the cloud platform and respectively arranged into a joint angle time sequence input vector, a moment time sequence input vector, a heart rate time sequence input vector and a blood pressure time sequence input vector. Association coding is then performed to obtain an operation state association matrix and a physiological state association matrix. The operation state association matrix is passed through a first convolution neural network model to obtain an operation state feature matrix, and the physiological state association matrix is passed through a second convolution neural network model to obtain a physiological state feature matrix. Finally, the classification feature matrix obtained by fusing the operation state feature matrix and the physiological state feature matrix is passed through a classifier to obtain a classification result, i.e., a label value representing the rehabilitation state. In this way, intelligent assessment and detection can be achieved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art. The following drawings are not intended to be drawn to scale, emphasis instead being placed upon illustrating the principles of the application.
Fig. 1 is an application scenario diagram of a rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application.
Fig. 2 is a block diagram of a rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application.
Fig. 3 is a schematic block diagram of the associated coding module in a rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application.
Fig. 4 is a schematic block diagram of the running state bidirectional attention feature extraction module in the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the application.
Fig. 5 is a block diagram schematic diagram of the physiological state bidirectional attention feature extraction module in the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the application.
Fig. 6 is a schematic block diagram of the rehabilitation status evaluation module in the rehabilitation training system based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the application.
Fig. 7 is a flowchart of a rehabilitation training method based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application.
Fig. 8 is a schematic diagram of a system architecture of a rehabilitation training method based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are also within the scope of the application.
As used in the specification and in the claims, the terms "a", "an", "the" and/or "said" do not refer specifically to the singular and may include the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
A robot must first perceive its surrounding environment and identify the work object. Guided by visual perception technology, the perception and fusion of one-dimensional, two-dimensional and multi-dimensional information enable fast and accurate environment perception and target recognition, so that the whole robot system gains reliable and efficient "eyes, ears and nose". By building an automated production-line platform for high-end intelligent manufacturing robots, research focuses on high-resolution, high-precision visual information acquisition, image segmentation in complex operating environments, and accurate tracking and positioning of high-speed moving targets, in order to overcome the key visual perception technologies of intelligent industrial robots, realize multi-scale visual perception, high-precision and high-speed information processing, and efficient target detection and recognition, and provide core technical support for intelligent robot functions in industrial environments.
After perceiving the environment, control methods such as intelligent control, variable structure control and adaptive control are combined so that, like a brain, the most appropriate control commands are issued to the robot system for corresponding actions and reactions; robot control technology therefore needs to be studied. The focus is on breaking through key control technologies such as vision-based high-speed robot motion, accurate positioning and map creation, cooperative decision-making, coordinated control, and optimization of appropriate force; comprehensively considering robot kinematics and dynamics under different working environments; developing a robot visual servo system, a multi-manipulator cooperation system and a high-speed robot motion tracking system oriented to manufacturing needs; and reducing system errors while improving the accuracy and responsiveness of the control system on the premise of ensuring its stability.
Robot motion control is based on controlling the robot's individual joints, and each joint can be regarded as a high-precision motor. If motor technology is improved, the robot's joints can become more flexible and powerful. Research on the integrated design of the robot's precision servo motor and servo drive aims to break through the drive and precision servo drive control technology, bus technology and multi-axis combined drive technology of the robot's electromechanical system, so as to realize high-speed, stable operation of the robot's automated production line without steady-state error, overshoot or interference. Core technologies are studied, including improvement of the motor's mechanical structure and excitation mode, cogging torque elimination, reduction of electromagnetic torque ripple, wide speed-regulation range of the servo motor, elimination of self-rotation, and quick response. The servo drive control method is improved to address the control precision of the servo drive system, its anti-interference capability, quick start-up, stable control under frequent starts and stops, bus control, and the realization of multi-degree-of-freedom motion and coordinated control.
The robot system ultimately contacts the work object with flexible "fingers". The more functions the fingers have, the more things the robot can do. Multi-joint dexterous robot fingers, high-actuation dexterous mechanisms and driving devices are developed, and a verification platform for the dexterous operation mechanism of intelligent industrial robots is constructed. Key technologies for high-precision, high-reliability motion planning and control of the robot's dexterous operation mechanism are studied to achieve optimal control of the robot's flexibility and applied force. Modeling and digital design simulation of the dexterous operation mechanism, high-actuation dexterous mechanisms, driving devices and multi-joint dexterous fingers are studied to overcome the key technical problems of high-precision, high-reliability motion planning and control, improve performance indicators of industrial robots such as operation precision, reliability, repeatability and resolution, and ensure that the robot can complete multi-task, high-complexity, high-reliability dexterous operations.
Accordingly, robots can be applied to rehabilitation training. For example, if a patient is recovering gait, the robot can simulate gait movements and provide support.
As described above, conventional rehabilitation therapy generally requires manual monitoring and evaluation of patient rehabilitation status, has low efficiency and poor data accuracy, and is difficult to perform personalized rehabilitation training according to actual needs of patients, greatly limiting optimization and popularization of rehabilitation therapy. Accordingly, an optimized cloud platform and lower limb rehabilitation robot-based rehabilitation training system is desired.
Accordingly, considering that the rehabilitation training is performed based on the cloud platform and the lower limb rehabilitation robot, accurate detection and evaluation of the patient's rehabilitation progress and rehabilitation state are required in order to perform personalized rehabilitation training according to the patient's actual needs. Therefore, in the technical scheme of the application, it is desirable to perform deep mining and intelligent analysis on the patient's rehabilitation data by using big data analysis and machine learning algorithms to generate a rehabilitation state evaluation result for the patient. Specifically, the joint angle value, the moment value, the heart rate value and the blood pressure value of the patient can be acquired by the sensor group of the lower limb rehabilitation robot, and rehabilitation state evaluation can be performed through comprehensive analysis of these data. It should be understood that this is because the joint angle value and the moment value can reflect the patient's movement ability and operation state, while the heart rate value and the blood pressure value can reflect the patient's physiological state and physical health; by performing time-series analysis on the patient's joint angle, moment, heart rate and blood pressure parameters, the patient's rehabilitation state can be sufficiently detected based on the patient's operation state and physiological state. However, the joint angle value and the moment value have a time-series cooperative correlation with respect to the patient's operation state, the heart rate value and the blood pressure value have a time-series cooperative correlation with respect to the patient's physiological state, and there is a correlation between the patient's operation state and physiological state with respect to the rehabilitation state. Therefore, the difficulty in this process lies in how to mine the correlation feature distribution information between the operation-state time-series cooperative correlation features of the joint angle value and the moment value and the physiological-state time-series cooperative correlation features of the heart rate value and the blood pressure value, so as to intelligently evaluate and detect the patient's rehabilitation state, and further generate a personalized rehabilitation training scheme based on the patient's actual rehabilitation state, thereby promoting the application and development of rehabilitation robot technology.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. The development of deep learning and neural networks provides new solutions and schemes for mining the correlation feature distribution information between the operation state time sequence cooperative correlation features between the joint angle values and the moment values and the physiological state time sequence cooperative correlation features between the heart rate values and the blood pressure values. Those of ordinary skill in the art will appreciate that a deep learning based deep neural network model may adjust parameters of the deep neural network model by appropriate training strategies, such as by a gradient descent back-propagation algorithm, to enable modeling of complex nonlinear correlations between things, which is obviously suitable for modeling and mining of correlation feature distribution information between operational state timing co-correlation features between the joint angle values and the moment values and physiological state timing co-correlation features between the heart rate values and the blood pressure values.
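As the paragraph notes, such a deep neural network model is fitted with a gradient-descent back-propagation training strategy. The following is a minimal illustration of one such training step, using PyTorch purely as an example framework; the layer sizes, number of classes and hyper-parameters are invented for the sketch and do not come from the patent.

```python
import torch
import torch.nn as nn

model = nn.Sequential(          # stand-in for the two CNN branches plus classifier
    nn.Flatten(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),           # two rehabilitation-state labels assumed
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

def train_step(batch_features: torch.Tensor, batch_labels: torch.Tensor) -> float:
    """One gradient-descent back-propagation update on a labelled batch."""
    optimizer.zero_grad()
    loss = criterion(model(batch_features), batch_labels)
    loss.backward()             # back-propagate gradients
    optimizer.step()            # adjust model parameters
    return loss.item()
```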
Specifically, in the technical scheme of the application, first, joint angle values, moment values, heart rate values and blood pressure values at a plurality of preset time points in a preset time period acquired by a sensor group of a lower limb rehabilitation robot are acquired. It should be appreciated that the sensors of the lower limb rehabilitation robot can acquire rehabilitation data of a patient, wherein the rehabilitation data comprise various parameters such as a joint angle value, a moment value, a heart rate value, a blood pressure value and the like. These parameters may provide a comprehensive understanding of the physical state of the patient during rehabilitation, wherein the joint angle values and the moment values may reflect the patient's motor ability and performance state, while heart rate values and blood pressure values may reflect the patient's physiological state. Therefore, the joint angle value, the moment value, the heart rate value and the blood pressure value parameters at a plurality of preset time points are collected through the sensor group of the lower limb rehabilitation robot, so that the follow-up analysis of the running state and the physiological state of the patient is facilitated, the physical condition and the rehabilitation progress condition of the patient are better mastered, and a personalized rehabilitation scheme is provided.
And uploading the joint angle values, the moment values, the heart rate values and the blood pressure values of the preset time points to a cloud platform, and arranging the joint angle values, the moment values, the heart rate values and the blood pressure values of the preset time points into joint angle time sequence input vectors, moment time sequence input vectors, heart rate time sequence input vectors and blood pressure time sequence input vectors according to time dimensions on the cloud platform, so that time sequence distribution information of the joint angle values, the moment values, the heart rate values and the blood pressure values in the time dimensions is integrated, and the subsequent capturing and extraction of time sequence change correlation features of the data are facilitated.
Then, it is considered that the joint angle value and the moment value each have dynamic change characteristics in the time dimension, and that there is also a time-series cooperative correlation, related to the patient's running state, between these two parameters. Likewise, the heart rate value and the blood pressure value each have dynamic change characteristics in the time dimension, and there is a time-series cooperative correlation, related to the patient's physiological state, between them. Therefore, in the technical scheme of the application, the joint angle time sequence input vector and the moment time sequence input vector are association-encoded to obtain an operation state association matrix, and the heart rate time sequence input vector and the blood pressure time sequence input vector are association-encoded to obtain a physiological state association matrix, so as to establish the operation-state time-series association between the joint angle value and the moment value and the physiological-state time-series association between the heart rate value and the blood pressure value.
Further, feature mining of the operation state association matrix is performed using a convolutional neural network model, which has excellent performance in implicit association feature extraction. In particular, considering that the patient's operation states are complicated and diverse, the operation state features of the patient cannot be sufficiently and effectively captured by conventional feature extraction methods. Therefore, in the technical scheme of the application, the operation state association matrix is further processed by a first convolutional neural network model using a bidirectional attention mechanism to obtain an operation state feature matrix. In this way, contextual information can be fully utilized to enhance the feature response of the patient's operation state and suppress the background feature response, so that the time-series cooperative correlation features between the joint angle value and the moment value, i.e., the time-series change feature information of the patient's operation state, can be effectively extracted. Specifically, the bidirectional attention module calibrates the attention weights of the whole operation state association matrix from the horizontal direction and the vertical direction respectively and captures complex feature relations, so that local feature information can be acquired from the global spatial features.
In the process of extracting the time sequence correlation characteristics of the physiological state of the patient, likewise, the physiological state correlation matrix is obtained through a second convolution neural network model using a bidirectional attention mechanism, so that the time sequence cooperative correlation characteristics between the heart rate value and the blood pressure value, namely time sequence change characteristic information of the physiological state of the patient, are extracted. And then, fusing the running state characteristic matrix and the physiological state characteristic matrix to obtain a classification characteristic matrix so as to represent the fusion correlation characteristic between the running state time sequence correlation characteristic information of the patient and the physiological state time sequence correlation characteristic information of the patient, so that the evaluation and detection of the rehabilitation state of the patient are facilitated.
And then, the classification feature matrix is passed through a classifier to obtain a classification result of the label value for representing the rehabilitation state. That is, the classification label of the classifier is a label of the rehabilitation state of the patient, so after the classification result is obtained, the rehabilitation state evaluation and detection of the patient can be performed based on the classification result, and then a personalized rehabilitation training scheme is generated based on the actual rehabilitation state of the patient.
In particular, in the technical scheme of the application, when the physiological state feature matrix is obtained by using the second convolutional neural network model with the bidirectional attention mechanism, the attention weight mechanism of the bidirectional attention module strengthens the local spatial feature semantics of the physiological state association matrix in the row and column spatial dimensions to obtain the physiological state attention matrix, and the physiological state feature matrix is then obtained by fusing the physiological state association matrix and the physiological state attention matrix. Therefore, if the fusion effect of the physiological state association matrix and the physiological state attention matrix can be improved, the expression effect of the physiological state feature matrix can be improved.
Considering that the physiological state attention matrix is obtained by feature extraction on the basis of the physiological state association matrix, the feature distribution of the physiological state attention matrix in the high-dimensional feature space has a spatial migration relative to the feature distribution of the physiological state association matrix, so the fusion effect of the physiological state association matrix and the physiological state attention matrix needs to be improved.
Based on this, the physiological state association matrix and the physiological state attention matrix are first expanded into a physiological state association vector, e.g., denoted as $V_1$, and a physiological state attention vector, e.g., denoted as $V_2$, and the physiological state association vector $V_1$ and the physiological state attention vector $V_2$ are fused by class-transformer spatial migration permutation fusion, which is specifically expressed as:

$V_f = (V_1 \oplus V_2) \odot \mathrm{softmax}\left(\dfrac{(V_1 \ominus V_2) \otimes \mathrm{mask}_{\tau}(D)}{d(V_1, V_2)}\right)$

wherein $V_1$ is the physiological state association vector, $V_2$ is the physiological state attention vector, $D$ is the distance matrix between the physiological state association vector and the physiological state attention vector, whose entries are computed from the $i$-th feature value $v_1^i$ of the physiological state association vector and the $j$-th feature value $v_2^j$ of the physiological state attention vector, $d(V_1, V_2)$ represents the Euclidean distance between the physiological state association vector and the physiological state attention vector, $\tau$ is a mask threshold hyperparameter, $\mathrm{mask}_{\tau}(\cdot)$ retains the entries of the distance matrix that do not exceed the threshold $\tau$, the vectors are all row vectors, $\oplus$, $\ominus$ and $\odot$ respectively denote position-wise addition, subtraction and multiplication of feature vectors, $\otimes$ denotes matrix multiplication, $\mathrm{softmax}(\cdot)$ denotes the Softmax function, and $V_f$ is the physiological state fusion association vector.

Here, the class-transformer spatial migration permutation fusion performs mask prediction, in a transformer-like mechanism, on the spatial distances of the feature-value pairs of the physiological state association vector $V_1$ and the physiological state attention vector $V_2$ based on the differential characterization of those feature-value pairs, thereby realizing edge-affine encoding of the physiological state fusion association vector $V_f$ in the high-dimensional feature space and keeping $V_f$ invariant to global rotation and translation of $V_1$ and $V_2$ under the transformer-like mechanism. This achieves spatial migration permutability of the feature distributions of the physiological state association vector $V_1$ and the physiological state attention vector $V_2$, thereby improving their fusion effect. Accordingly, when the physiological state fusion association vector $V_f$ is subsequently restored to the physiological state feature matrix, the expression effect of the physiological state feature matrix is improved.
Likewise, in the technical scheme of the application, when the running state feature matrix is obtained by using the first convolutional neural network model with the bidirectional attention mechanism, the attention weight mechanism of the bidirectional attention module strengthens the local spatial feature semantics of the running state association matrix in the row and column spatial dimensions to obtain the running state attention matrix, and the running state feature matrix is then obtained by fusing the running state association matrix and the running state attention matrix. Therefore, if the fusion effect of the running state association matrix and the running state attention matrix can be improved, the expression effect of the running state feature matrix can be improved.
Considering that the running state attention matrix is obtained by feature extraction on the basis of the running state association matrix, the feature distribution of the running state attention matrix in the high-dimensional feature space has a spatial migration relative to the feature distribution of the running state association matrix, so the fusion effect of the running state association matrix and the running state attention matrix needs to be improved.
Based on this, the running state association matrix and the running state attention matrix are first expanded into a running state association vector, e.g., denoted as $V_1$, and a running state attention vector, e.g., denoted as $V_2$, and the running state association vector $V_1$ and the running state attention vector $V_2$ are fused by class-transformer spatial migration permutation fusion, which is specifically expressed as:

$V_f = (V_1 \oplus V_2) \odot \mathrm{softmax}\left(\dfrac{(V_1 \ominus V_2) \otimes \mathrm{mask}_{\tau}(D)}{d(V_1, V_2)}\right)$

wherein $V_1$ is the running state association vector, $V_2$ is the running state attention vector, $D$ is the distance matrix between the running state association vector and the running state attention vector, whose entries are computed from the $i$-th feature value $v_1^i$ of the running state association vector and the $j$-th feature value $v_2^j$ of the running state attention vector, $d(V_1, V_2)$ represents the Euclidean distance between the running state association vector and the running state attention vector, $\tau$ is a mask threshold hyperparameter, $\mathrm{mask}_{\tau}(\cdot)$ retains the entries of the distance matrix that do not exceed the threshold $\tau$, the vectors are all row vectors, $\oplus$, $\ominus$ and $\odot$ respectively denote position-wise addition, subtraction and multiplication of feature vectors, $\otimes$ denotes matrix multiplication, $\mathrm{softmax}(\cdot)$ denotes the Softmax function, and $V_f$ is the running state fusion association vector.

Here, the class-transformer spatial migration permutation fusion performs mask prediction, in a transformer-like mechanism, on the spatial distances of the feature-value pairs of the running state association vector $V_1$ and the running state attention vector $V_2$ based on the differential characterization of those feature-value pairs, thereby realizing edge-affine encoding of the running state fusion association vector $V_f$ in the high-dimensional feature space. By applying a hidden-state bias under the self-attention mechanism of the transformer-like structure, $V_f$ is kept invariant to global rotation and translation of the running state association vector $V_1$ and the running state attention vector $V_2$, which achieves spatial migration permutability of their feature distributions and improves their fusion effect. Thus, when the running state fusion association vector $V_f$ is restored to the running state feature matrix, the expression effect of the running state feature matrix is improved. In this way, the rehabilitation state of the patient can be accurately evaluated and detected, and a personalized rehabilitation training scheme can then be generated based on the patient's actual rehabilitation state, further promoting the application and development of rehabilitation robot technology.
Fig. 1 is an application scenario diagram of a rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application. As shown in fig. 1, in this application scenario, first, joint angle values, moment values, heart rate values, and blood pressure values at a plurality of predetermined time points within a predetermined period of time acquired by a sensor group of a lower limb rehabilitation robot (for example, D illustrated in fig. 1) are acquired, and then the joint angle values, moment values, heart rate values, and blood pressure values at the plurality of predetermined time points are input to a server (for example, S illustrated in fig. 1) where a rehabilitation training algorithm based on a cloud platform and a lower limb rehabilitation robot is deployed, wherein the server can process the joint angle values, moment values, heart rate values, and blood pressure values at the plurality of predetermined time points using the rehabilitation training algorithm based on the cloud platform and the lower limb rehabilitation robot to obtain a classification result of tag values for representing a rehabilitation state.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Fig. 2 is a block diagram of a rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application. As shown in fig. 2, a rehabilitation training system 100 based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application includes: a data acquisition module 110 for acquiring joint angle values, moment values, heart rate values and blood pressure values at a plurality of predetermined time points within a predetermined period of time acquired by a sensor group of the lower limb rehabilitation robot; the data transmission module 120 is configured to upload the joint angle values, the moment values, the heart rate values, and the blood pressure values at the plurality of predetermined time points to the cloud platform; the data parameter time sequence arrangement module 130 is configured to arrange the joint angle values, the moment values, the heart rate values and the blood pressure values at the plurality of predetermined time points into a joint angle time sequence input vector, a moment time sequence input vector, a heart rate time sequence input vector and a blood pressure time sequence input vector according to a time dimension, respectively; the association encoding module 140 is configured to perform association encoding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and perform association encoding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix; an operation state bidirectional attention feature extraction module 150, configured to obtain an operation state feature matrix by using a first convolutional neural network model of a bidirectional attention mechanism by using the operation state association matrix; a physiological state bidirectional attention feature extraction module 160, configured to obtain a physiological state feature matrix by using a second convolutional neural network model of a bidirectional attention mechanism with the physiological state correlation matrix; the feature fusion module 170 is configured to fuse the running state feature matrix and the physiological state feature matrix to obtain a classification feature matrix; and a rehabilitation state evaluation module 180, configured to pass the classification feature matrix through a classifier to obtain a classification result, where the classification result is used to represent a label value of a rehabilitation state.
More specifically, in the embodiment of the present application, the data acquisition module 110 is configured to acquire the joint angle values, moment values, heart rate values and blood pressure values at a plurality of predetermined time points within a predetermined period of time, as collected by the sensor group of the lower limb rehabilitation robot. The joint angle values and moment values reflect the movement capacity and operation state of the patient, while the heart rate values and blood pressure values reflect the physiological state and physical health of the patient. By performing time sequence analysis on these four parameters, the rehabilitation state of the patient can be sufficiently detected on the basis of both the operation state and the physiological state of the patient.
More specifically, in the embodiment of the present application, the data transmission module 120 is configured to upload the joint angle values, the moment values, the heart rate values, and the blood pressure values at the plurality of predetermined time points to a cloud platform.
More specifically, in the embodiment of the present application, the data parameter timing arrangement module 130 is configured to arrange, at the cloud platform, the joint angle values, the moment values, the heart rate values, and the blood pressure values at the plurality of predetermined time points into a joint angle timing input vector, a moment timing input vector, a heart rate timing input vector, and a blood pressure timing input vector according to a time dimension, respectively. Therefore, the time sequence distribution information of the joint angle value, the moment value, the heart rate value and the blood pressure value in the time dimension is integrated, and the capturing and extracting of the time sequence change related characteristics of the data are facilitated.
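To make the arrangement step concrete, the following Python sketch (not part of the patent; the sample readings and variable names are hypothetical) shows per-time-point sensor records being split into the four time sequence input vectors along the time dimension.

```python
# Hypothetical illustration of the time-dimension arrangement described above.
import numpy as np

# One record per predetermined time point:
# (joint angle, moment, heart rate, blood pressure) -- sample values only.
records = [
    (32.0, 11.5, 78.0, 121.0),
    (35.5, 12.1, 80.0, 123.0),
    (38.2, 12.8, 83.0, 125.0),
    (41.0, 13.0, 85.0, 126.0),
]

data = np.asarray(records, dtype=np.float32)   # shape (T, 4), T = number of time points
joint_angle_vec    = data[:, 0]                # joint angle time sequence input vector
moment_vec         = data[:, 1]                # moment time sequence input vector
heart_rate_vec     = data[:, 2]                # heart rate time sequence input vector
blood_pressure_vec = data[:, 3]                # blood pressure time sequence input vector
```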
More specifically, in the embodiment of the present application, the association encoding module 140 is configured to perform association encoding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and to perform association encoding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix. The joint angle values and the moment values each carry dynamic change characteristic information in the time dimension, and the two parameters also exhibit a time sequence cooperative correlation characteristic related to the operation state of the patient. Likewise, the heart rate values and the blood pressure values each carry dynamic change characteristic information in the time dimension, and the two parameters exhibit a time sequence cooperative correlation characteristic related to the physiological state of the patient. Therefore, in the technical scheme of the application, the joint angle time sequence input vector and the moment time sequence input vector are subjected to association encoding to obtain the operation state association matrix, and the heart rate time sequence input vector and the blood pressure time sequence input vector are subjected to association encoding to obtain the physiological state association matrix, thereby establishing the operation state time sequence association relationship between the joint angle values and the moment values and the physiological state time sequence association relationship between the heart rate values and the blood pressure values.
Accordingly, in one specific example, as shown in fig. 3, the association coding module 140 includes: a first association encoding unit 141, configured to perform association encoding on the joint angle time sequence input vector and the moment time sequence input vector according to the following first association encoding formula to obtain the running state association matrix; wherein, the first association coding formula is:
M1 = V1^T × V2

wherein V1 denotes the joint angle time sequence input vector, V1^T denotes the transposed vector of the joint angle time sequence input vector, V2 denotes the moment time sequence input vector, M1 denotes the operation state association matrix, and × denotes vector multiplication; and a second association encoding unit 142, configured to perform association encoding on the heart rate time sequence input vector and the blood pressure time sequence input vector according to the following second association coding formula to obtain the physiological state association matrix; wherein, the second association coding formula is:

M2 = V3^T × V4

wherein V3 denotes the heart rate time sequence input vector, V3^T denotes the transposed vector of the heart rate time sequence input vector, V4 denotes the blood pressure time sequence input vector, M2 denotes the physiological state association matrix, and × denotes vector multiplication.
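As an illustration only, the following Python sketch assumes that the two association coding formulas reduce to the outer product of the transposed first row vector with the second row vector, as the symbol descriptions above suggest; the function and variable names are not taken from the patent.

```python
# Assumed reading of the association encoding step: M = V1^T x V2 (outer product).
import numpy as np

def associate(v1: np.ndarray, v2: np.ndarray) -> np.ndarray:
    """Return the T x T association matrix for two length-T row vectors."""
    return np.outer(v1, v2)

joint_angle_vec    = np.array([32.0, 35.5, 38.2, 41.0])
moment_vec         = np.array([11.5, 12.1, 12.8, 13.0])
heart_rate_vec     = np.array([78.0, 80.0, 83.0, 85.0])
blood_pressure_vec = np.array([121.0, 123.0, 125.0, 126.0])

operation_state_assoc     = associate(joint_angle_vec, moment_vec)           # (4, 4)
physiological_state_assoc = associate(heart_rate_vec, blood_pressure_vec)    # (4, 4)
```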
More specifically, in the embodiment of the present application, the operation state bidirectional attention feature extraction module 150 is configured to pass the operation state association matrix through a first convolutional neural network model using a bidirectional attention mechanism to obtain the operation state feature matrix. Feature mining of the operation state association matrix is performed with a convolutional neural network model, which has excellent performance in extracting implicit association features. In particular, because the operation states of a patient are complicated and diverse, the operation state features cannot be sufficiently and effectively captured by conventional feature extraction methods. Therefore, in the technical scheme of the application, the operation state association matrix is processed by the first convolutional neural network model using a bidirectional attention mechanism to obtain the operation state feature matrix. In this way, contextual information is fully utilized to enhance the characteristic response of the patient's operation state and to suppress background characteristic responses, so that the time sequence cooperative correlation characteristic between the joint angle values and the moment values, that is, the time sequence change characteristic information of the patient's operation state, can be effectively extracted. Specifically, the bidirectional attention module calibrates the attention weights of the whole operation state association matrix from the horizontal direction and the vertical direction respectively and captures complex characteristic relationships, so that local characteristic information can be obtained from the global spatial characteristics.
It should be appreciated that a convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network with wide application in fields such as image recognition. A convolutional neural network may include an input layer, hidden layers and an output layer, where the hidden layers may include convolutional layers, pooling layers, activation layers, fully connected layers and the like; each layer performs its operation on the data received from the previous layer and outputs the result to the next layer, so that the initial input data yields the final result after the multi-layer operations.
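As a generic illustration of the layer types just listed (not the specific model of this application), the following PyTorch sketch stacks a convolutional layer, an activation layer, a pooling layer and a fully connected layer; every channel count and size is arbitrary.

```python
# Generic CNN layer stack for illustration only; sizes are arbitrary.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),                                                           # activation layer
    nn.MaxPool2d(kernel_size=2),                                         # pooling layer
    nn.Flatten(),
    nn.Linear(8 * 2 * 2, 3),                                             # fully connected layer
)

out = cnn(torch.rand(1, 1, 4, 4))  # a 4x4 input matrix passes through the layers -> shape (1, 3)
```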
Accordingly, in one specific example, as shown in fig. 4, the operation state bidirectional attention feature extraction module 150 includes: a first bidirectional pooling unit 151, configured to pool the operation state association matrix along the horizontal direction and the vertical direction respectively to obtain a first pooling vector and a second pooling vector; a first pooled association encoding unit 152, configured to perform association encoding on the first pooling vector and the second pooling vector to obtain an operation state bidirectional association matrix; a first activation unit 153, configured to input the operation state bidirectional association matrix into a Sigmoid activation function to obtain an operation state attention matrix; a first matrix expansion unit 154, configured to expand the operation state association matrix and the operation state attention matrix into feature vectors to obtain an operation state association vector and an operation state attention vector, respectively; a first optimization feature fusion unit 155, configured to fuse the operation state association vector and the operation state attention vector to obtain an operation state fusion association vector; and a first dimension reconstruction unit 156, configured to perform dimension reconstruction on the operation state fusion association vector to obtain the operation state feature matrix.
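A minimal NumPy sketch of the unit sequence listed above follows. The mean pooling, the outer-product re-association of the two pooled vectors and the simple element-wise fusion are assumptions made for illustration; the fusion actually used in this application is the class-transformer space migration displacement fusion described below.

```python
# Hedged sketch of the bidirectional attention steps (assumptions noted above).
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def bidirectional_attention(assoc: np.ndarray) -> np.ndarray:
    h_pool = assoc.mean(axis=1)            # pool along the horizontal direction
    v_pool = assoc.mean(axis=0)            # pool along the vertical direction
    bidir = np.outer(h_pool, v_pool)       # associate the two pooled vectors
    attn = sigmoid(bidir)                  # Sigmoid activation -> attention matrix
    assoc_vec = assoc.reshape(-1)          # expand the association matrix into a vector
    attn_vec = attn.reshape(-1)            # expand the attention matrix into a vector
    fused = assoc_vec * attn_vec           # placeholder fusion of the two vectors
    return fused.reshape(assoc.shape)      # dimension reconstruction -> feature matrix

operation_state_features = bidirectional_attention(np.random.rand(4, 4))
```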
When the operation state association matrix is passed through the first convolutional neural network model using a bidirectional attention mechanism, the bidirectional attention module strengthens the local spatial feature semantics of the operation state association matrix through attention weights in the row and column spatial dimensions to obtain the operation state attention matrix, and the operation state feature matrix is then obtained by fusing the operation state association matrix with the operation state attention matrix. Improving the fusion effect of the operation state association matrix and the operation state attention matrix therefore improves the expression effect of the operation state feature matrix. Moreover, because the operation state attention matrix is obtained by feature extraction on the basis of the operation state association matrix, its feature distribution in the high-dimensional feature space exhibits a spatial migration relative to the feature distribution of the operation state association matrix, which is why the fusion effect needs to be improved. On this basis, the operation state association matrix and the operation state attention matrix are first expanded into an operation state association vector and an operation state attention vector, and the two vectors are then fused by class-transformer space migration displacement fusion.
Accordingly, in a specific example, the first optimizing feature fusing unit 155 is configured to: fusing the running state association vector and the running state attention vector by adopting a class converter space migration displacement fusion mode according to the following first fusion formula to obtain the running state fusion association vector; wherein, the first fusion formula is:
(The formula is presented as an image and is not reproduced in the text.) Its symbols denote, respectively: the operation state association vector; the operation state attention vector; the distance matrix between the operation state association vector and the operation state attention vector; the Euclidean distance between the operation state association vector and the operation state attention vector; the i-th feature value of the operation state association vector; the j-th feature value of the operation state attention vector; a mask threshold hyperparameter; position-by-position addition, subtraction and multiplication of feature vectors; matrix multiplication; a function named only in the image; and the operation state fusion association vector. The vectors are all row vectors.
The class-transformer space migration displacement fusion uses the difference representation of feature value pairs from the operation state association vector and the operation state attention vector to perform mask prediction, in the manner of a transformer mechanism, on the spatial distances of those feature value pairs, thereby realizing an edge-affine encoding of the operation state feature vector in the high-dimensional feature space. By applying the hidden-state bias of a transformer under the self-attention mechanism, the operation state feature vector is kept invariant to the global rotation and translation of the operation state association vector and the operation state attention vector to be fused, which improves their fusion effect. In this way, when the operation state feature vector is restored to the operation state feature matrix, the expression effect of the operation state feature matrix is improved. As a result, the rehabilitation state of the patient can be evaluated and detected accurately, and a personalized rehabilitation training scheme can then be generated based on the actual rehabilitation state of the patient, further promoting the application and development of rehabilitation robot technology.
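Because the fusion formula itself is only given as an image, the Python sketch below is merely one hedged interpretation of a distance-masked, transformer-style fusion of the two expanded vectors: the masking rule, the softmax placement and the residual combination are assumptions, and none of the names come from the patent.

```python
# Assumed, illustrative reading of a distance-masked fusion of two row vectors.
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def displacement_fuse(v1: np.ndarray, v2: np.ndarray, tau: float = 1.0) -> np.ndarray:
    dist = np.abs(v1[:, None] - v2[None, :])             # pairwise feature-value distances
    d_global = np.linalg.norm(v1 - v2)                   # Euclidean distance between the vectors
    masked = np.where(dist <= tau * d_global / v1.size, -dist, -1e9)  # assumed mask-threshold rule
    weights = softmax(masked, axis=-1)                   # attention-like weights per position
    return v1 + weights @ v2                             # residual-style combination (assumed)

fused_vec = displacement_fuse(np.random.rand(16), np.random.rand(16))
feature_matrix = fused_vec.reshape(4, 4)                 # dimension reconstruction
```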
More specifically, in the embodiment of the present application, the physiological state bidirectional attention feature extraction module 160 is configured to pass the physiological state association matrix through a second convolutional neural network model using a bidirectional attention mechanism to obtain the physiological state feature matrix. Likewise, passing the physiological state association matrix through the second convolutional neural network model using a bidirectional attention mechanism yields the physiological state feature matrix, thereby extracting the time sequence cooperative correlation characteristic between the heart rate values and the blood pressure values, that is, the time sequence change characteristic information of the physiological state of the patient. The operation state feature matrix and the physiological state feature matrix are then fused to obtain the classification feature matrix, which represents the fused correlation between the operation state time sequence correlation characteristic information and the physiological state time sequence correlation characteristic information of the patient, thereby facilitating the evaluation and detection of the patient's rehabilitation state.
Accordingly, in one specific example, as shown in fig. 5, the physiological state bidirectional attention feature extraction module 160 includes: a second bidirectional pooling unit 161, configured to pool the physiological state association matrix along the horizontal direction and the vertical direction respectively to obtain a third pooling vector and a fourth pooling vector; a second pooled association encoding unit 162, configured to perform association encoding on the third pooling vector and the fourth pooling vector to obtain a physiological state bidirectional association matrix; a second activation unit 163, configured to input the physiological state bidirectional association matrix into a Sigmoid activation function to obtain a physiological state attention matrix; a second matrix expansion unit 164, configured to expand the physiological state association matrix and the physiological state attention matrix into feature vectors to obtain a physiological state association vector and a physiological state attention vector, respectively; a second optimization feature fusion unit 165, configured to fuse the physiological state association vector and the physiological state attention vector to obtain a physiological state fusion association vector; and a second dimension reconstruction unit 166, configured to perform dimension reconstruction on the physiological state fusion association vector to obtain the physiological state feature matrix.
Particularly, in the technical scheme of the application, when the physiological state association matrix is passed through the second convolutional neural network model using a bidirectional attention mechanism, the bidirectional attention module strengthens the local spatial feature semantics of the physiological state association matrix through attention weights in the row and column spatial dimensions to obtain the physiological state attention matrix, and the physiological state feature matrix is then obtained by fusing the physiological state association matrix with the physiological state attention matrix. Improving the fusion effect of the physiological state association matrix and the physiological state attention matrix therefore improves the expression effect of the physiological state feature matrix. Moreover, because the physiological state attention matrix is obtained by feature extraction on the basis of the physiological state association matrix, its feature distribution in the high-dimensional feature space exhibits a spatial migration relative to the feature distribution of the physiological state association matrix, which is why the fusion effect needs to be improved. On this basis, the physiological state association matrix and the physiological state attention matrix are first expanded into a physiological state association vector and a physiological state attention vector, and the two vectors are then fused by class-transformer space migration displacement fusion.
Accordingly, in a specific example, the second optimization feature fusion unit 165 is configured to: fusing the physiological state association vector and the physiological state attention vector by adopting a class converter space migration displacement fusion mode according to the following second fusion formula to obtain the physiological state fusion association vector; wherein, the second fusion formula is:
(The formula is presented as an image and is not reproduced in the text.) Its symbols denote, respectively: the physiological state association vector; the physiological state attention vector; the distance matrix between the physiological state association vector and the physiological state attention vector; the Euclidean distance between the physiological state association vector and the physiological state attention vector; the i-th feature value of the physiological state association vector; the j-th feature value of the physiological state attention vector; a mask threshold hyperparameter; position-by-position addition, subtraction and multiplication of feature vectors; matrix multiplication; a function named only in the image; and the physiological state fusion association vector. The vectors are all row vectors.
The class-transformer space migration displacement fusion uses the difference representation of feature value pairs from the physiological state association vector and the physiological state attention vector to perform mask prediction, in the manner of a transformer mechanism, on the spatial distances of those feature value pairs, thereby realizing an edge-affine encoding of the physiological state feature vector in the high-dimensional feature space. By applying the hidden-state bias of a transformer under the self-attention mechanism, the physiological state feature vector is kept invariant to the global rotation and translation of the physiological state association vector and the physiological state attention vector to be fused, which improves their fusion effect. In this way, when the physiological state feature vector is restored to the physiological state feature matrix, the expression effect of the physiological state feature matrix is improved.
More specifically, in the embodiment of the present application, the feature fusion module 170 is configured to fuse the running state feature matrix and the physiological state feature matrix to obtain a classification feature matrix.
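The text does not spell out the fusion rule used by this module, so the sketch below simply assumes a position-wise weighted sum of the two feature matrices; the weight alpha and the function name are illustrative.

```python
# Assumed position-wise weighted-sum fusion of the two feature matrices.
import numpy as np

def fuse_features(run_state: np.ndarray, physio_state: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    assert run_state.shape == physio_state.shape, "matrices must share a shape"
    return alpha * run_state + (1.0 - alpha) * physio_state

classification_matrix = fuse_features(np.random.rand(4, 4), np.random.rand(4, 4))
```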
More specifically, in the embodiment of the present application, the recovery state evaluation module 180 is configured to pass the classification feature matrix through a classifier to obtain a classification result, where the classification result is used to represent a label value of a recovery state. After the classification result is obtained, the rehabilitation state evaluation detection of the patient can be carried out based on the classification result, and then a personalized rehabilitation training scheme is generated based on the actual rehabilitation state of the patient.
It should be appreciated that the role of the classifier is to learn classification rules from given training data of known classes and then to classify (or predict) unknown data. Logistic regression, SVM and the like are commonly used for binary classification problems; they can also be applied to multi-class classification problems, but multiple binary classifiers must then be combined, which is error-prone and inefficient. The commonly used multi-classification method is therefore the Softmax classification function.
Accordingly, in one specific example, as shown in fig. 6, the rehabilitation status evaluation module 180 includes: a developing unit 181, configured to develop the classification feature matrix into a classification feature vector according to a row vector or a column vector; a full-connection encoding unit 182, configured to perform full-connection encoding on the classification feature vector by using a full-connection layer of the classifier to obtain an encoded classification feature vector; and a classification unit 183, configured to input the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
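The three units above amount to a flatten step, a fully connected encoding step and a Softmax step. A minimal PyTorch sketch of that pipeline follows; the matrix size, the number of label classes and the class name are illustrative assumptions.

```python
# Illustrative classifier head: flatten -> fully connected encoding -> Softmax.
import torch
import torch.nn as nn

class RehabStateClassifier(nn.Module):
    def __init__(self, height: int = 4, width: int = 4, num_labels: int = 3):
        super().__init__()
        self.fc = nn.Linear(height * width, num_labels)   # full-connection encoding layer

    def forward(self, feature_matrix: torch.Tensor) -> torch.Tensor:
        vec = feature_matrix.flatten(start_dim=1)          # expand the matrix into a vector
        encoded = self.fc(vec)                             # encoded classification feature vector
        return torch.softmax(encoded, dim=-1)              # Softmax -> label probabilities

probs = RehabStateClassifier()(torch.rand(1, 4, 4))        # shape (1, 3)
```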
In summary, the rehabilitation training system 100 based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the application has been illustrated. First, the joint angle values, moment values, heart rate values and blood pressure values at a plurality of predetermined time points are uploaded to the cloud platform and arranged into a joint angle time sequence input vector, a moment time sequence input vector, a heart rate time sequence input vector and a blood pressure time sequence input vector, respectively. Association encoding is then carried out to obtain an operation state association matrix and a physiological state association matrix. The operation state association matrix is passed through a first convolutional neural network model using a bidirectional attention mechanism to obtain an operation state feature matrix, and the physiological state association matrix is passed through a second convolutional neural network model using a bidirectional attention mechanism to obtain a physiological state feature matrix. Finally, the classification feature matrix obtained by fusing the operation state feature matrix and the physiological state feature matrix is passed through a classifier to obtain a classification result used to represent a label value of the rehabilitation state. In this way, intelligent evaluation and detection of the patient's rehabilitation state can be achieved.
As described above, the rehabilitation training system 100 based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the present application may be implemented in various terminal devices, for example, a server having a rehabilitation training algorithm based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the present application. In one example, the rehabilitation training system 100 based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the present application may be integrated into the terminal device as one software module and/or hardware module. For example, the rehabilitation training system 100 based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the present application may be a software module in the operating system of the terminal device, or may be an application program developed for the terminal device; of course, the rehabilitation training system 100 based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the application may be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the rehabilitation training system 100 based on the cloud platform and the lower limb rehabilitation robot according to the embodiment of the present application and the terminal device may be separate devices, and the rehabilitation training system 100 based on the cloud platform and the lower limb rehabilitation robot may be connected to the terminal device through a wired and/or wireless network, and transmit the interactive information according to the agreed data format.
Fig. 7 is a flowchart of a rehabilitation training method based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application. As shown in fig. 7, the rehabilitation training method based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application includes: S110, acquiring joint angle values, moment values, heart rate values and blood pressure values at a plurality of predetermined time points within a predetermined time period acquired by a sensor group of the lower limb rehabilitation robot; S120, uploading the joint angle values, the moment values, the heart rate values and the blood pressure values at the plurality of predetermined time points to a cloud platform; S130, at the cloud platform, arranging the joint angle values, the moment values, the heart rate values and the blood pressure values at the plurality of predetermined time points into a joint angle time sequence input vector, a moment time sequence input vector, a heart rate time sequence input vector and a blood pressure time sequence input vector respectively according to the time dimension; S140, performing association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix; S150, passing the operation state association matrix through a first convolutional neural network model using a bidirectional attention mechanism to obtain an operation state feature matrix; S160, passing the physiological state association matrix through a second convolutional neural network model using a bidirectional attention mechanism to obtain a physiological state feature matrix; S170, fusing the operation state feature matrix and the physiological state feature matrix to obtain a classification feature matrix; and S180, passing the classification feature matrix through a classifier to obtain a classification result, wherein the classification result is used to represent a label value of the rehabilitation state.
Fig. 8 is a schematic diagram of a system architecture of a rehabilitation training method based on a cloud platform and a lower limb rehabilitation robot according to an embodiment of the application. As shown in fig. 8, in the system architecture of the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, first, joint angle values, moment values, heart rate values and blood pressure values at a plurality of predetermined time points within a predetermined time period acquired by a sensor group of the lower limb rehabilitation robot are acquired; then, uploading the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points to a cloud platform; then, at the cloud platform, respectively arranging the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points into joint angle time sequence input vectors, moment time sequence input vectors, heart rate time sequence input vectors and blood pressure time sequence input vectors according to a time dimension; performing association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix; then, the operation state association matrix is processed through a first convolution neural network model using a bidirectional attention mechanism to obtain an operation state feature matrix; then, the physiological state association matrix is processed through a second convolution neural network model using a bidirectional attention mechanism to obtain a physiological state feature matrix; then, fusing the running state feature matrix and the physiological state feature matrix to obtain a classification feature matrix; and finally, the classification feature matrix passes through a classifier to obtain a classification result, wherein the classification result is used for representing a label value of the rehabilitation state.
In a specific example, in the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, performing association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix, the method includes: performing association coding on the joint angle time sequence input vector and the moment time sequence input vector by using a first association coding formula to obtain the running state association matrix; wherein, the first association coding formula is:
M1 = V1^T × V2

wherein V1 denotes the joint angle time sequence input vector, V1^T denotes the transposed vector of the joint angle time sequence input vector, V2 denotes the moment time sequence input vector, M1 denotes the operation state association matrix, and × denotes vector multiplication; and performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector by using a second association coding formula to obtain the physiological state association matrix; wherein, the second association coding formula is:

M2 = V3^T × V4

wherein V3 denotes the heart rate time sequence input vector, V3^T denotes the transposed vector of the heart rate time sequence input vector, V4 denotes the blood pressure time sequence input vector, M2 denotes the physiological state association matrix, and × denotes vector multiplication.
In a specific example, in the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, passing the operation state association matrix through the first convolutional neural network model using a bidirectional attention mechanism to obtain the operation state feature matrix includes: pooling the operation state association matrix along the horizontal direction and the vertical direction respectively to obtain a first pooling vector and a second pooling vector; performing association coding on the first pooling vector and the second pooling vector to obtain an operation state bidirectional association matrix; inputting the operation state bidirectional association matrix into a Sigmoid activation function to obtain an operation state attention matrix; respectively expanding the operation state association matrix and the operation state attention matrix into feature vectors to obtain an operation state association vector and an operation state attention vector; fusing the operation state association vector and the operation state attention vector to obtain an operation state fusion association vector; and carrying out dimension reconstruction on the operation state fusion association vector to obtain the operation state feature matrix.
In a specific example, in the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, fusing the operation state association vector and the operation state attention vector to obtain an operation state fusion association vector includes: fusing the running state association vector and the running state attention vector by adopting a class converter space migration displacement fusion mode according to the following first fusion formula to obtain the running state fusion association vector; wherein, the first fusion formula is:
(The formula is presented as an image and is not reproduced in the text.) Its symbols denote, respectively: the operation state association vector; the operation state attention vector; the distance matrix between the operation state association vector and the operation state attention vector; the Euclidean distance between the operation state association vector and the operation state attention vector; the i-th feature value of the operation state association vector; the j-th feature value of the operation state attention vector; a mask threshold hyperparameter; position-by-position addition, subtraction and multiplication of feature vectors; matrix multiplication; a function named only in the image; and the operation state fusion association vector. The vectors are all row vectors.
In a specific example, in the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, passing the physiological state association matrix through the second convolutional neural network model using a bidirectional attention mechanism to obtain the physiological state feature matrix includes: pooling the physiological state association matrix along the horizontal direction and the vertical direction respectively to obtain a third pooling vector and a fourth pooling vector; performing association coding on the third pooling vector and the fourth pooling vector to obtain a physiological state bidirectional association matrix; inputting the physiological state bidirectional association matrix into a Sigmoid activation function to obtain a physiological state attention matrix; respectively expanding the physiological state association matrix and the physiological state attention matrix into feature vectors to obtain a physiological state association vector and a physiological state attention vector; fusing the physiological state association vector and the physiological state attention vector to obtain a physiological state fusion association vector; and carrying out dimension reconstruction on the physiological state fusion association vector to obtain the physiological state feature matrix.
In a specific example, in the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, fusing the physiological state association vector and the physiological state attention vector to obtain a physiological state fusion association vector includes: fusing the physiological state association vector and the physiological state attention vector by adopting a class converter space migration displacement fusion mode according to the following second fusion formula to obtain the physiological state fusion association vector; wherein, the second fusion formula is:
(The formula is presented as an image and is not reproduced in the text.) Its symbols denote, respectively: the physiological state association vector; the physiological state attention vector; the distance matrix between the physiological state association vector and the physiological state attention vector; the Euclidean distance between the physiological state association vector and the physiological state attention vector; the i-th feature value of the physiological state association vector; the j-th feature value of the physiological state attention vector; a mask threshold hyperparameter; position-by-position addition, subtraction and multiplication of feature vectors; matrix multiplication; a function named only in the image; and the physiological state fusion association vector. The vectors are all row vectors.
In a specific example, in the rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot, passing the classification feature matrix through a classifier to obtain a classification result, wherein the classification result is used to represent a label value of the rehabilitation state, includes: expanding the classification feature matrix into a classification feature vector according to a row vector or a column vector; performing full-connection encoding on the classification feature vector by using a fully connected layer of the classifier to obtain an encoded classification feature vector; and inputting the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described rehabilitation training method based on the cloud platform and the lower limb rehabilitation robot have been described in detail in the above description of the rehabilitation training system 100 based on the cloud platform and the lower limb rehabilitation robot with reference to fig. 1 to 6, and thus, repetitive descriptions thereof will be omitted.
According to another aspect of the present application there is also provided a non-volatile computer readable storage medium having stored thereon computer readable instructions which when executed by a computer can perform a method as described above.
Program portions of the technology may be considered to be "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied in or carried on a computer readable medium. A tangible, persistent storage medium may include any memory or storage used by a computer, processor or similar device or related module, such as various semiconductor memories, tape drives, disk drives or the like capable of providing storage functionality for software.
The application uses specific words to describe embodiments of the application. Reference to "a first/second embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the application are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention as defined in the following claims. It is to be understood that the foregoing is illustrative of the present invention and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.

Claims (7)

1. Rehabilitation training system based on cloud platform and low limbs rehabilitation robot, characterized in that includes:
the data acquisition module is used for acquiring joint angle values, moment values, heart rate values and blood pressure values at a plurality of preset time points in a preset time period acquired by a sensor group of the lower limb rehabilitation robot;
the data transmission module is used for uploading the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points to the cloud platform;
the data parameter time sequence arrangement module is used for arranging the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points into joint angle time sequence input vectors, moment time sequence input vectors, heart rate time sequence input vectors and blood pressure time sequence input vectors according to time dimensions respectively;
the association coding module is used for carrying out association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and carrying out association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix;
the running state bidirectional attention feature extraction module is used for obtaining the running state feature matrix through a first convolution neural network model using a bidirectional attention mechanism by the running state association matrix;
The physiological state bidirectional attention feature extraction module is used for obtaining a physiological state feature matrix through a second convolution neural network model using a bidirectional attention mechanism by the physiological state association matrix;
the feature fusion module is used for fusing the running state feature matrix and the physiological state feature matrix to obtain a classification feature matrix;
the recovery state evaluation module is used for passing the classification feature matrix through a classifier to obtain a classification result, wherein the classification result is used for representing a label value of the recovery state;
the running state bidirectional attention characteristic extraction module comprises:
the first bidirectional pooling unit is used for pooling the operation state incidence matrix along the horizontal direction and the vertical direction respectively to obtain a first pooling vector and a second pooling vector;
the first pooling association coding unit is used for carrying out association coding on the first pooling vector and the second pooling vector to obtain a running state bidirectional association matrix;
the first activation unit is used for inputting the running state bidirectional association matrix into a Sigmoid activation function to obtain a running state attention matrix;
the first matrix expansion unit is used for expanding the running state association matrix and the running state attention moment matrix into feature vectors respectively to obtain a running state association vector and a running state attention vector;
The first optimization feature fusion unit is used for fusing the operation state association vector and the operation state attention vector to obtain an operation state fusion association vector;
the first dimension reconstruction unit is used for carrying out dimension reconstruction on the operation state fusion association vector so as to obtain the operation state feature matrix;
the first optimization feature fusion unit is used for:
fusing the running state association vector and the running state attention vector by adopting a class converter space migration displacement fusion mode according to the following first fusion formula to obtain the running state fusion association vector;
wherein, the first fusion formula is:
(The formula is presented as an image and is not reproduced in the text.) Its symbols denote, respectively: the operation state association vector; the operation state attention vector; the distance matrix between the operation state association vector and the operation state attention vector; the Euclidean distance between the operation state association vector and the operation state attention vector; the i-th feature value of the operation state association vector; the j-th feature value of the operation state attention vector; a mask threshold hyperparameter; position-by-position addition, subtraction and multiplication of feature vectors; matrix multiplication; a function named only in the image; and the operation state fusion association vector. The vectors are all row vectors.
2. The rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot according to claim 1, wherein the association coding module comprises:
the first association coding unit is used for carrying out association coding on the joint angle time sequence input vector and the moment time sequence input vector by using a first association coding formula to obtain the running state association matrix;
wherein, the first association coding formula is:
M1 = V1^T × V2

wherein V1 denotes the joint angle time sequence input vector, V1^T denotes the transposed vector of the joint angle time sequence input vector, V2 denotes the moment time sequence input vector, M1 denotes the operation state association matrix, and × denotes vector multiplication;
the second association coding unit is used for carrying out association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector by using a second association coding formula so as to obtain the physiological state association matrix;
wherein, the second association coding formula is:
M2 = V3^T × V4

wherein V3 denotes the heart rate time sequence input vector, V3^T denotes the transposed vector of the heart rate time sequence input vector, V4 denotes the blood pressure time sequence input vector, M2 denotes the physiological state association matrix, and × denotes vector multiplication.
3. The rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot according to claim 2, wherein the physiological state bidirectional attention feature extraction module comprises:
the second bidirectional pooling unit is used for pooling the physiological state incidence matrix along the horizontal direction and the vertical direction respectively to obtain a third pooling vector and a fourth pooling vector;
the second pooling association coding unit is used for carrying out association coding on the first pooling vector and the second pooling vector to obtain a physiological state bidirectional association matrix;
the second activation unit is used for inputting the physiological state bidirectional correlation matrix into a Sigmoid activation function to obtain a physiological state attention matrix;
the second matrix expansion unit is used for expanding the physiological state association matrix and the physiological state attention moment matrix into feature vectors respectively to obtain a physiological state association vector and a physiological state attention vector;
The second optimization feature fusion unit is used for fusing the physiological state association vector and the physiological state attention vector to obtain a physiological state fusion association vector;
and a second dimension reconstruction unit, configured to perform dimension reconstruction on the physiological state fusion correlation vector to obtain the physiological state feature matrix.
4. The rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot according to claim 3, wherein the second optimization feature fusion unit is configured to:
fusing the physiological state association vector and the physiological state attention vector by adopting a class converter space migration displacement fusion mode according to the following second fusion formula to obtain the physiological state fusion association vector;
wherein, the second fusion formula is:
(The formula is presented as an image and is not reproduced in the text.) Its symbols denote, respectively: the physiological state association vector; the physiological state attention vector; the distance matrix between the physiological state association vector and the physiological state attention vector; the Euclidean distance between the physiological state association vector and the physiological state attention vector; the i-th feature value of the physiological state association vector; the j-th feature value of the physiological state attention vector; a mask threshold hyperparameter; position-by-position addition, subtraction and multiplication of feature vectors; matrix multiplication; a function named only in the image; and the physiological state fusion association vector. The vectors are all row vectors.
5. The rehabilitation training system based on a cloud platform and a lower limb rehabilitation robot of claim 4, wherein the rehabilitation state assessment module comprises:
the unfolding unit is used for unfolding the classification characteristic matrix into a classification characteristic vector according to a row vector or a column vector;
the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a full-connection layer of the classifier so as to obtain coded classification characteristic vectors;
and the classification unit is used for inputting the coding classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
6. A rehabilitation training method based on a cloud platform and a lower limb rehabilitation robot is characterized by comprising the following steps:
acquiring joint angle values, moment values, heart rate values and blood pressure values of a plurality of preset time points in a preset time period acquired by a sensor group of the lower limb rehabilitation robot;
Uploading the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points to a cloud platform;
in the cloud platform, the joint angle values, the moment values, the heart rate values and the blood pressure values of the plurality of preset time points are respectively arranged into joint angle time sequence input vectors, moment time sequence input vectors, heart rate time sequence input vectors and blood pressure time sequence input vectors according to time dimensions;
performing association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix;
the operation state association matrix is processed through a first convolution neural network model using a bidirectional attention mechanism to obtain an operation state feature matrix;
the physiological state association matrix is subjected to a second convolution neural network model using a bidirectional attention mechanism to obtain a physiological state feature matrix;
fusing the running state feature matrix and the physiological state feature matrix to obtain a classification feature matrix;
the classification feature matrix passes through a classifier to obtain a classification result, and the classification result is used for representing a label value of a rehabilitation state;
The step of obtaining the running state characteristic matrix by using the first convolution neural network model of the bidirectional attention mechanism by the running state associated matrix comprises the following steps:
pooling the operation state association matrix along the horizontal direction and the vertical direction respectively to obtain a first pooling vector and a second pooling vector;
performing association coding on the first pooling vector and the second pooling vector to obtain a running state bidirectional association matrix;
inputting the running state bidirectional association matrix into a Sigmoid activation function to obtain a running state attention matrix;
respectively expanding the operation state association matrix and the operation state attention moment matrix into feature vectors to obtain an operation state association vector and an operation state attention vector;
fusing the operation state association vector and the operation state attention vector to obtain an operation state fusion association vector;
performing dimension reconstruction on the operation state fusion association vector to obtain the operation state feature matrix;
fusing the operational state correlation vector and the operational state attention vector to obtain an operational state fused correlation vector, comprising:
Fusing the running state association vector and the running state attention vector by adopting a class converter space migration displacement fusion mode according to the following first fusion formula to obtain the running state fusion association vector;
wherein, the first fusion formula is:
(The formula is presented as an image and is not reproduced in the text.) Its symbols denote, respectively: the operation state association vector; the operation state attention vector; the distance matrix between the operation state association vector and the operation state attention vector; the Euclidean distance between the operation state association vector and the operation state attention vector; the i-th feature value of the operation state association vector; the j-th feature value of the operation state attention vector; a mask threshold hyperparameter; position-by-position addition, subtraction and multiplication of feature vectors; matrix multiplication; a function named only in the image; and the operation state fusion association vector. The vectors are all row vectors.
7. The rehabilitation training method based on a cloud platform and a lower limb rehabilitation robot according to claim 6, wherein performing association coding on the joint angle time sequence input vector and the moment time sequence input vector to obtain an operation state association matrix, and performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector to obtain a physiological state association matrix, comprises:
Performing association coding on the joint angle time sequence input vector and the moment time sequence input vector by using a first association coding formula to obtain the running state association matrix;
wherein, the first association coding formula is:
M1 = V1^T × V2

wherein V1 denotes the joint angle time sequence input vector, V1^T denotes the transposed vector of the joint angle time sequence input vector, V2 denotes the moment time sequence input vector, M1 denotes the operation state association matrix, and × denotes vector multiplication;
performing association coding on the heart rate time sequence input vector and the blood pressure time sequence input vector by using a second association coding formula to obtain the physiological state association matrix;
wherein, the second association coding formula is:

$$M_2 = X_3^{\top} \otimes X_4$$

wherein $X_3$ represents the heart rate time sequence input vector, $X_3^{\top}$ represents the transposed vector of the heart rate time sequence input vector, $X_4$ represents the blood pressure time sequence input vector, $M_2$ represents the physiological state association matrix, and $\otimes$ represents vector multiplication.
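Read as an outer product of the transposed first time-sequence vector with the second, the association coding above can be sketched in a few lines of Python (NumPy). The variable names and the sample sensor values below are purely illustrative assumptions, not data from the patent.

```python
import numpy as np

def association_encode(x1, x2):
    """Association coding of two time-sequence input row vectors: multiply the
    transpose of the first vector (a column) by the second (a row), giving an
    association matrix of shape (len(x1), len(x2))."""
    return np.outer(x1, x2)

# Joint angle and moment time-sequence vectors -> operation state association matrix.
joint_angle = np.array([0.52, 0.55, 0.61, 0.58])         # illustrative joint angles (rad)
moment = np.array([12.0, 12.4, 13.1, 12.7, 12.9])        # illustrative joint moments (N*m)
operation_assoc = association_encode(joint_angle, moment)  # shape (4, 5)

# Heart rate and blood pressure time-sequence vectors -> physiological state association matrix.
heart_rate = np.array([78.0, 80.0, 83.0, 81.0])          # illustrative heart rate (bpm)
blood_pressure = np.array([118.0, 120.0, 123.0, 121.0])  # illustrative systolic pressure (mmHg)
physiological_assoc = association_encode(heart_rate, blood_pressure)  # shape (4, 4)
```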
CN202310712324.7A 2023-06-16 2023-06-16 Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot Active CN116458852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310712324.7A CN116458852B (en) 2023-06-16 2023-06-16 Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310712324.7A CN116458852B (en) 2023-06-16 2023-06-16 Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot

Publications (2)

Publication Number Publication Date
CN116458852A CN116458852A (en) 2023-07-21
CN116458852B true CN116458852B (en) 2023-09-01

Family

ID=87177399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310712324.7A Active CN116458852B (en) 2023-06-16 2023-06-16 Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot

Country Status (1)

Country Link
CN (1) CN116458852B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034123B (en) * 2023-08-28 2024-05-07 定州市云领域体育用品有限公司 Fault monitoring system and method for fitness equipment
CN117254593B (en) * 2023-09-25 2024-05-03 安徽南瑞继远电网技术有限公司 Cloud-edge-collaboration-based intelligent management and control platform and method for power grid inspection operation
CN117556220B (en) * 2024-01-09 2024-03-22 吉林大学 Intelligent auxiliary system and method for rehabilitation nursing


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140343460A1 (en) * 2013-05-15 2014-11-20 Ut-Battelle, Llc Mobile gait force and motion analysis system
JP7037366B2 (en) * 2015-05-27 2022-03-16 ジョージア テック リサーチ コーポレイション Wearable technology for joint health assessment
US10854104B2 (en) * 2015-08-28 2020-12-01 Icuemotion Llc System for movement skill analysis and skill augmentation and cueing
EP3986266A4 (en) * 2019-06-21 2023-10-04 Rehabilitation Institute of Chicago D/b/a Shirley Ryan Abilitylab Wearable joint tracking device with muscle activity and methods thereof
CN115661713A (en) * 2022-11-01 2023-01-31 华南农业大学 Suckling piglet counting method based on self-attention spatiotemporal feature fusion

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021184619A1 (en) * 2020-03-19 2021-09-23 南京未艾信息科技有限公司 Human body motion attitude identification and evaluation method and system therefor
WO2021258333A1 (en) * 2020-06-24 2021-12-30 中国科学院深圳先进技术研究院 Gait abnormality early identification and risk early-warning method and apparatus
WO2022036777A1 (en) * 2020-08-21 2022-02-24 暨南大学 Method and device for intelligent estimation of human body movement posture based on convolutional neural network
CN112022619A (en) * 2020-09-07 2020-12-04 西北工业大学 Multi-mode information fusion sensing system of upper limb rehabilitation robot
WO2022257187A1 (en) * 2021-06-11 2022-12-15 华中师范大学 Non-contact fatigue detection method and system
CN113647939A (en) * 2021-08-26 2021-11-16 复旦大学 Artificial intelligence rehabilitation evaluation and training system for spinal degenerative diseases
CN114366556A (en) * 2021-12-31 2022-04-19 华南理工大学 Multi-mode training control system and method for lower limb rehabilitation
CN114366559A (en) * 2021-12-31 2022-04-19 华南理工大学 Multi-mode sensing system for lower limb rehabilitation robot
CN114399818A (en) * 2022-01-05 2022-04-26 广东电网有限责任公司 Multi-mode face emotion recognition method and device
CN115512162A (en) * 2022-10-08 2022-12-23 中国石油大学(华东) Terrain classification method based on attention twin network and multi-mode fusion features
CN115624321A (en) * 2022-11-08 2023-01-20 深圳市鑫一代科技有限公司 Desk type health monitor
CN115644823A (en) * 2022-12-12 2023-01-31 中国科学院苏州生物医学工程技术研究所 Dynamic prediction and individualized intervention method and system for rehabilitation effect
CN115830718A (en) * 2023-02-14 2023-03-21 福建中医药大学 Data processing system for predicting rehabilitation training effect based on gait recognition

Also Published As

Publication number Publication date
CN116458852A (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN116458852B (en) Rehabilitation training system and method based on cloud platform and lower limb rehabilitation robot
Lee et al. Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks
US11577388B2 (en) Automatic robot perception programming by imitation learning
Abraham et al. Real-time translation of Indian sign language using LSTM
Kwiatkowski et al. Grasp stability assessment through the fusion of proprioception and tactile signals using convolutional neural networks
Ito et al. Efficient multitask learning with an embodied predictive model for door opening and entry with whole-body control
WO2018169708A1 (en) Learning efficient object detection models with knowledge distillation
US20160239000A1 (en) TS-DIST: Learning Adaptive Distance Metric in Time Series Sets
CN111204476A (en) Vision-touch fusion fine operation method based on reinforcement learning
Teulière et al. Self-calibrating smooth pursuit through active efficient coding
Yin et al. Associate latent encodings in learning from demonstrations
CN116929815A (en) Equipment working state monitoring system and method based on Internet of things
Gupta et al. Digital twin techniques in recognition of human action using the fusion of convolutional neural network
KR20200080419A (en) Hand gesture recognition method using artificial neural network and device thereof
Adam et al. Multiple faults diagnosis for an industrial robot fuse quality test bench using deep-learning
Pérez-Dattari et al. Interactive learning of temporal features for control: Shaping policies and state representations from human feedback
CN114898219A (en) SVM-based manipulator touch data representation and identification method
Khaertdinov et al. Temporal feature alignment in contrastive self-supervised learning for human activity recognition
Male et al. Multimodal sensor-based human-robot collaboration in assembly tasks
Thach et al. Deformernet: A deep learning approach to 3d deformable object manipulation
Eze et al. Learning by Watching: A Review of Video-based Learning Approaches for Robot Manipulation
Huynh et al. Maneuverable autonomy of a six-legged walking robot: design and implementation using deep neural networks and hexapod locomotion
Guo et al. Autoencoding a Soft Touch to Learn Grasping from On‐Land to Underwater
Wang et al. Alignment Method of Combined Perception for Peg‐in‐Hole Assembly with Deep Reinforcement Learning
Kang et al. Manual assembly actions segmentation system using temporal-spatial-contact features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant