CN113241150A - Rehabilitation training evaluation method and system in mixed reality environment - Google Patents

Rehabilitation training evaluation method and system in mixed reality environment

Info

Publication number
CN113241150A
Authority
CN
China
Prior art keywords
evaluation
user
robot
module
limb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110623610.7A
Other languages
Chinese (zh)
Inventor
陈超
丁乐
胥佳艳
祁俊龙
李东华
霍冠宇
张泽宝
张段隆昊
刘佳欣
王诺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Institute of Science and Technology
Original Assignee
North China Institute of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Institute of Science and Technology filed Critical North China Institute of Science and Technology
Priority to CN202110623610.7A priority Critical patent/CN113241150A/en
Publication of CN113241150A publication Critical patent/CN113241150A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance, relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 - Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 - Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 - Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622 - Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63B - APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 - Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 - Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 - Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622 - Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • A63B2071/0638 - Displaying moving images of recorded environment, e.g. virtual environment

Abstract

The invention belongs to the technical field of mixed reality, and particularly relates to a rehabilitation training evaluation method and system in a mixed reality environment, wherein preset evaluation content is obtained; the preset evaluation content is converted in real time into a mixed reality scene and a virtual motion guidance holographic body animation; the virtual motion guidance holographic body animation is displayed so that the current user imitates its motion; the current user's limb actions are acquired; the current user's limb action data are analyzed with a deep learning algorithm to obtain a limb action analysis result; a limb evaluation result for the current user is obtained from the limb action instruction and the limb action analysis result; and preset training content is pushed according to the evaluation result and new evaluation content is generated. The invention can promote the rehabilitation process and greatly improve rehabilitation efficiency. By actively participating in the closed-loop cycle of training and evaluation, the user can obtain a good rehabilitation effect.

Description

Rehabilitation training evaluation method and system in mixed reality environment
Technical Field
The invention relates to the technical field of mixed reality, in particular to a rehabilitation training evaluation method and system in a mixed reality environment.
Background
Rehabilitation has long been an important component of modern medicine. Drawing on the development experience of rehabilitation therapy abroad, together with the continuous improvement and opening-up of domestic policy and a steady stream of new rehabilitation models and technologies, rehabilitation therapy now faces an important development opportunity. Most traditional limb rehabilitation evaluations have low accuracy. An evaluation must judge whether the patient's training content has achieved its purpose, whether the evaluation process is smooth, and whether the evaluation follows standard principles, and the accuracy of rehabilitation data is of great significance to both patient and physician; however, owing to the subjective factors of the rehabilitation physician, the ambiguity of the evaluation model, and other influences, low evaluation accuracy and an ambiguous training process often occur.
Although some teams have been developing more intelligent systems in this regard, several problems remain. For example, some rehabilitation systems have low functional extensibility and can only provide assisted rehabilitation for a single body part; some apply mixed reality technology alone, often overlooking immature scene-construction algorithms and their influence on the patient's psychological state; still others lack an evaluation module, or their evaluation module is limited to periodic evaluation, which is unintuitive and ineffective.
In the related art, the application of the Fugl-Meyer Assessment (FMA) is clearly limited at present. The FMA, a method of assessing sensorimotor impairment in stroke patients, is now used for clinical assessment of motor function. It is sensitive to improvements in the patient's functional state, convenient for statistical processing in scientific research, and reflects the patient's condition comprehensively, making it a highly effective tool for clinical evaluation. However, the traditional FMA test lacks intuitiveness, requires a physician's guidance and assistance, makes clinical evaluation time-consuming and laborious, places high demands on therapists, requires multiple devices, and involves complicated measurement content, all of which greatly limit its clinical use.
In conclusion, existing intelligent rehabilitation schemes suffer from missing evaluation modules, poor rehabilitation effects, low interest, and the poor effectiveness of a single training type.
Disclosure of Invention
In view of this, the present invention aims to overcome the defects of the prior art and provide a solution for real-time detection and evaluation in a mixed reality environment, so that factors such as venue and equipment do not delay the patient's optimal rehabilitation window. The invention allows limb evaluation and rehabilitation training to start at any time and place, and lets a user be evaluated and trained alone without a physician's accompaniment, thereby solving problems such as difficult appointments and patients' psychological resistance.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for rehabilitation training assessment in a mixed reality environment, comprising:
acquiring preset evaluation content;
converting the preset evaluation content into a mixed reality scene and a virtual motion guidance holographic body animation in real time;
displaying the virtual motion guidance holographic body animation so that the current user imitates the motion of the virtual motion guidance holographic body animation;
acquiring the current limb action of a user;
analyzing the current user limb action data by using a deep learning algorithm to obtain a current user limb action analysis result;
obtaining a current user limb evaluation result according to the limb action command and the current user limb action analysis result;
and pushing preset training contents according to the evaluation result and generating new evaluation contents.
Further, acquiring the current user's limb actions includes:
the current user obtaining limb action guidance from the virtual motion guidance holographic body animation;
and the current user making the corresponding action according to the limb action instruction, or controlling the robot to assist in completing the corresponding action.
Further, the current user making the corresponding action according to the limb action instruction, or controlling the robot to assist in completing the corresponding action, comprises: the robot assisting in picking up auxiliary articles, the robot advancing automatically, the robot avoiding obstacles automatically, and the robot assisting in completing capture actions.
Further, the robot advancing automatically comprises:
discretizing the robot's action path to generate a series of ordered waypoints;
acquiring the robot's current position and orientation, calculating the robot's pivot rotation angle and forward/backward distance from the relationship between the current position and orientation and those of the waypoint, and sending a motion instruction to the robot;
the robot rotating and then walking according to the motion instruction, and feeding back a successful-execution instruction after completing it;
acquiring the robot's current position and orientation again, and judging whether the distance between the robot's current position and the waypoint position is smaller than a preset threshold;
and if it is not smaller, continuing to calculate the robot's pivot rotation angle and forward/backward distance and sending a motion instruction to the robot.
Further, the analyzing the current user limb motion data by using the deep learning algorithm includes:
acquiring a user limb action information acquisition standard;
obtaining a complete evaluation model according to the user limb action information acquisition standard and standard recorded sample actions;
and obtaining the limb evaluation analysis of the user according to the evaluation model.
Furthermore, the user limb action information acquisition standard is established according to the FMA standard.
Further, obtaining a complete evaluation model according to the user limb action information acquisition standard and standard recorded sample actions comprises:
acquiring coordinate information of the joints to be evaluated with a human body recognition device and, taking the device's coordinate axes as the standard, storing it as sample matrix data;
and building a neural network evaluation model from the sample matrix data using TensorFlow, feeding the acquired groups of joint sample matrix data into the model, and training the model through forward propagation to obtain a complete evaluation model.
Further, the obtaining of the evaluation analysis of the user limb according to the evaluation model includes:
acquiring coordinate information of the joints to be evaluated with a human body recognition device and, taking the device's coordinate axes as the standard, storing it as usage matrix data;
and feeding the usage matrix data into the evaluation model to obtain a limb evaluation analysis result.
In a second aspect, a rehabilitation training assessment system in a mixed reality environment includes:
the first acquisition module is used for acquiring preset evaluation content;
the mixed reality module is used for converting the preset evaluation content into a mixed reality scene and a virtual motion guidance holographic body animation in real time;
the display module is used for displaying the virtual motion guidance holographic body animation so that the current user imitates the motion of the virtual motion guidance holographic body animation;
the second acquisition module is used for acquiring the current limb actions of the user;
the limb action analysis module is used for analyzing the current user's limb action data with a deep learning algorithm to obtain the current user's limb action analysis result;
the limb evaluation module is used for obtaining the current user's limb evaluation result according to the limb action instruction and the current user's limb action analysis result;
and the training content pushing module is used for pushing preset training content according to the evaluation result and generating new evaluation content.
Further, still include:
the human body identification module is used for identifying physical movement information of the limbs of the user by using human body identification equipment and submitting the information to the data evaluation module;
the human-computer interaction module is used for transmitting user and system instructions and realizing human-computer interaction;
the robot module is used for acquiring positioning information, planning path points by using an algorithm, enabling the robot to advance in a segmented mode and controlling the robot to make auxiliary actions;
the positioning module is used for refreshing the position information of the user, the robot and the auxiliary article to the server;
the man-machine interaction module comprises:
the voice recognition unit is used for recognizing the voice information of the user and extracting a user control instruction;
the gesture recognition unit is used for recognizing gesture information of the user and extracting a user control instruction;
and the voice interaction unit is used for human-machine language interaction.
Further, the positioning module comprises: a positioning base station, a first positioning module, a second positioning module and a third positioning module;
the first positioning module is used for positioning the position information of a user;
the second positioning module is used for positioning the position information of the robot;
the third positioning module is used for positioning the position information of the auxiliary article;
the positioning base station is used for completing the cooperative positioning of the first positioning module, the second positioning module and the third positioning module.
By adopting the above technical scheme, the invention provides a rehabilitation training evaluation method and system in a mixed reality environment: preset evaluation content is obtained; the preset evaluation content is converted in real time into a mixed reality scene and a virtual motion guidance holographic body animation; the user receives the limb action instruction according to the evaluation content; the current user's limb actions are acquired; the current user's limb action data are analyzed with a deep learning algorithm to obtain a limb action analysis result; a limb evaluation result is obtained from the limb action instruction and the limb action analysis result; and preset training content is pushed according to the evaluation result and new evaluation content is generated. According to the theory of human brain neural plasticity, mirror neurons, and continuous passive training, the method can provide a rich training environment, promote the rehabilitation process, and greatly improve rehabilitation efficiency. By using the system regularly for rehabilitation training evaluation, carefully completing the tasks the system pushes, and actively participating in the closed-loop cycle of training and evaluation, the user can obtain a good rehabilitation effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a rehabilitation training assessment method in a mixed reality environment according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method for evaluating rehabilitation training in a mixed reality environment according to an embodiment of the present invention;
FIG. 3 is a block diagram illustrating a rehabilitation training evaluation system in a mixed reality environment, in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart illustrating operation of a rehabilitation training evaluation system in a mixed reality environment, in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of a rehabilitation training evaluation system in a mixed reality environment according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of a mixed reality device in an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a flow chart of a rehabilitation training evaluation system in a mixed reality environment according to another embodiment of the present invention;
fig. 8 is a robot control flow diagram of a rehabilitation training evaluation system in a mixed reality environment according to another embodiment of the present invention.
In the figure:
1. an acquisition module; 2. a mixed reality module; 3. a human-computer interaction module; 4. a positioning module; 5. a human body recognition module; 6. a data evaluation module; 7. a robot module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
Fig. 1 is a schematic flow chart of a rehabilitation training evaluation method in a mixed reality environment according to the present invention.
As shown in fig. 1, a flowchart of a rehabilitation training evaluation method in a mixed reality environment is provided in the present invention,
the method comprises the following steps:
step S11, acquiring preset evaluation content;
step S12, converting the preset evaluation content into a mixed reality scene and a virtual motion guidance holographic body animation in real time;
step S13, displaying the virtual motion guidance holographic body animation so that the current user imitates the motion of the virtual motion guidance holographic body animation;
step S14, acquiring the current limb movement of the user;
step S15, analyzing the current user limb action data by using a deep learning algorithm to obtain the current user limb action analysis result;
step S16, obtaining the estimation result of the limb of the current user according to the limb action command and the analysis result of the limb action of the current user;
and step S17, pushing preset training content according to the evaluation result and generating new evaluation content.
In one embodiment, the mixed reality device acquires preset evaluation content sent by a server and converts it in real time into a mixed reality scene and a virtual motion guidance holographic body animation; the animation is displayed through a display module, and the current user views and imitates its motion. The current user's limb actions are acquired, and the limb action data are analyzed with a deep learning algorithm to obtain the current user's limb action analysis result; a limb evaluation result is obtained from the limb action instruction and the analysis result; and preset training content is pushed according to the evaluation result and new evaluation content is generated.
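For concreteness, the following is a minimal sketch of this evaluation loop. All object and method names here (server, mr_device, recognizer, model and their calls) are hypothetical placeholders, since the invention does not prescribe an API:

```python
# Hypothetical sketch of the S11-S17 evaluation cycle; every interface
# shown is an assumption, not an API defined by the invention.
def evaluation_cycle(server, mr_device, recognizer, model):
    content = server.get_evaluation_content()       # S11: preset evaluation content
    scene = mr_device.build_scene(content)          # S12: MR scene + guidance hologram
    mr_device.display(scene)                        # S13: user imitates the hologram
    motion = recognizer.capture_user_motion()       # S14: acquire limb actions
    analysis = model.analyze(motion)                # S15: deep-learning analysis
    result = server.evaluate(content.instructions,  # S16: evaluation result from
                             analysis)              #      instruction + analysis
    server.push_training(result)                    # S17: push training, new content
    return result
```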
By adopting the above technical scheme, the rehabilitation training evaluation method in the mixed reality environment obtains preset evaluation content; converts it in real time into a mixed reality scene and a virtual motion guidance holographic body animation; displays the animation so that the current user imitates its motion; acquires the current user's limb actions; analyzes the limb action data with a deep learning algorithm to obtain a limb action analysis result; obtains a limb evaluation result from the limb action instruction and the analysis result; and pushes corresponding training content according to the evaluation result and generates new evaluation content. According to the theory of human brain neural plasticity, mirror neurons, and continuous passive training, the method can provide a rich training environment, promote the rehabilitation process, and greatly improve rehabilitation efficiency. By using the system regularly for rehabilitation training evaluation, carefully completing the tasks the system pushes, and actively participating in the closed-loop cycle of training and evaluation, the user can obtain a good rehabilitation effect.
In one embodiment, as shown in fig. 2, the present invention provides a flow chart of a rehabilitation training evaluation method in a mixed reality environment,
wherein, obtaining the current user limb evaluation result comprises:
step S21, acquiring a user limb action information acquisition standard; wherein, the collection standard of the limb action information of the user is established according to the FMA standard.
Step S22, recording sample actions according to the user limb action information acquisition standard to obtain a complete evaluation model;
and step S23, pushing corresponding training content according to the evaluation result and generating new evaluation content.
In some embodiments, obtaining the current user limb movement comprises:
the current user obtains limb action guidance from the virtual motion guidance holographic body animation;
and the current user makes corresponding actions according to the limb action instruction or controls the robot to assist in completing the corresponding actions.
Wherein the current user making the corresponding action according to the limb action guidance, or the current user controlling the robot to assist in completing the corresponding action, includes:
discretizing the robot's action path to generate a series of ordered waypoints;
acquiring the robot's current position and orientation, calculating the robot's pivot rotation angle and forward/backward distance from the relationship between the current position and orientation and those of the waypoint, and sending a motion instruction to the robot;
the robot rotates and then walks according to the motion instruction, and feeds back a successful-execution instruction after completing it;
acquiring the robot's current position and orientation again, and judging whether the distance between the robot's current position and the waypoint position is smaller than a preset threshold;
if it is not smaller, continuing to calculate the robot's pivot rotation angle and forward/backward distance and sending a motion instruction to the robot.
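The segmented advance described above can be sketched as a simple waypoint-following loop. The robot interface (get_pose, rotate, move, wait_for_ack) is assumed for illustration; the description only fixes the rotate-then-walk behavior and the distance-threshold check:

```python
import math

def follow_waypoints(robot, waypoints, threshold=0.1):
    """Sketch of the rotate-then-walk loop; `waypoints` are the ordered
    points produced by discretizing the action path."""
    for wx, wy in waypoints:
        while True:
            x, y, heading = robot.get_pose()      # current position and orientation
            dx, dy = wx - x, wy - y
            if math.hypot(dx, dy) < threshold:    # within preset threshold: done
                break
            turn = math.atan2(dy, dx) - heading   # pivot rotation angle
            robot.rotate(turn)                    # rotate in place first
            robot.move(math.hypot(dx, dy))        # then walk forward/backward
            robot.wait_for_ack()                  # success feedback from the robot
```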
In some embodiments, analyzing the current user limb motion data by using a deep learning algorithm to obtain a current user limb motion analysis result, includes:
acquiring somatosensory recognition data by using the Kinect;
specifically, for example, coordinate information of a joint to be evaluated is acquired by using a Kinect, and the coordinate axis of the Kinect is taken as a standard, and is stored as a 20 × 9 matrix, and is directly stored as txt text. And (4) building a neural network model through tensoflow, analyzing the obtained current user limb actions, and obtaining a current user limb action analysis result.
For example, in one embodiment, a neural network model composed of 5 fully-connected layers is built with TensorFlow, the acquired groups of joint matrix data are fed into the model, and the model is trained through forward propagation. During prediction, a patient's 20 x 9 joint data is recorded through the Kinect and passed into the model, which outputs a prediction result, such as that the elbow joint cannot flex and extend. A forward-propagation network may also be called a feedforward neural network; its goal is to approximate some function f. For example, for a classifier, y = f(x) maps an input x to a class y. The feedforward network defines a mapping y = f(x; θ) and learns the parameter values θ that give the best function approximation.
The Kinect serves two functions. First, in the model-training stage it captures a large data set, on which the model is trained. Second, in the prediction and evaluation stage, the patient performs a specified action, the Kinect acquires joint data during the action, 20 x 9 values are extracted to form a joint matrix, and the trained model uses this data for prediction.
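A sketch of this prediction stage follows, assuming a hypothetical `capture_joint_matrix` helper on the Kinect side and a model saved in Keras format (the storage format is not specified by the invention):

```python
import numpy as np
import tensorflow as tf

def predict_impairment(kinect, model_path="eval_model.h5"):
    """Capture one 20 x 9 joint matrix and classify it with the trained
    evaluation model (sketch; all interfaces are assumptions)."""
    joints = kinect.capture_joint_matrix()          # assumed helper, shape (20, 9)
    assert joints.shape == (20, 9)
    model = tf.keras.models.load_model(model_path)
    probs = model.predict(joints.reshape(1, -1))    # flatten into one sample
    return int(np.argmax(probs))                    # predicted impairment category
```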
Fig. 3 is a general framework diagram of a rehabilitation training evaluation system in a mixed reality environment according to an embodiment of the present application.
Fig. 4 is a flowchart illustrating a method for evaluating rehabilitation training in a mixed reality environment according to an embodiment of the present disclosure.
As shown in fig. 3 and 4, a user enters the system through the HoloLens; the HoloLens exchanges data with the server by means of gesture recognition, voice interaction and a WiFi communication module following the TCP protocol, and transmits user instructions to the server.
The patient is guided by the virtual motion guidance holographic body animation, and user data is acquired with the Kinect and processed into a 20 x 9 matrix that comprehensively expresses the motion, angles and so on of the hand, wrist, elbow, shoulder and upper body during evaluation. Sample matrix data is produced from this, where the sample matrix data can be, but is not limited to, an impairment-category data set; meanwhile, a neural network evaluation model is built with TensorFlow using deep learning.
In one embodiment, the neural network evaluation model is trained on the acquired impairment-category data set to form a complete evaluation model, where the impairment-category data set is the sample matrix data described in the above embodiment; the action the user performs after receiving the signal to start evaluation is acquired by the Kinect, and the evaluation is completed with the evaluation model.
When a patient issues an instruction that the robot should perform assisted pick-up, the positioning base station determines the Tracker's position by means of the Lighthouse's emitted infrared laser, the Unity server obtains the Tracker's positioning information by calling SteamVR, and discrete points along the robot's path are planned with the A* algorithm. The robot receives a forward command over WiFi/socket communication, advances, and grabs the object. While advancing, the robot avoids obstacles automatically; the server constructs the obstacles' sizes from Tracker position information.
All data from a patient's evaluation are uploaded by the Unity server to the cloud WEB end for display.
The software part comprises:
HoloLens scene (client): the UI user interface
Rehabilitation assessment cloud (client): cloud-based playback and viewing of experimental data
Unity (server): limb information collection, limb data calculation, robot advance and grab control, and data transmission.
In the aspect of data acquisition, the prediction object and prediction requirements are considered together: the data acquired by the Kinect are converted from three-dimensional upper-limb joint coordinates into a 20 x 9 matrix that comprehensively represents the motion, angles and other information of the hand, wrist, elbow, shoulder and upper body during evaluation, which facilitates neural network training and prediction.
In the aspect of building the neural network model, TensorFlow and Keras are adopted for the underlying framework. A gradient descent algorithm is used, and the ReLU activation function is selected to introduce non-linearity so that the model predicts the target better. The optimizer is Adam, which is widely used and effective, with the learning rate set to 0.001; testing shows the model can be trained to the desired effect. Considering that the training objects are numeric samples of modest complexity, the network structure uses 5 fully-connected layers, which repeated experiments show gives better prediction results than other structures.
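A minimal Keras sketch of such a network follows. The hidden-layer widths are assumptions; the description fixes only the 5 fully-connected layers, ReLU activations, the Adam optimizer with a learning rate of 0.001, and (per the next paragraph) 7 output classes:

```python
import tensorflow as tf

# Sketch: 5 fully-connected layers over the flattened 20 x 9 joint matrix.
# Layer widths 256/128/64/32 are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20 * 9,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),   # 7 impairment categories
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```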
In terms of model prediction, 660 groups of users' joint data during throwing were collected for the data set, with 7 classification categories. The model's training accuracy reaches over 92%, and its evaluation accuracy also reaches over 88%.
For ease of understanding, the fully-connected model can be viewed as several network layers; designing the neural network means determining this architecture. The neural network architecture of the present invention arranges the layers in a chain, where each layer is a function of the previous one. In this structure, the first layer is given by:
h^{(1)} = g^{(1)}\left( (W^{(1)})^{\top} x + b^{(1)} \right)
the second layer by
h^{(2)} = g^{(2)}\left( (W^{(2)})^{\top} h^{(1)} + b^{(2)} \right)
and so on.
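As a concrete illustration of this chain structure, a minimal NumPy sketch (random weights and made-up layer sizes, purely illustrative) evaluates the first two layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(v, 0.0)              # a common choice for g^(k)

x = rng.normal(size=180)                   # flattened 20 x 9 joint matrix
W1, b1 = rng.normal(size=(180, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 32)), np.zeros(32)

h1 = relu(W1.T @ x + b1)                   # h^(1) = g^(1)((W^(1))^T x + b^(1))
h2 = relu(W2.T @ h1 + b2)                  # h^(2) = g^(2)((W^(2))^T h^(1) + b^(2))
```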
Effective data processing can also be understood as follows: the joint information acquired by the Kinect is converted into a numeric text matrix and then fed into the neural network model for prediction.
The robot divides the ground into grid points, i.e. nodes, using the A* algorithm, traversing the nodes and calculating their weights to build the candidate set. The A* algorithm then computes the least-cost path to the target node; while traversing nodes the system bypasses those occupied by obstacles, and the resulting path is the optimal path.
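A standard grid A* sketch consistent with this description follows; the 4-connected grid, unit step cost and Manhattan heuristic are the usual choices, assumed rather than specified by this application:

```python
import heapq

def a_star(start, goal, blocked):
    """Least-cost path on a 4-connected grid; `blocked` is the set of
    nodes occupied by obstacles (e.g. built from Tracker positions)."""
    def h(p):                                  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])] # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path                        # least-cost path found
        if node in seen:
            continue
        seen.add(node)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if nxt in blocked or nxt in seen:  # bypass obstacle-occupied nodes
                continue
            heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None                                # no path to the target node
```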
It can be understood that, from the perspective of doctors and patients, in order to solve the boredom and inconvenience of traditional rehabilitation and ease the limitations of the FMA, an upper-limb rehabilitation training evaluation system based on HoloLens, Kinect, Tracker, a robot, and a rehabilitation cloud is designed, using somatosensory interaction, deep learning, gesture interaction, voice interaction, laser positioning, and other technologies.
In this application, HoloLens 2 equipment serves as the hardware entry point through which the user enters the system. MR technology brings the user into a rehabilitation environment where technology and reality blend naturally; the interaction interface is natural, switching is smooth, the risks and scene limitations of many VR systems are avoided, and the system is easy to extend both horizontally and vertically in application and development. Gesture interaction and voice interaction make operation more convenient: the user can interact directly by hand without special, complex training, highlighting the low threshold and friendliness of the interaction. Gesture recognition makes the experience more flexible, with highly real-time operation. This upper-limb rehabilitation evaluation system based on mixed reality technology can generate feedback on many kinds of patient information during training, make effective use of varied evaluation modes and training content, and add virtual motion guidance holographic animation, increasing interest, turning the patient's training from passive to active, and fully engaging the patient's subjective initiative. According to the theory of human brain neural plasticity, mirror neurons, and continuous passive training, a rich training environment can promote the rehabilitation process, so this project can greatly improve rehabilitation efficiency.
Fig. 5 is a schematic structural diagram of a rehabilitation training evaluation system in a mixed reality environment according to the present invention.
As shown in fig. 5, the rehabilitation training evaluation system in a mixed reality environment according to the present invention includes:
the acquisition module 1 is used for acquiring preset evaluation content and preset training content;
the mixed reality module 2 is used for converting the preset evaluation content and the preset training content into a mixed reality scene and a virtual motion guidance holographic body animation in real time;
the human-computer interaction module 3 is used for transmitting a user instruction;
the positioning module 4 is used for refreshing the position information of the user, the robot and the auxiliary article to the server;
the human body recognition module 5 is used for recognizing physical information of the user limb movement by using human body recognition equipment and submitting the information to the data evaluation module;
the data evaluation module 6 is used for calculating the acquired data to obtain a rehabilitation effect conclusion;
and the robot module 7 is used for acquiring the positioning information, planning path points by using an algorithm, enabling the robot to advance in a segmented mode, and controlling the robot to make an auxiliary action.
It can be understood that the working principle of the rehabilitation training evaluation method in the mixed reality environment is as follows: the user enters the evaluation system, the system displays the mixed reality interface, and the user operates it by gesture operation or voice command. The system displays the evaluation content and the virtual motion guidance holographic body animation, and the user tries to imitate the animation's motion. Meanwhile, the user's limb action data is processed by the human body action recognition module, which first obtains the limb information with a human body recognition device and then transmits the collected data to the server. The server computes an evaluation result from the acquired data, stores it, and can also transmit the result to the mixed reality module for display to the user. The human body action recognition module acquires data accurately and can evaluate the user's current limb state accurately and intuitively.
When performing action evaluation, the evaluation system sets up throwing and grabbing actions for the user, both of which require robot assistance. After throwing the auxiliary article, the user issues a pick-up command to the robot by gesture operation or voice command.
In some embodiments, the system further comprises a robot positioning module, configured to acquire the robot's position and send it to the server, so that the current user makes the corresponding action according to the limb action guidance or controls the robot to assist in completing the corresponding action.
In some embodiments, the system further comprises an auxiliary article positioning module for acquiring the position of the auxiliary article and sending the position to the server.
The mixed reality module 2 receives preset evaluation content sent by a server;
converts the preset evaluation content in real time into a mixed reality scene and a virtual motion guidance holographic body animation, displays the corresponding mixed reality interface, and sends the user limb action information extracted by the human body recognition module to the server; the server calculates the evaluation result and transmits it to the mixed reality module 2 through the wireless communication module.
In some embodiments, the positioning base station cooperates with the positioning module, the robot positioning module, and the auxiliary article positioning module, so that the server obtains the positioning information of the user, the robot, and the auxiliary article. And the server plans the advancing route of the robot according to the space and the positioning information. The robot advances according to the planned path and acts according to the tasks preset by the server. After the user finishes evaluation, the server displays a preset training task to the user, and the user trains according to the task.
It should be noted that the positioning module, the robot positioning module, and the auxiliary article positioning module each include an infrared sensor array, and the infrared sensor array includes two infrared laser transmitters whose rotating shafts are perpendicular to each other.
In some embodiments, the evaluation action standard content is formulated according to the FMA scale, and the impairment-category classification is derived from it. When making the evaluation model, human body data is first acquired with a human body recognition device as impairment-category samples. The three-dimensional coordinates of the acquired samples are divided into matrices to make an impairment-category data set. A neural network evaluation model is built with TensorFlow using deep learning and trained step by step on the impairment-category data set to obtain a complete neural network evaluation model. When the user is evaluated, the system evaluates according to the complete neural network evaluation model.
The evaluation task may be throwing, grabbing, and the like. The evaluation output may be the user's motion statistics, such as shoulder rotation angle and elbow flexion/extension angle.
The Fugl-Meyer Assessment (FMA) is a method of assessing sensorimotor impairment in stroke patients and is now used for clinical assessment of motor function. It is sensitive to improvements in the patient's functional state, convenient for statistical processing in scientific research, and reflects the patient's condition comprehensively, making it a highly effective tool for clinical evaluation. The FMA is a recognized, accurate evaluation model, but traditional FMA evaluation lacks intuitiveness, requires a physician's guidance and assistance, makes clinical evaluation time-consuming and laborious, places high demands on therapists, requires multiple devices, and involves complicated measurement content, all of which greatly limit its clinical use and clearly restrict its current application.
The invention uses the FMA for action design, fusing multiple measurement items into a single movement action, and the evaluation content was reviewed with professional physicians, ensuring that the project's evaluation content is of professional standard. In this project the patient does not need hands-on guidance from a physician and can perform the prescribed action based on system prompts. The system uses the Microsoft Kinect to acquire human joint points, whose accuracy lays a solid foundation for evaluation training; a deep learning algorithm analyzes the patient's limb movement data and feeds back information such as limb angles and limb deficits.
Meanwhile, the project uses the FMA for single-session evaluation, overcoming the common shortcoming of domestic and foreign intelligent rehabilitation systems of giving only a fuzzy evaluation of the whole process; the evaluation is rapid, with a conclusion drawn quickly after the user completes the specified action; and the result of each single evaluation can be displayed. Compared with traditional FMA evaluation, this system's evaluation is more intuitive.
It can be understood that this application combines the FMA and deep learning for action design and evaluation content design, so the user's rehabilitation status can be judged accurately. Evaluating each single session relieves the user's anxiety during rehabilitation training and improves rehabilitation efficiency. Accurate evaluation results let the physician plan the user's rehabilitation more reasonably and scientifically. In addition, the device adopts MR technology, so the user can perform rehabilitation evaluation training without leaving home.
In some embodiments, a positioning base station is further included.
The positioning base station adopted in the present application is Lighthouse, which has many advantages. First, it requires very little computing power. An optical system must form an image and then distinguish the marker points in it through image processing; the more detailed the imaging, the more image-processing compute is required. An infrared camera is simpler than a monochrome camera, which in turn is simpler than a color camera. Lighthouse uses only timing parameters, involves no image processing, and the position calculation can be done locally on the device.
Specifically, the positioning system formed by the positioning base stations and positioning modules in this application is Lighthouse, which consists of two base stations: each is provided with an infrared LED array and two rotating infrared laser emitters whose axes are mutually perpendicular, rotating at one revolution per 10 ms. The base station operates on a 20 ms cycle: when the cycle starts, the infrared LEDs flash; within the first 10 ms the rotating X-axis laser sweeps the whole space while the Y-axis does not emit; within the next 10 ms the rotating Y-axis laser sweeps the whole space while the X-axis does not emit.
This application adopts laser positioning to give the robot accurate automatic pick-up and automatic obstacle avoidance.
Compared with tracking schemes built around a high-speed camera, Valve's Lighthouse instead mounts many photosensitive sensors on the tracked device, here the mixed reality module 2. After the base station's LEDs flash, the signals are synchronized, and each photosensitive sensor measures the times at which the X-axis and Y-axis lasers respectively reach it. That time is exactly the time at which the X-axis or Y-axis laser has rotated to the particular angle that illuminates the sensor, so the sensor's X-axis and Y-axis angles relative to the base station are known; since the positions of the photosensitive sensors distributed on the positioning device are also known, the position and movement trajectory of the mixed reality module can be calculated from the position differences of the sensors.
Second, the delay is very low. High computing-power requirements imply high delay: large volumes of image data must be transmitted from the camera to the computer and then from the computer to the display, which increases the delay. Lighthouse transmits position data directly to the computer, eliminating the heavy camera-to-computer data transfer step.
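The angle computation implied by this timing scheme can be sketched as follows; the assumption that one 10 ms sweep covers 180 degrees of the space is illustrative, not a figure stated in this application:

```python
def sweep_angle_deg(t_sync_ms, t_hit_ms, sweep_ms=10.0, sweep_deg=180.0):
    """Map the delay between the LED sync flash and the laser hitting a
    photosensitive sensor onto the sensor's angle from the base station."""
    return (t_hit_ms - t_sync_ms) / sweep_ms * sweep_deg

# One 20 ms cycle: X-axis sweep in the first 10 ms, Y-axis in the second.
x_angle = sweep_angle_deg(0.0, 3.2)    # hit 3.2 ms into the X sweep -> 57.6 degrees
y_angle = sweep_angle_deg(10.0, 14.5)  # hit 4.5 ms into the Y sweep -> 81.0 degrees
```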
Patients who need FMA assessment often cannot assess themselves, because a patient using the assessment scale has many assistance needs; an assistance robot can therefore provide a great deal of help and interaction.
Preferably, the robot comprises a mechanical arm for completing assisted pick-up of auxiliary articles or completing capture training with the user using the article.
The robot comprises a control module, and the control module controls the mechanical arm to grab or capture actions through the driving module.
Specifically, when the user performs motion evaluation, the user issues a command to the robot after performing motion according to the motion guidance given by the virtual motion guidance hologram. The robot advances according to the path plan obtained by the server, and captures auxiliary articles according to the preset actions in the server and delivers the auxiliary articles to the user.
This application provides a rehabilitation training evaluation device based on the FMA scale and deep learning, applying robots in rehabilitation medicine to help patients with motor impairment carry out rehabilitation evaluation training, so that users can use the FMA more conveniently.
Preferably, the embodiments in the present application further include: an intelligent terminal;
the intelligent terminal is used for running an APP; the user fills in limb information through an applet to obtain an evaluation task and a rehabilitation task;
and the intelligent terminal is connected with the server.
Traditional rehabilitation data must be recorded by doctors, and for the FMA the scores given by doctors are not intuitive for patients; if the details are too complicated, the need for doctors and patients to trace back the data cannot be met. Meanwhile, good recording and display of these data bear directly on the patient's rehabilitation outcome, and doctors must adjust the patient's rehabilitation plan reasonably according to the data and an applicability analysis of the current plan.
In some embodiments, the server may be an Azure cloud: Azure has good device adaptability, faster back-end display, and rich cloud resources, so the WEB side offers rich functions. An Azure-cloud-based database displays the user's limb data and evaluation results, making it convenient for physician and patient to view the data from both ends, and greatly assisting the patient's future rehabilitation.
In some embodiments, the present application further provides the steps by which the robot performs a pick-up task, comprising:
the server presets a robot picking task;
a user logs in the system, executes an evaluation task and sends an instruction;
the server acquires a user instruction and analyzes the intention of the user;
the server plans a robot advancing route;
the robot follows the server planned route and picks up the auxiliary items for delivery to the user.
Preferably, when the user needs to control the robot to assist in completing the corresponding action, the step of generating the robot forward path plan by the server includes:
the server acquires the positioning of a user, the robot and the auxiliary article;
discretizing the action path of the robot to generate a series of ordered waypoints;
acquiring the robot's current position and orientation, calculating the robot's pivot rotation angle and forward/backward distance from the relationship between the current position and orientation and those of the waypoint, and sending a motion instruction to the robot;
the robot rotates and then walks according to the motion instruction, and feeds back a successful-execution instruction after completing it;
acquiring the robot's current position and orientation again, and judging whether the distance between the robot's current position and the waypoint position is smaller than a preset threshold;
if it is larger than the preset threshold, continuing to calculate the robot's pivot rotation angle and forward/backward distance and sending a motion instruction to the robot;
if not, continuing to walk to the next waypoint;
and if the final waypoint has been reached, executing the preset action.
Specifically, take the robot advancing to pick up the auxiliary article as an example. After the user issues an instruction that the robot should pick up the auxiliary article, the server acquires the positioning information of the user, the robot and the auxiliary article through the first, second and third positioning modules.
First, a path is planned for the robot to advance and grab the auxiliary article: the server calculates the robot's orientation, discretizes the road into waypoints, and selects the next most suitable waypoint as the forward target. The robot receives the forward command and target information and advances to the next waypoint. The server acquires the position information of the robot, the user and the auxiliary article again and judges the distance between the robot and the auxiliary article; if it is greater than the threshold, the path planning step is repeated until the distance is smaller than the threshold. When the robot reaches the auxiliary article's location, the control module drives the mechanical arm through the driving module to grab the article.
After the robot picks up the auxiliary article, the server plans a path for it to deliver the article to the user. The server considers the robot's orientation, divides the road into discrete points, and selects the current next most suitable waypoint as the forward target. The robot receives the forward command and target information and advances to the next waypoint. The server acquires the position information of the robot and the user again and judges the distance between them; if it is greater than the threshold, the path planning step is repeated until the distance is smaller than the threshold. When the robot reaches the user's location, the control module drives the mechanical arm through the driving module to deliver the auxiliary article to the user.
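Putting the two phases together, here is a sketch of the pick-and-deliver task; `plan_path` is assumed to wrap the A* planner above, `follow_waypoints` is the controller sketched earlier, and all other interfaces are placeholders:

```python
def pick_and_deliver(server, robot, item_id, user_id):
    """Phase 1: drive to the auxiliary article and grab it.
    Phase 2: drive to the user and hand the article over."""
    item_pos = server.locate(item_id)                 # third positioning module
    follow_waypoints(robot, server.plan_path(robot.get_pose()[:2], item_pos))
    robot.arm.grab()                                  # arm driven via the driving module
    user_pos = server.locate(user_id)                 # first positioning module
    follow_waypoints(robot, server.plan_path(robot.get_pose()[:2], user_pos))
    robot.arm.release()                               # deliver the article to the user
```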
Preferably, the generating of the user intention according to the gesture instruction comprises:
recognizing that the gesture of the user is in a ready state;
recognizing that the gesture of the user is in a clicking state, and ignoring other gesture postures;
recognizing the gaze fixation content of a user to obtain a gesture click target of the user;
generating a user intention;
preferably, the generating of the user intention according to the voice instruction includes:
acquiring a user voice;
recognizing the voice, returning an executable instruction after recognition, and discarding a non-executable instruction;
generating a user intention;
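Both intent paths above can be sketched as small filters; the tracker and recognizer interfaces, and the instruction whitelist, are assumptions for illustration:

```python
def intent_from_gesture(tracker):
    """Gesture path: accept only the ready -> click sequence; the click
    target comes from the user's gaze fixation."""
    if tracker.current_gesture() != "ready":
        return None                            # not in the ready state
    if not tracker.wait_for_gesture("click"):  # other gesture postures ignored
        return None
    return {"action": "click", "target": tracker.gaze_target()}

def intent_from_voice(recognizer, executable=("pick up", "start", "stop")):
    """Voice path: keep recognized text only if it is an executable
    instruction; discard everything else."""
    text = recognizer.listen()
    return {"action": text} if text in executable else None
```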
in summary, the present invention provides a rehabilitation training evaluation system in a mixed reality environment, wherein a user wears a mixed reality device to enter the system, and obtains a precise limb state through a preset evaluation task and an evaluation standard, so that the user can perform rehabilitation training and evaluation in a convenient and interesting environment, and meanwhile, the system obtains a conclusion for a single evaluation, so that the user can see a small achievement of stage training, get rid of anxiety and improve rehabilitation efficiency. The physician can also make decisions about the patient's current training program based on minor changes in the data.
From the perspective of doctors and patients, in order to solve the boredom and inconvenience of traditional rehabilitation and remove the limitations of the FMA, an upper-limb rehabilitation training evaluation system based on HoloLens, Kinect, Tracker, a robot, the Azure rehabilitation cloud, and Azure Spatial Anchors is designed, using somatosensory interaction, deep learning, gesture interaction, voice interaction, laser positioning, and other technologies. The target users are individuals with limb rehabilitation needs, rehabilitation institutions that need accurate evaluation and want to free up manpower, material and financial resources, and rehabilitation institutions that can lease the equipment.
In some embodiments, as shown in fig. 6, the mixed reality device provided in an embodiment of the present application takes the form of a helmet that can be worn directly on the user's head, making it convenient for the user to move and interact with the robot. When a patient wears the helmet, it can receive the patient's voice through a microphone and recognize it in real time as text to be converted into a corresponding instruction. Voice interaction is essential for bringing the patient into the training system, and it also plays a large role in helping the patient recover faster and more pleasantly.
For ease of understanding, the present application also provides a flow chart for practical use, as shown in fig. 7.
Step S31, presetting an evaluation task and an evaluation standard by the server;
step S32, the user fills in information and obtains a corresponding evaluation task;
step S33, the user logs in to the system, and the system displays the evaluation content according to the server records, together with the action guidance given by the virtual motion guidance hologram;
step S34, the user follows the action guidance given by the hologram animation and imitates the motion of the hologram;
step S35, the human body identification equipment acquires the limb information of the user;
step S36, the positioning device acquires the position information of the user, the robot, and the service article; the positioning information is used to plan the robot's travel route and to control the robot to perform advancing, grasping, and gripping actions;
step S37, the server compares the action guidance given by the virtual motion guidance hologram with the current user's limb actions to obtain the current user limb evaluation result;
and step S38, the server pushes preset training content according to the limb evaluation result and generates new evaluation content.
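Before step S36 is elaborated below, it is worth noting that steps S35 and S37 hinge on an evaluation model that maps joint-coordinate matrices to a limb evaluation result. For illustration only, a minimal TensorFlow sketch of such a model follows; the skeleton shape, the network size, and the randomly generated placeholder data are assumptions of this sketch, not values fixed by this application.

    import numpy as np
    import tensorflow as tf

    NUM_JOINTS, NUM_FRAMES = 25, 30           # an assumed Kinect-style skeleton clip

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(NUM_FRAMES, NUM_JOINTS * 3)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # 0..1 completion score
    ])
    model.compile(optimizer="adam", loss="mse")

    # Training: groups of joint sample matrices paired with reference scores.
    samples = np.random.rand(64, NUM_FRAMES, NUM_JOINTS * 3).astype("float32")
    scores = np.random.rand(64, 1).astype("float32")      # placeholder labels only
    model.fit(samples, scores, epochs=5, verbose=0)

    # Step S37: score the current user's recorded motion against the model.
    current = np.random.rand(1, NUM_FRAMES, NUM_JOINTS * 3).astype("float32")
    print("limb evaluation score:", float(model.predict(current, verbose=0)[0, 0]))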
As shown in fig. 8, step S36, in which the positioning device acquires the position information of the user, the robot, and the service article, and the positioning information is used to plan the robot's travel route and to control its advancing, grasping, and gripping actions, specifically comprises the following steps:
step 361, the positioning base station determines the position of the tracker by means of the emitted infrared laser;
step 362, the Unity server obtains the tracker's own position information through SteamVR;
step 363, the Unity server acquires the robot running instruction sent by the HoloLens;
step 364, the Unity server calculates the robot's running path using the A-star algorithm;
step 365, the robot and the Unity server communicate over WiFi/socket, and the robot obtains the running data and starts to run;
step 366, the robot receives the control commands, advances, and grasps the object for the user.
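For illustration only, step 364 can be realized with a textbook grid-based A-star search such as the Python sketch below. The grid resolution, the 4-connected neighborhood, and the unit step cost are assumptions of this sketch; the application only states that the Unity server computes the running path with the A-star algorithm.

    import heapq

    def astar(grid, start, goal):
        """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) cells."""
        def h(p):                                    # Manhattan-distance heuristic
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_heap = [(h(start), start)]
        came_from, g = {}, {start: 0}
        while open_heap:
            _, cur = heapq.heappop(open_heap)
            if cur == goal:                          # reconstruct the path-point list
                path = [cur]
                while cur in came_from:
                    cur = came_from[cur]
                    path.append(cur)
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                    continue
                if grid[nxt[0]][nxt[1]] == 1:        # obstacle: the robot avoids it
                    continue
                tentative = g[cur] + 1
                if tentative < g.get(nxt, float("inf")):
                    g[nxt] = tentative
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (tentative + h(nxt), nxt))
        return None                                  # no path found

    # Example: plan across a 5x5 room with one wall segment in the middle.
    room = [[0] * 5 for _ in range(5)]
    room[2][1] = room[2][2] = room[2][3] = 1
    print(astar(room, (0, 0), (4, 4)))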
It is understood that the same or similar parts in the above embodiments may be referred to mutually, and for content not described in detail in one embodiment, reference may be made to the same or similar content in other embodiments.
It should be noted that the terms "first", "second", and the like in the description of the present invention are used for descriptive purposes only and should not be construed as indicating or implying relative importance. Further, in the description of the present invention, "a plurality" means at least two unless otherwise specified.
Any process or method description in the flow charts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Alternate implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art to which the present invention pertains.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (11)

1. A rehabilitation training assessment method in a mixed reality environment is characterized by comprising the following steps:
acquiring preset evaluation content;
converting the preset evaluation content into a mixed reality scene and a virtual motion guidance holographic body animation in real time;
displaying the virtual motion guidance holographic body animation so that the current user imitates the motion of the virtual motion guidance holographic body animation;
acquiring the current limb action of a user;
analyzing the current user limb action data by using a deep learning algorithm to obtain a current user limb action analysis result;
obtaining a current user limb evaluation result according to the limb action instruction and the current user limb action analysis result;
and pushing preset training contents according to the evaluation result and generating new evaluation contents.
2. The method of claim 1, wherein said acquiring the current limb action of a user comprises:
the current user obtains a limb action instruction according to the virtual motion guidance holographic body animation;
and the current user makes the corresponding action according to the limb action instruction, or controls the robot to assist in completing the corresponding action.
3. The method of claim 2, wherein the current user making the corresponding action according to the limb action instruction, or controlling the robot to assist in completing the corresponding action, comprises: the robot assists in picking up auxiliary articles, the robot automatically advances, the robot automatically avoids obstacles, and the robot assists in completing gripping actions.
4. The method of claim 3, wherein the robot automatically advances, comprising:
discretizing the action path of the robot to generate a series of ordered path points;
acquiring the current position and orientation of the robot, calculating the in-place rotation angle and the forward/backward distance of the robot according to the relationship between the current position and orientation and the position of the next path point, and sending a motion instruction to the robot;
the robot rotates and then advances according to the motion instruction, and feeds back a success message after the motion instruction is completed;
acquiring the current position and orientation of the robot again, and judging whether the distance between the current position of the robot and the position of the path point is smaller than a preset threshold;
and if the distance is not smaller than the preset threshold, continuing to calculate the in-place rotation angle and the forward/backward distance of the robot and sending a new motion instruction to the robot.
5. The method of claim 1, wherein analyzing the current user limb action data by using a deep learning algorithm comprises:
acquiring a user limb action information collection standard;
obtaining a complete evaluation model according to the user limb action information collection standard and standard recorded sample actions;
and obtaining the user's limb evaluation analysis according to the evaluation model.
6. The method of claim 5, wherein the user limb action information collection standard is established according to the FMA standard.
7. The method of claim 5, wherein obtaining a complete evaluation model according to the user limb action information collection standard and standard recorded sample actions comprises:
acquiring coordinate information of the joints to be evaluated by using the human body identification device, and storing the coordinate information as sample matrix data with the coordinate axes of the human body identification device as the reference;
and building a neural network evaluation model with TensorFlow, feeding the acquired groups of joint sample matrix data into the model, and training the model through forward propagation to obtain a complete evaluation model.
8. The method of claim 5, wherein said obtaining the user's limb evaluation analysis according to the evaluation model comprises:
acquiring coordinate information of the joints to be evaluated by using the human body identification device, and storing the coordinate information as matrix data with the coordinate axes of the human body identification device as the reference;
and feeding the matrix data into the evaluation model to obtain a limb evaluation analysis result.
9. A rehabilitation training evaluation system in a mixed reality environment, comprising:
the first acquisition module is used for acquiring preset evaluation content;
the mixed reality module is used for converting the preset evaluation content into a mixed reality scene and a virtual motion guidance holographic body animation in real time;
the display module is used for displaying the virtual motion guidance holographic body animation so that the current user imitates the motion of the virtual motion guidance holographic body animation;
the second acquisition module is used for acquiring the current limb actions of the user;
the limb action analysis module is used for analyzing the current user's limb action data by using a deep learning algorithm to obtain a current user limb action analysis result;
the limb evaluation module is used for obtaining a current user limb evaluation result according to the limb action instruction and the current user limb action analysis result;
and the training content pushing module is used for pushing preset training content according to the evaluation result and generating new evaluation content.
10. The system of claim 9, further comprising:
the human body identification module is used for identifying physical movement information of the limbs of the user by using human body identification equipment and submitting the information to the data evaluation module;
the human-computer interaction module is used for transmitting user and system instructions and realizing human-computer interaction;
the robot module is used for acquiring positioning information, planning path points with an algorithm, making the robot advance in segments, and controlling the robot to perform auxiliary actions;
and the positioning module is used for refreshing the position information of the user, the robot, and the auxiliary article to the server;
the man-machine interaction module comprises:
the voice recognition unit is used for recognizing the voice information of the user and extracting a user control instruction;
the gesture recognition unit is used for recognizing the gesture information of the user and extracting a user control instruction;
and the voice interaction unit is used for man-machine language interaction.
11. The system of claim 10, wherein the positioning module comprises: a positioning base station, a first positioning module, a second positioning module, and a third positioning module;
the first positioning module is used for positioning the position information of a user;
the second positioning module is used for positioning the position information of the robot;
the third positioning module is used for positioning the position information of the auxiliary article;
the positioning base station is used for completing the cooperative positioning of the first positioning module, the second positioning module and the third positioning module.
CN202110623610.7A 2021-06-04 2021-06-04 Rehabilitation training evaluation method and system in mixed reality environment Pending CN113241150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110623610.7A CN113241150A (en) 2021-06-04 2021-06-04 Rehabilitation training evaluation method and system in mixed reality environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110623610.7A CN113241150A (en) 2021-06-04 2021-06-04 Rehabilitation training evaluation method and system in mixed reality environment

Publications (1)

Publication Number Publication Date
CN113241150A (en) 2021-08-10

Family

ID=77136783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110623610.7A Pending CN113241150A (en) 2021-06-04 2021-06-04 Rehabilitation training evaluation method and system in mixed reality environment

Country Status (1)

Country Link
CN (1) CN113241150A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107349570A (en) * 2017-06-02 2017-11-17 南京邮电大学 Rehabilitation training of upper limbs and appraisal procedure based on Kinect
CN107485844A (en) * 2017-09-27 2017-12-19 广东工业大学 A kind of limb rehabilitation training method, system and embedded device
CN110232963A (en) * 2019-05-06 2019-09-13 中山大学附属第一医院 A kind of upper extremity exercise functional assessment system and method based on stereo display technique
CN110211661A (en) * 2019-06-05 2019-09-06 山东大学 Hand functional training system and data processing method based on mixed reality
CN111124102A (en) * 2019-10-24 2020-05-08 上海市长宁区天山中医医院 Mixed reality holographic head display limb and spine movement rehabilitation system and method
CN111863198A (en) * 2020-08-21 2020-10-30 华北科技学院 Rehabilitation robot interaction system and method based on virtual reality
CN112233771A (en) * 2020-11-04 2021-01-15 无锡蓝软智能医疗科技有限公司 Knee joint rehabilitation training method, storage medium, terminal and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674830A (en) * 2021-08-25 2021-11-19 上海一尺视界数码科技有限公司 Rehabilitation training method, system, terminal and storage medium
CN114187651A (en) * 2021-11-04 2022-03-15 福建中医药大学附属康复医院 Taijiquan training method and system based on mixed reality, equipment and storage medium
CN114469079A (en) * 2022-01-29 2022-05-13 北京中科深智科技有限公司 Body joint measuring method using LightHouse

Similar Documents

Publication Publication Date Title
CN113241150A (en) Rehabilitation training evaluation method and system in mixed reality environment
US20230017367A1 (en) User interface system for movement skill analysis and skill augmentation
CN102567638B (en) A kind of interactive upper limb healing system based on microsensor
CN203149575U (en) Interactive upper limb rehabilitation device based on microsensor
Zhong et al. Environmental context prediction for lower limb prostheses with uncertainty quantification
CN109172066B (en) Intelligent prosthetic hand based on voice control and visual recognition and system and method thereof
CN109758157A (en) Gait rehabilitation training and estimating method and system based on augmented reality
CN108983636A (en) Human-machine intelligence's symbiosis plateform system
He et al. Development of distributed control system for vision-based myoelectric prosthetic hand
US20220351824A1 (en) Systems for dynamic assessment of upper extremity impairments in virtual/augmented reality
Hernandez et al. Machine learning techniques for motion analysis of fatigue from manual material handling operations using 3D motion capture data
CN110456902A (en) It is mobile to control the skeleton pattern in computer system to track user
CN111863198A (en) Rehabilitation robot interaction system and method based on virtual reality
Yi et al. Home interactive elderly care two-way video healthcare system design
CN111134974B (en) Wheelchair robot system based on augmented reality and multi-mode biological signals
CN112494034A (en) Data processing and analyzing system and method based on 3D posture detection and analysis
CN113869090A (en) Fall risk assessment method and device
CN111625098B (en) Intelligent virtual avatar interaction method and device based on multi-channel information fusion
Han A table tennis motion correction system based on human motion feature recognition
Zhong Reliable deep learning for intelligent wearable systems
Lockwood et al. Leveraging submovements for prediction and trajectory planning for human-robot handover
CN110390298B (en) Gait simulation prediction system and simulation prediction method
Wang et al. Research on Multiperson Motion Capture System Combining Target Positioning and Inertial Attitude Sensing Technology
CN110135744A (en) Construction worker's safety behavior is accustomed to evaluation method
Mohamed et al. Automated Upper Limb Motor Functions Assessment System Using One-Class Support Vector Machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination