CN114998983A - Limb rehabilitation method based on augmented reality technology and posture recognition technology - Google Patents

Limb rehabilitation method based on augmented reality technology and posture recognition technology

Info

Publication number
CN114998983A
CN114998983A (application CN202210377129.9A)
Authority
CN
China
Prior art keywords
rehabilitation
limb
augmented reality
limb rehabilitation
information
Prior art date
Legal status
Pending
Application number
CN202210377129.9A
Other languages
Chinese (zh)
Inventor
肖治国
李念峰
鲁光男
李强
王玉英
王春湘
丁天娇
于桦
杨永吉
Current Assignee
Changchun University
Original Assignee
Changchun University
Priority date
Filing date
Publication date
Application filed by Changchun University
Priority to CN202210377129.9A
Publication of CN114998983A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 - Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1116 - Determining posture transitions
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 - Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1126 - Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B 5/1128 - Measuring movement of the entire body or parts thereof using a particular sensing technique using image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality

Abstract

The invention relates to a limb rehabilitation method based on an augmented reality technology and a posture recognition technology, belonging to the technical field of intelligent limb rehabilitation. An image acquisition unit acquires the limb actions of a testee in real time, and the skeleton key point information of the testee is extracted in real time through the posture recognition technology; the skeleton key point information of standard limb rehabilitation actions is likewise extracted, features are computed, and a rehabilitation action feature library is formed. The rehabilitation actions of the testee are then compared in real time through a posture feature comparison algorithm and the comparison result is evaluated, and the evaluation result is sent from the calculation unit to the display unit through the communication unit to complete the information feedback. The invention assists the user in completing rehabilitation training through the posture recognition technology and the augmented reality technology.

Description

Limb rehabilitation method based on augmented reality technology and posture recognition technology
Technical Field
The invention belongs to the technical field of intelligent limb rehabilitation, and particularly relates to a limb rehabilitation method based on an augmented reality technology and a posture recognition technology.
Background
The term "rehabilitation" originates in the medical field. According to the definition of the World Health Organization, rehabilitation refers to the comprehensive use of effective scientific theories, methods and technical means to help people with psychosomatic disorders recover or rebuild, to the greatest possible extent, their ability to participate in society, including mobility, self-care and occupational work. Rehabilitation medicine is an important component of modern medicine's triad of prevention, clinical treatment and rehabilitation, and with continuing research in the field, rehabilitation has become a discipline in its own right. Rehabilitation science uses active and targeted rehabilitation measures to restore a patient's impaired functions: to eliminate or relieve dysfunction, to compensate for and reconstruct functional deficits, to improve the various functions of the body as far as possible, and to prevent, diagnose, evaluate, treat and train for the related dysfunctions. According to statistics, about 35 million people in China need limb rehabilitation every year; the annual hospitalization, labor-protection, medical and other related social welfare and public costs of treatment and rehabilitation are enormous, and there is a huge gap between the supply of rehabilitation equipment and clinical demand. In addition, existing limb rehabilitation equipment is expensive and individuals cannot afford dedicated rehabilitation, the user experience of rehabilitation equipment is poor, patients fail to persist with rehabilitation training, and the rehabilitation effect is unsatisfactory. Driven by this market demand, limb rehabilitation has become a hot spot of modern rehabilitation medical engineering research.
Traditional limb rehabilitation mainly relies on a rehabilitation therapist training the patient hands-on, one-on-one, or on training with rehabilitation medical apparatus, and this has several problems. Firstly, the intensity of manual or instrument-based training is limited and has little deep influence on the plasticity of the central nervous system, so the training effect is often unsatisfactory. Secondly, because of its strong subjectivity, the therapeutic effect of a rehabilitation scheme is difficult to evaluate objectively, which hinders further research into the laws of neural rehabilitation. Thirdly, because many patients need rehabilitation training, manual training often cannot meet the demand in either quantity or quality, so some patients give up rehabilitation training and exercise on their own at home, usually with poor results. In view of these problems of traditional rehabilitation training, scholars worldwide are actively researching innovative theories, methods and means for rehabilitation training. Augmented reality technology has been developed and applied to improve the effect of traditional rehabilitation means, and the integration of somatosensory technology improves the effect of augmented reality in limb exercise rehabilitation; the integration of the two technologies is an important direction for the future development of limb rehabilitation.
Augmented reality is an emerging technical field that combines computer graphics, multimedia technology, artificial intelligence, sensing technology, human-computer interface technology and computer simulation technology. In rehabilitation therapy, its interactivity, immersion and imagination give it great application potential. The existing limb rehabilitation process is monotonous, patients rarely develop interest in it, and the rehabilitation effect is therefore not ideal. Moreover, in addition to physical motor dysfunction, patients often suffer considerable psychological obstacles, and rehabilitation means that address only the physiological disorder do little to resolve the psychological one. Applying virtual reality technology to rehabilitation engineering allows the virtual environment to be combined with treatment: the patient is placed in a virtual environment, which increases the interest of the treatment process and improves the patient's optimism, and physiological treatment can be combined with psychological guidance, using immersive sound, language, text and the like to give the patient diversified psychological prompting and induction, fully mobilizing the patient's mental state and thereby reinforcing the physiological treatment. Motion sensing (somatosensory) technology lets people interact with peripheral devices or environments directly through limb actions, without any complex control equipment, so that they can interact with content as if present in the scene. The technology can be divided into two approaches: human posture information extraction realized by somatosensory sensors, such as Kinect and Leapmotion; and extraction of human posture from an ordinary camera by deep learning, for example real-time human posture extraction with frameworks such as PyTorch-OpenPose, MobilePose and Lightweight OpenPose. Applying motion sensing technology in virtual reality and augmented reality makes human-computer interaction more natural and responsive, so applying virtual reality and motion sensing technology to the rehabilitation field can fully mobilize the patient's subjective initiative and is a very promising technical form. Therefore, combining somatosensory technology with virtual simulation technology can provide users with a low-cost limb rehabilitation training system with a good user experience.
AR devices currently on the market are generally characterized by a translucent lens through which the real world is observed while images of a virtual world are projected onto it. In 2006, the virtual reality laboratory of Purdue University in the United States spent three years virtually reproducing the 9/11 event and accurately calculating the trajectories of tens of thousands of fragments generated after the impact, which played a positive role in correcting decision plans. In 2015, Beijing Huarui Viewpoint Digital Science and Technology Co., Ltd. introduced a virtual fire drill system, building a fire-safety emergency virtual system with functions including information display, a fire-fighting exhibition hall, fire drills, fire-fighting rescue and fire evaluation. Research on integrating virtual reality environments with somatosensory technology has in recent years mainly focused on somatosensory games and rehabilitation motion, using somatosensory devices to reflect physiological data in a visual virtual scene in real time, strengthening the sense of immersion of the virtual simulation and improving the user experience. Chenhao et al. designed a somatosensory interactive table tennis game for the upper-limb movement of the elderly and established emotion inference rules for a virtual human using agent theory, so that the virtual human gives feedback on the user's performance through expressions and voice prompts. Somatosensory devices have raised somatosensory interactive information input from two dimensions to three, and many researchers have begun to study human posture evaluation methods based on depth information, with notable results, mainly studying motion recognition based on three-dimensional depth image sequences along the time dimension. OREIFE O et al. analyzed histograms of depth image sequences to segment and recognize features in the depth images, with low time complexity and good results. Mengming et al. introduced and improved the ViBe algorithm to process depth images for human motion detection, denoised false detection points in the classification results with a threshold method, and obtained good results. Some scholars use extracted human skeleton key point information and treat the joint point information as the feature representation of the human body, providing more discriminative and intuitive features. A reasonable force feedback algorithm is also one of the key technologies of virtual surgery simulation systems; force and tactile feedback allows a doctor to feel realistic touch during interaction, enhancing the immersion of the interactive experience.
In conclusion, there is not yet a mature product for limb exercise rehabilitation that combines somatosensory technology with augmented reality technology. Establishing a practical limb rehabilitation training field in which real and virtual scenes blend, achieving real-time and efficient interaction between them, and performing tracked evaluation of the individuals taking part in rehabilitation is the future development direction of rehabilitation equipment. In this method, Epson BT-300 AR smart glasses are used as the display carrier for virtual images and feedback results; the rehabilitation actions of the testee are recognized in real time through the designed human posture estimation method, the captured rehabilitation action images and the calculated rehabilitation action evaluation results are fed back in front of the testee in real time through the glasses, and the pre-designed virtual images are merged in, forming a mirror-image rehabilitation process that combines the virtual and the real.
Disclosure of Invention
The invention aims to provide a limb rehabilitation method based on an augmented reality technology and a posture recognition technology, so as to solve the technical problems mentioned in the background technology.
In order to achieve the purpose, the specific technical scheme of the limb rehabilitation method based on the augmented reality technology and the posture recognition technology is as follows:
a limb rehabilitation method based on an augmented reality technology and a posture recognition technology comprises the following steps, carried out in sequence:
1) acquiring limb action information and surrounding scene information of a testee in real time through an image acquisition unit;
2) identifying the skeleton key point information of the testee in real time through a posture recognition technology;
3) extracting skeleton key point information of standard limb rehabilitation action, extracting characteristics and forming a rehabilitation action characteristic library;
4) comparing the limb rehabilitation actions of the testee in real time through a posture characteristic comparison algorithm, and evaluating a comparison result;
5) transmitting the evaluation result and the virtual object information from the calculation unit to the display unit through the communication unit;
6) the testee receives the evaluation information in real time through the display unit, the calculation unit carries out real-time interactive calculation of the virtual scene and the real scene, and the image acquisition unit receives the feedback result of the calculation unit in real time through the communication unit.
Further, the image acquisition unit in step 1) is a common RGB camera connected with the computing unit, which acquires the rehabilitation limb action information of the testee and the information in the scene around the testee in real time through the camera.
Further, the skeleton key points in step 2) are identified with a model obtained by training a neural network on thermodynamic diagrams (keypoint heatmaps); the specific steps are as follows:
firstly, performing thermodynamic diagram annotation on an image with complete limbs, acquiring the annotated image, shooting a complete human body through a camera, and shooting each limb rehabilitation action one by one from different directions; when the thermodynamic diagram is labeled, all skeletal key points need to be uniformly coded;
secondly, scaling the input limb pictures to a uniform size and outputting a Gaussian heat map with all marked skeleton key points;
and finally, inputting the thermodynamic diagrams into a training network for training to obtain a human skeleton key point thermodynamic diagram extraction model.
Further, the step 3) specifically comprises the following steps, and the following steps are sequentially performed:
(1) data extraction, namely intercepting a standard limb rehabilitation action video image at the frequency of 3 frames per second, and generating a bone key point diagram from the intercepted video image; obtaining a standard unified framework key node graph by utilizing normalization and translation operations;
(2) data preprocessing, namely analyzing the motion change state of each node in the skeleton key point node diagram at different time periods respectively to generate a skeleton key point time sequence state diagram, and performing threshold processing on the node time sequence state diagram by using an activation function and a Gaussian filter algorithm;
(3) data processing, namely removing a non-change part in the node time sequence state diagram, only keeping the change states of the nodes at different times, and generating skeleton key point time sequence change three-dimensional data;
(4) and storing the time sequence change characteristic data of the three-dimensional skeleton key points into a file as a standard rehabilitation action characteristic.
Further, the step 4) specifically comprises the following steps, and the following steps are sequentially performed:
(1) comparing the posture features by using the SED_DTW algorithm, an improved DTW time-series similarity matching algorithm;
(2) acquiring the node time sequence change characteristic diagram of the testee, and calculating the similarity distance between the node time sequence change characteristic diagram of the testee and the node time sequence change characteristic diagrams in the preset standard rehabilitation action library through the SED_DTW algorithm, wherein the smaller the distance, the higher the similarity.
Further, the communication unit in the step 5) is a unit for communicating the augmented reality glasses with the computing unit, and a common wireless router is used as a bridge between the communication unit and the computing unit.
Further, the augmented reality glasses in step 6) receive the feedback result of the computing unit in real time through the communication unit, wherein the feedback result includes rehabilitation and evaluation information of limb actions and feedback information of interaction between a preset virtual scene and a real scene.
Further, the improved DTW algorithm SED_DTW in step 4) specifically includes the following steps:
two time series are given for describing features X, Y, where X is m in length and Y is n in length, then:
X = {x_1, x_2, x_3, …, x_m},  Y = {y_1, y_2, y_3, …, y_n}
construct an n × m matrix C = [c(i, j)], where the (i, j)-th element of the matrix is the distance between x_i and y_j.
Constructing a matrix formula:
c(i, j) = ||y_j − x_i||_p
Taking p = 2 gives the Euclidean distance; since the obtained skeleton key point information is three-dimensional, namely (x, y, z), the distance between two points is instead calculated with a weighted standardized Euclidean distance;
firstly, each component is standardized so that the standardized means and variances are equal. Assuming the mathematical expectation (mean) of the sample set X is m and the standard deviation is s, the standardized variable of X is written X*; its mathematical expectation is 0 and its variance is 1, and the standardization of the sample set is described by:
X* = (X − m) / s
that is, the standardized value is (value before standardization − component mean) / component standard deviation. From this, for two n-dimensional vectors a(x_11, x_12, …, x_1n) and b(x_21, x_22, …, x_2n), the standardized Euclidean distance between them is:
d(a, b) = sqrt( Σ_{k=1}^{n} ((x_1k − x_2k) / s_k)^2 )
The standardized Euclidean distance is not complicated to compute: n in the formula is the vector length, s_k is the standard deviation of the k-th component, and x_1k and x_2k are skeleton key points in the two action sequences.
To look for y j To x i Defines a cumulative cost function of the regular path l between X and Y, and is formulated as follows:
Figure BDA0003591144940000063
l represents a queue mapped between X and Y, and the optimal regular path from X to Y is the shortest distance of c1(X, Y), so the distance formula is:
SED_DTW(X, Y) = c_l*(X, Y) = min{ c_l(X, Y) }
The cumulative cost can be expressed recursively as:
γ(i, j) = c(i, j) + min{ γ(i−1, j), γ(i, j−1), γ(i−1, j−1) }
where i ∈ [1, m] and j ∈ [1, n]. The SED_DTW distance is finally obtained as:
SED_DTW(X, Y) = γ(m, n)
with c(i, j) computed as the weighted standardized Euclidean distance defined above.
the limb rehabilitation method based on the augmented reality technology and the posture recognition technology has the following advantages: the human body limb rehabilitation action is obtained by a human body posture estimation method in deep learning, and compared and evaluated with the standard rehabilitation action in the limb rehabilitation action characteristic library, and the feedback result of the current limb rehabilitation action is obtained in real time through an augmented reality technology.
(1) According to the invention, an ordinary RGB camera is used to obtain the limb rehabilitation actions and surrounding scene information of the testee in real time, and the human posture is recognized through deep learning to obtain the human skeleton joint points. Compared with common somatosensory sensors such as Kinect, RealSense and Leapmotion, the cost is low, and the problems of the somatosensory sensors' skeleton-binding algorithms failing to recognize, or mis-recognizing, when the testee's body is partially occluded or subject to thermal interference are alleviated to a certain extent.
(2) According to the invention, the image acquisition unit is used for acquiring the image of the standard limb rehabilitation action, the key points of the standard skeleton are extracted, and the limb rehabilitation action feature library is further generated. The method for extracting the bone key point information is simple and efficient, and has low requirement on the computing capacity of equipment compared with other methods.
(3) The invention uses a posture feature comparison algorithm that can compare the limb actions of the testee with the standard limb rehabilitation action feature library in real time across several dimensions, such as time difference, action amplitude and action speed, and the feedback result is obtained promptly through the AR glasses. Digital quantitative data is also provided for subsequent rehabilitation assessment, which makes it convenient for rehabilitation doctors to track the rehabilitation situation.
(4) The invention adopts the modularized design idea and is divided into an image acquisition unit, a calculation unit, a communication unit and a display unit. The invention has portability and expansibility, the acquisition unit uses a common camera or a mobile phone, the communication unit uses a common router, the display unit can use different AR glasses, the calculation unit is the core unit of the invention, and the algorithm can be conveniently applied to the existing rehabilitation training robot.
Drawings
Fig. 1 is a skeletal key point diagram identified based on deep learning human body posture and a coded skeletal key point diagram.
Fig. 2 is a schematic flow chart of a limb rehabilitation method based on an augmented reality technology and a posture recognition technology.
Fig. 3 is a schematic diagram of mapping the encoded bone key point diagram into 16-channel bone key point feature extraction.
Fig. 4 is a schematic diagram of a skeletal key point extraction network structure.
Fig. 5 is a schematic diagram of bone key points in shoulder abduction rehabilitation training in embodiment 1.
Detailed Description
In order to better understand the purpose, structure and function of the present invention, a limb rehabilitation method based on augmented reality technology and posture recognition technology is described in further detail below with reference to the accompanying drawings.
Comparison of limb rehabilitation actions plays a very important role in limb rehabilitation training. The key problem solved by the invention is to compare the limb rehabilitation actions of a testee with the standard limb rehabilitation actions using an augmented reality technology and a posture recognition technology, and to feed the result back to the testee so that errors can be corrected and the actions perfected.
Fig. 1 shows the skeletal structure recognized by deep-learning-based human posture recognition. The overall flow of the invention is shown in fig. 2: firstly, images of standard limb rehabilitation actions are collected, the skeleton key node map of the standard actions is obtained by deep-learning-based human posture recognition, and a limb rehabilitation action library is established. The image acquisition unit is then used to collect the limb actions of the testee, and the skeleton key points of the testee are identified by human posture recognition. Next, a comparison result is obtained with the posture feature comparison algorithm and the result is evaluated. Finally, the evaluation result is sent from the calculation unit to the display unit through the communication unit to complete the information feedback.
The limb rehabilitation method based on the augmented reality technology and the posture recognition technology comprises the following steps:
1) the limb movement and surrounding scene information of a testee are collected in real time through an image collecting unit, and the method comprises the following specific steps: the image acquisition unit is a common RGB camera connected with the calculation unit, and acquires the rehabilitation limb action information of the testee and the information in the scene around the testee in real time through the camera.
2) Identifying the skeleton key point information of the testee in real time through the posture recognition technology, specifically as follows:
A thermodynamic diagram (keypoint heatmap) represents each keypoint class as a probability map: every pixel position in the picture is given a probability of belonging to the corresponding keypoint class. The closer a pixel is to the keypoint position, the closer its probability is to 1; the farther away it is, the closer its probability is to 0. The skeleton keypoint recognition model is obtained by training a neural network on thermodynamic diagrams, with the specific method as follows:
Firstly, thermodynamic diagram annotation needs to be carried out on images with complete limbs. The images must contain a complete human body and should preferably cover human bodies facing in different directions and performing various limb actions, including limb rehabilitation actions. When the thermodynamic diagrams are labeled, all skeleton key points need to be coded uniformly, as shown in fig. 1. The quality of the images and the accuracy of the annotation have a large impact on the effectiveness of the model.
Secondly, the input limb pictures are uniformly scaled to 256 × 256 px, and Gaussian heat maps with all marked skeleton key points are output. In this patent, 16 key points of the limb actions need to be acquired, so 16 channels are needed to predict the output feature map (as shown in fig. 3); that is, each channel predicts one joint point, and the coordinate of the probability maximum is then taken for each channel, which is the coordinate of the key point we want. On the right of fig. 3 is the resulting thermodynamic diagram, with dimensions 256 × 256 px, generated as follows:
and (4) zooming the shot picture to a 256 × 256px scale, and setting the shot picture as a target point by taking the coordinates of the center point of the target key point and rounding. The radius R of the gaussian circle is calculated according to the target point size. On the thermodynamic diagram, a point is taken as a center of a circle, and the radius is R to fill a Gaussian function calculation value. The points are maxima and decrease outward along the radius as a gaussian function. The scaled image is input into the training network as a training image. In the patent, 16 key points of limb movement need to be acquired, and then 16 channels (as shown in fig. 3) are needed for predicting an output feature map, that is, each channel predicts a joint point, and then the coordinate of the probability maximum value is obtained for each channel, that is, the coordinate of the key point which we want to obtain.
The thermodynamic diagrams of the 16 channels are input into a convolutional neural network for training to obtain the human skeleton key point thermodynamic diagram extraction model. Considering the lightweight nature of the application equipment, the design of the convolutional neural network must reduce the amount of computation while ensuring accuracy, keeping the complexity of the convolutional network as low as possible. With reference to several lightweight networks, the following network is designed as the training network of the skeleton key point extraction model, as shown in fig. 4.
Before entering the training network, the original annotated image is scaled to 256 × 256 × 3 channels, and a 128 × 128 × 16 channel feature matrix is formed through network convolution; a 64 × 64 × 32 channel feature matrix is formed through depthwise separable convolution (DS Conv); through a full convolution (Conv) network, the image becomes a 32 × 32 × 128 channel feature matrix, where the attention weights of the spatial attention module are calculated; a 32 × 32 × 128 channel feature matrix is then generated through a convolution network; a 16 × 16 × 64 channel feature matrix is generated through a depthwise separable convolution layer (DS Conv); and finally the positions of the 16 key points are output through max pooling, a fully connected layer and a softmax loss function.
The spatial attention module is obtained by network calculation according to the spatial characteristics of the distribution of key points of human bones and corresponding numbers, and the calculation mode is as follows:
The spatial attention mechanism block first applies global average pooling to each channel independently; global average pooling also acts as a structural regularization of the network, helping to prevent overfitting of the model. Two fully connected layers (FC) and a nonlinear Sigmoid function are then used to generate the weights of the channels. The two FC layers are intended to capture nonlinear cross-channel interaction while reducing dimensionality, so as to control the complexity of the model. The spatial attention mechanism block compresses the features along the spatial dimensions to obtain a 1 × 1 × 128 channel descriptor; this feature has a global receptive field, that is, the entire spatial information on one channel is compressed into one global feature, giving 128 global features in total. Global average pooling is used for this, with the formula:
z_c = (1 / (W × H)) Σ_{i=1}^{W} Σ_{j=1}^{H} u_c(i, j)
After global feature pooling, the original feature matrix changes from 32 × 32 × 128 to 1 × 1 × 128. The spatial attention mechanism module learns the feature weights through the loss function of the network to obtain the importance of each feature map, and then assigns a weight to each feature channel according to this importance, so that the neural network focuses on certain features: effective feature weights are increased, while ineffective or weakly effective feature weights are decreased, allowing the model to achieve a better effect at the unavoidable cost of some additional parameters and computation.
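The following PyTorch sketch shows one possible reading of the network described above and in fig. 4, under stated assumptions: the feature map sizes follow the text (256 × 256 × 3, 128 × 128 × 16, 64 × 64 × 32, 32 × 32 × 128, 32 × 32 × 128, 16 × 16 × 64, then 16 key points), while the kernel sizes, strides, the squeeze-and-excitation style attention and the coordinate-regression head are assumptions made for illustration; the softmax-based loss mentioned in the text would be applied during training and is omitted here.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """DS Conv block: depthwise 3x3 (stride 2) followed by a pointwise 1x1 convolution."""
    def __init__(self, c_in, c_out, stride=2):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, stride=stride, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
        self.act = nn.ReLU()
    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

class SpatialAttention(nn.Module):
    """Global average pooling + two FC layers + Sigmoid channel weights (assumed design)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # 32x32x128 -> 1x1x128 descriptor
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                          # re-weight the feature channels

class KeypointNet(nn.Module):
    def __init__(self, num_keypoints=16):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, 3, stride=2, padding=1)      # 256 -> 128, 16 channels
        self.ds1 = DepthwiseSeparableConv(16, 32)                  # 128 -> 64, 32 channels
        self.conv1 = nn.Conv2d(32, 128, 3, stride=2, padding=1)    # 64 -> 32, 128 channels
        self.attn = SpatialAttention(128)
        self.conv2 = nn.Conv2d(128, 128, 3, padding=1)             # 32 -> 32, 128 channels
        self.ds2 = DepthwiseSeparableConv(128, 64)                  # 32 -> 16, 64 channels
        self.head = nn.Sequential(nn.AdaptiveMaxPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_keypoints * 2))  # (x, y) per key point
    def forward(self, x):
        x = torch.relu(self.stem(x))
        x = self.ds1(x)
        x = torch.relu(self.conv1(x))
        x = self.attn(x)
        x = torch.relu(self.conv2(x))
        x = self.ds2(x)
        return self.head(x)

# model = KeypointNet(); out = model(torch.randn(1, 3, 256, 256))  # -> shape (1, 32)
```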
3) And extracting the skeleton key point information of the standard limb rehabilitation action and extracting the characteristics to form a rehabilitation action characteristic library. The specific operation is as follows:
The features in the rehabilitation action feature library come from two sources: firstly, uniformly collected features of standard rehabilitation actions stored in the library, which form the basic rehabilitation action feature library; secondly, according to individual needs, specific rehabilitation actions provided by rehabilitation doctors can be collected, their features extracted by the method of the invention and stored in the library. The method comprises the following steps:
Firstly, before the rehabilitation action feature library is constructed, the acquisition angle of the camera is restricted according to the characteristics of the specific rehabilitation action. For each rehabilitation action an optimal acquisition angle is used (the optimal acquisition angle is determined according to the type of rehabilitation action: all identified limb key points are mapped into 2D space, and the angle giving the best separation between moving points and fixed points is chosen, deciding whether the camera faces the front or the side of the testee, with only these two directions considered). After the acquisition angle is fixed, a rehabilitation action file is established, the standard rehabilitation actions can be collected, and the extracted features are recorded into the feature library;
secondly, inputting the video image of the standard limb rehabilitation action into a human posture recognition module based on deep learning, and generating a skeleton key node map containing human joint points, as shown in figure 1.
Normalizing and translating the skeleton key points to obtain a standard unified skeleton key node graph, wherein a normalization coefficient l is as follows:
l = sqrt( (x_i − x_j)^2 + (y_i − y_j)^2 + (z_i − z_j)^2 )
where i and j are two connected nodes in the skeleton key node graph and x, y, z are the coordinates of the skeleton key nodes. The skeleton key nodes are scaled in equal proportion according to the normalization coefficient to obtain a standard, unified skeleton node graph.
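A short sketch of this normalization and translation step is given below, under the assumption that the normalization coefficient l is the length of a single connected reference bone (i, j) and that the key points are stored as a (16, 3) array; both the choice of reference bone and the translation to joint i are assumptions for illustration.

```python
import numpy as np

def normalize_skeleton(keypoints, i=0, j=1):
    """Translate the skeleton to joint i and scale it by the length l of the bone (i, j)."""
    kp = np.asarray(keypoints, dtype=float)          # shape (16, 3): (x, y, z) per key node
    l = np.linalg.norm(kp[i] - kp[j])                # normalization coefficient l
    return (kp - kp[i]) / (l + 1e-8)                 # standard, unified skeleton node graph

# Example: a random 16-joint skeleton scaled so the reference bone has unit length
unified = normalize_skeleton(np.random.rand(16, 3))
```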
Data preprocessing: the motion change state of each node in the skeleton key point node diagram is analyzed over different time periods to generate a node time sequence state diagram, and threshold processing is applied to the node time sequence state diagram with an activation function and a Gaussian filter algorithm. The generalization parameter of the Gaussian filter is defined as m; its value is related to the action speed of the test user and can generally be set within [5, 9]. The corresponding filtering function of the opencv-Python library (cv2) is then called with m as its parameter to obtain the final result;
Data processing: the unchanged parts of the node time sequence state diagram are removed with a peak-and-trough trimming method, only the change states of the nodes at different times are kept, and the time sequence change characteristic diagram of the nodes (three-dimensional data) is generated;
and sixthly, storing the node time sequence change characteristic graph into a database to serve as a standard rehabilitation action characteristic library.
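The sketch below shows one way the preprocessing and trimming described above could be realized, assuming the node trajectories are stored as a (T, 16, 3) array; the use of cv2.GaussianBlur as the Gaussian filter, the motion threshold and the trimming rule are assumptions rather than the patent's exact procedure.

```python
import numpy as np
import cv2

def preprocess_node_series(series, m=7, motion_threshold=0.01):
    """Smooth the node trajectories along time and drop frames with no motion."""
    series = np.asarray(series, dtype=np.float32)               # (T, 16, 3) keypoint trajectories
    # Gaussian filtering along the time axis with generalization parameter m (assumed: cv2.GaussianBlur)
    flat = series.reshape(series.shape[0], -1)                  # (T, 48)
    smoothed = cv2.GaussianBlur(flat, (1, m), 0).reshape(series.shape)
    # keep only frames in which at least one node actually moves (peak/trough trimming, assumed rule)
    motion = np.abs(np.diff(smoothed, axis=0)).max(axis=(1, 2))
    keep = np.concatenate([[True], motion > motion_threshold])
    return smoothed[keep]                                       # time sequence change feature data

features = preprocess_node_series(np.random.rand(60, 16, 3))    # e.g. 20 s of video at 3 frames/s
```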
4) Comparing the rehabilitation actions of the testee in real time through a posture characteristic comparison algorithm, and evaluating a comparison result, wherein the operation is as follows:
The posture features are compared using an improved DTW time-series similarity matching algorithm. Time series are a common representation of data, and a common task is to compare the similarity of two sequences. Although the traditional Euclidean distance is efficient in space and time and simple to implement, it is very sensitive to distorted time sequences; the DTW (Dynamic Time Warping) algorithm overcomes this defect by stretching and compressing the time series when computing the similarity between the two sequences. Since the feature data here is three-dimensional, a certain improvement to the DTW algorithm is needed.
Improved DTW algorithm SED_DTW:
two time series are given for describing features X, Y, where X is m in length and Y is n in length, then:
X = {x_1, x_2, x_3, …, x_m},  Y = {y_1, y_2, y_3, …, y_n}
DTW can operate on time series of different lengths. In order to align the two time series nonlinearly on the time axis, an n × m matrix C = [c(i, j)] is first constructed, where the (i, j)-th element of the matrix is the distance between x_i and y_j.
Constructing a matrix formula:
c(i, j) = ||y_j − x_i||_p
Usually p = 2 and the Euclidean distance is used. In this patent, the obtained skeleton key point information is three-dimensional, i.e. (x, y, z): in three-dimensional space, depth information z is added to the original planar information (x, y). The z information of a skeleton key point represents the distance between that key point of the test user and the camera, so the Euclidean distance used in the original DTW algorithm is inaccurate. The depth information z is of great significance for limb rehabilitation, so a different method, the weighted standardized Euclidean distance, is needed to calculate the distance between two points.
Considering that the components of the data follow different distributions, each component needs to be standardized so that the standardized means and variances are equal. Assuming the mathematical expectation (mean) of the sample set X is m and the standard deviation is s, the standardized variable of X is written X*; its mathematical expectation is 0 and its variance is 1. The standardization of the sample set is described by:
X* = (X − m) / s
That is, the standardized value is (value before standardization − component mean) / component standard deviation. From this, for two n-dimensional vectors a(x_11, x_12, …, x_1n) and b(x_21, x_22, …, x_2n), the distance between them is:
d(a, b) = sqrt( Σ_{k=1}^{n} ((x_1k − x_2k) / s_k)^2 )
The standardized distance is not complicated to compute: n in the formula is the vector length, s_k is the standard deviation of the k-th component, and x_1k and x_2k are skeleton key points in the two action sequences.
To look for y j To x i Defines a cumulative cost function of the regular path l between X and Y, and is formulated as follows:
Figure BDA0003591144940000133
l represents a queue mapped between X and Y, and the optimal regular path l from X to Y is the shortest distance of cl (X, Y), so the distance formula is:
SED_DTW(X, Y) = c_l*(X, Y) = min{ c_l(X, Y) }
The cumulative cost can be expressed recursively as:
γ(i, j) = c(i, j) + min{ γ(i−1, j), γ(i, j−1), γ(i−1, j−1) }
where i ∈ [1, m] and j ∈ [1, n]. The SED_DTW distance is finally obtained as:
SED_DTW(X, Y) = γ(m, n)
with c(i, j) computed as the weighted standardized Euclidean distance defined above.
the SED-DTW algorithm well utilizes the depth information of the posture, and the feature comparison of the time-series action data is more accurate. The calculation amount is increased compared with the original DTW algorithm, but the whole calculation is not complex, and the method can be applied to a mobile terminal or edge equipment.
And acquiring a node time sequence change characteristic diagram of the testee, and calculating the similarity distance between the bone key point time sequence change characteristic diagram of the testee and a key point time sequence change characteristic diagram in a preset standard rehabilitation action characteristic library through the SED_DTW algorithm. Smaller distances indicate higher similarity.
5) The evaluation result and the virtual object information are transmitted from the computing unit to the display unit through the communication unit, the communication unit is a unit for communicating the augmented reality glasses with the computing unit, and one common wireless router is used as a bridge of the two units.
6) The testee receives the evaluation information in real time through the display unit, and the calculation unit carries out real-time interactive calculation of the virtual scene and the real scene. The AR glasses receive the feedback result of the computing unit in real time through the communication unit, wherein the feedback result comprises rehabilitation and evaluation information of limb actions and feedback information of interaction between a preset virtual scene and a real scene.
Note on the AR glasses used in the invention:
Product website: https://www.epson.com.cn/products/hmd/BT-300
The AR glasses can be connected to a computer, a smart phone and a television which support the Miracast technology through Miracast, and images on the equipment can be projected onto the AR glasses after interconnection. Nearly all AR glasses currently on the market support Miracast technology.
Example 1:
according to the limb rehabilitation method based on the augmented reality technology and the posture recognition technology, the testee performs shoulder abduction rehabilitation training, and the optimal acquisition angle is determined according to the rehabilitation action characteristics of the testee.
The optimal acquisition angle is determined by mapping all the identified limb key points into 2D space and choosing the direction that best separates the moving points from the fixed points, i.e. deciding whether the camera faces the front or the side of the testee. The rehabilitation action in embodiment 1 is shoulder abduction, and according to the optimal-acquisition-angle principle, acquiring the action from the front is optimal.
After the rehabilitation action is selected, it is displayed and guided synchronously on the display unit, and the user is prompted to place the camera directly in front of the testee. The limb actions and surrounding scene information of the testee are then collected in real time through the image acquisition unit.
2) The skeleton key point information of the testee is identified in real time through the posture recognition technology (once the model has been trained it is not trained again; key point identification is realized by calling the model directly in the program). The specific steps are as follows:
the identification of the bone key point information is realized by combining thermodynamic diagram and convolution neural network. Firstly, thermodynamic diagram annotation is needed to be carried out on an image with complete limbs, the image needs to have a complete human body, and the human body and various limb actions are preferably shot from different directions. When the thermodynamic diagram is labeled, all skeletal key points need to be uniformly coded, as shown in fig. 1. The more the number of the marked images is, the better the action types and angles are, the better the performance of the final model is determined, the existing marked training data set is formed by collecting 12 people in different age groups, 100 images in the front, back, left and right directions of each person are collected, and the total number is 12000.
Secondly, the input limb pictures are uniformly cropped and scaled to 256 × 256 px, and Gaussian heat maps with all marked skeleton key points are output. The method judges the limb rehabilitation action by acquiring 16 skeleton key points, so 16 channels are needed to predict the output feature map (as shown in fig. 3); that is, each channel predicts one joint point, and the coordinate of the probability maximum is then taken for each channel, which is the coordinate of the key point we want. On the right of fig. 3 is the resulting thermodynamic diagram, with dimensions 256 × 256 px.
Skeleton key points are extracted from the thermodynamic diagrams of the 16 channels through a designed lightweight network, and the skeleton key points are input into a convolutional neural network for training to obtain a human skeleton key point thermodynamic diagram extraction model. As shown in fig. 4, the process is:
Before entering the training network, the original annotated image is scaled to 256 × 256 × 3 channels, and a 128 × 128 × 16 channel feature matrix is formed through network convolution; a 64 × 64 × 32 channel feature matrix is formed through depthwise separable convolution (DS Conv); through a full convolution (Conv) network, the image becomes a 32 × 32 × 128 channel feature matrix, where the attention weights of the spatial attention module are calculated; a 32 × 32 × 128 channel feature matrix is then generated through a convolution network; a 16 × 16 × 64 channel feature matrix is generated through a depthwise separable convolution layer (DS Conv); and finally the positions of the 16 key points are output through max pooling, a fully connected layer and a softmax loss function. The formula of Softmax is as follows:
Softmax(z_i) = e^{z_i} / Σ_{c=1}^{C} e^{z_c}
where z_i is the output value of the i-th node and C is the number of output nodes, i.e. the number of classes. Through the Softmax function, the multi-class output values are converted into a probability distribution over the range [0, 1] that sums to 1.
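A small numerical sketch of the Softmax computation follows; the max-subtraction for numerical stability is an implementation detail added here and is not part of the patent text.

```python
import numpy as np

def softmax(z):
    """Convert C output values into a probability distribution in [0, 1] summing to 1."""
    e = np.exp(z - np.max(z))        # subtract the maximum for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))   # approximately [0.659, 0.242, 0.099]
```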
The spatial attention module is obtained by network calculation according to the spatial characteristics of the distribution of key points of human bones and corresponding numbers, and the calculation mode is as follows:
The spatial attention mechanism block first applies global average pooling to each channel independently; global average pooling also acts as a structural regularization of the network, helping to prevent overfitting of the model. Two fully connected layers (FC) and a nonlinear Sigmoid function are then used to generate the weights of the channels. The two FC layers are intended to capture nonlinear cross-channel interaction while reducing dimensionality, so as to control the complexity of the model. The spatial attention mechanism block compresses the features along the spatial dimensions to obtain a 1 × 1 × 128 channel descriptor; this feature has a global receptive field, that is, the entire spatial information on one channel is compressed into one global feature, giving 128 global features in total. Global average pooling is used for this, with the formula:
z_c = (1 / (W × H)) Σ_{i=1}^{W} Σ_{j=1}^{H} u_c(i, j)
After global feature pooling, the original feature matrix changes from 32 × 32 × 128 to 1 × 1 × 128. The spatial attention mechanism module learns the feature weights through the loss function of the network to obtain the importance of each feature map, and then assigns a weight to each feature channel according to this importance, so that the neural network focuses on certain features: effective feature weights are increased, while ineffective or weakly effective feature weights are decreased.
The result of real-time identification of the skeletal key point information of the human subject by the gesture recognition technology is shown in fig. 5.
3) The rehabilitation action of the testee is shoulder abduction rehabilitation training. Before the testee uses this rehabilitation action, the skeleton key point information of the standard limb rehabilitation action is extracted from the front, its features are extracted, and the extracted features are stored in the rehabilitation action feature library. The specific operation is as follows:
Firstly, before the rehabilitation action feature library is constructed, the acquisition angle of the camera is restricted according to the characteristics of the specific rehabilitation action. For each rehabilitation action the optimal acquisition angle is used; for the shoulder abduction action the optimal acquisition angle is directly facing the camera from the front. After the acquisition angle is fixed, a rehabilitation action file is established, the standard rehabilitation action can be collected, and the extracted features are recorded into the feature library;
secondly, inputting the video image of the standard limb rehabilitation action into a human posture recognition module based on deep learning to generate a skeleton key node map containing human joint points, as shown in figure 1.
Normalizing and translating the skeleton key points to obtain a standard unified skeleton key node graph, wherein a normalization coefficient l is as follows:
l = sqrt( (x_i − x_j)^2 + (y_i − y_j)^2 + (z_i − z_j)^2 )
where i and j are two connected nodes in the skeleton key node graph and x, y, z are the coordinates of the skeleton key nodes. The skeleton key nodes are scaled in equal proportion according to the normalization coefficient to obtain a standard, unified skeleton node graph.
Data preprocessing: the motion change state of each node in the skeleton key point node diagram is analyzed over different time periods to generate a node time sequence state diagram, and threshold processing is applied to the node time sequence state diagram with an activation function and a Gaussian filter algorithm. The generalization parameter of the Gaussian filter is defined as m; its value is related to the speed of the collected action and can generally be set within [5, 9]. In this embodiment the action speed of the testee is moderate and m is taken as 7. The corresponding filtering function of the opencv-Python library (cv2) is then called with m as its parameter to obtain the final result;
Data processing: the unchanged parts of the node time sequence state diagram are removed with a peak-and-trough trimming method, only the change states of the nodes at different times are kept, and the time sequence change characteristic diagram of the nodes (three-dimensional data) is generated;
and sixthly, storing the node time sequence change characteristic graph into a database to serve as a standard rehabilitation action characteristic library.
4) The rehabilitation action of the testee is compared in real time through a posture characteristic comparison algorithm, and the comparison result is evaluated, and the method comprises the following specific operations:
The comparison is performed using SED_DTW, the improved DTW time-series similarity matching algorithm:
given a shoulder abduction motion feature sequence X and a motion feature sequence Y retrieved in a feature library, where X is m in length and Y is n in length, then:
X = {x_1, x_2, x_3, …, x_m},  Y = {y_1, y_2, y_3, …, y_n}
the rehabilitation actions of the testee are compared in real time through a posture characteristic comparison algorithm, and the specific contents are as follows:
In order to align the two time series nonlinearly on the time axis, an n × m matrix C = [c(i, j)] is first constructed, where the (i, j)-th element of the matrix is the distance between x_i and y_j.
Constructing a matrix formula:
c(i, j) = ||y_j − x_i||_p
and acquiring bone key point information (x, y, z).
Considering that the components of the data follow different distributions, each component needs to be standardized. The standardization of the sample set is described by:
X* = (X − m) / s
That is, the standardized value is (value before standardization − component mean) / component standard deviation. From this, for two n-dimensional vectors a(x_11, x_12, …, x_1n) and b(x_21, x_22, …, x_2n), the distance between them is:
d(a, b) = sqrt( Σ_{k=1}^{n} ((x_1k − x_2k) / s_k)^2 )
To find the mapping from y_j to x_i, a cumulative cost function of the warping path l between X and Y is defined as:
c_l(X, Y) = Σ_{k=1}^{K} c(x_{i_k}, y_{j_k}),  where l = ((i_1, j_1), …, (i_K, j_K))
Here l represents the queue of index pairs mapped between X and Y, and the optimal warping path l from X to Y is the one for which c_l(X, Y) is shortest, so the distance formula is:
SED_DTW(X, Y) = c_l*(X, Y) = min{ c_l(X, Y) }
The cumulative cost can be expressed recursively as:
γ(i, j) = c(i, j) + min{ γ(i−1, j), γ(i, j−1), γ(i−1, j−1) }
where i ∈ [1, m] and j ∈ [1, n]. The SED_DTW distance is finally obtained as:
SED_DTW(X, Y) = γ(m, n)
with c(i, j) computed as the weighted standardized Euclidean distance defined above.
The SED_DTW algorithm well utilizes the depth information of the posture, and the feature comparison of the time-series action data is more accurate.
And acquiring a node time sequence change characteristic diagram of the testee, and calculating the similarity distance between the bone key point time sequence change characteristic diagram of the testee and a key point time sequence change characteristic diagram in a preset standard rehabilitation action characteristic library through the SED_DTW algorithm. Smaller distances indicate higher similarity. And giving an evaluation result according to the similarity.
5) The evaluation result information and the added virtual object information are transmitted to the display unit through the communication unit. The virtual object information is graphic information representing rewards, or image and video information of a virtual rehabilitation doctor, used to give the testee evaluation feedback or action demonstrations.
6) The testee receives the evaluation information in real time through the display unit, and interactive feedback of virtual and real scenes is achieved.
It is to be understood that the present invention has been described with reference to certain embodiments and that various changes in form and details may be made therein by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (8)

1. A limb rehabilitation method based on an augmented reality technology and a posture recognition technology is characterized by comprising the following steps which are sequentially carried out:
1) acquiring limb action information and surrounding scene information of a testee in real time through an image acquisition unit;
2) identifying the bone key point information of the testee in real time by a gesture identification technology;
3) extracting skeleton key point information of standard limb rehabilitation action, extracting characteristics and forming a rehabilitation action characteristic library;
4) comparing the limb rehabilitation actions of the testee in real time through a posture characteristic comparison algorithm, and evaluating a comparison result;
5) transmitting the evaluation result and the virtual object information from the calculation unit to the display unit through the communication unit;
6) the testee receives the evaluation information in real time through the display unit, the calculation unit carries out real-time interactive calculation of the virtual scene and the real scene, and the image acquisition unit receives the feedback result of the calculation unit in real time through the communication unit.
2. The limb rehabilitation method based on augmented reality technology and posture recognition technology according to claim 1, wherein the image acquisition unit in step 1) is a pair of AR glasses connected with the computing unit, and the rehabilitation limb movement information of the subject and the information of the scene around the subject are acquired in real time through the AR glasses.
3. The limb rehabilitation method based on the augmented reality technology and the posture recognition technology according to claim 1, wherein the skeletal key point identification in step 2) uses a model obtained through heatmap annotation and neural network training, with the following specific steps:
firstly, heatmap annotation is performed on images showing the complete limbs and the annotated images are acquired; a complete human body is photographed by a camera, and each limb rehabilitation action is photographed one by one from different directions; when annotating the heatmaps, all skeletal key points are coded uniformly;
secondly, the input limb pictures are scaled to a uniform size, and Gaussian heatmaps containing all annotated skeletal key points are output;
and finally, the heatmaps are input into the training network for training to obtain a human skeletal key point heatmap extraction model (a minimal heatmap-generation sketch follows this claim).
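For illustration only, the heatmap labels mentioned above can be pictured with the following minimal Python sketch, which renders each uniformly coded key point as a 2-D Gaussian; the resolution and sigma values are assumptions for the sketch, not values from the patent.

import numpy as np

def keypoint_heatmap(x, y, height=64, width=64, sigma=2.0):
    """Render one annotated skeletal key point (x, y) as a 2-D Gaussian
    heatmap of the kind used as a training label."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))

def encode_keypoints(keypoints, height=64, width=64, sigma=2.0):
    """Stack one heatmap channel per uniformly coded key point."""
    return np.stack([keypoint_heatmap(x, y, height, width, sigma)
                     for x, y in keypoints])

# e.g. three coded key points on a rescaled 64 x 64 input image
maps = encode_keypoints([(32, 10), (30, 24), (34, 40)])
print(maps.shape)   # (3, 64, 64)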
4. The limb rehabilitation method based on the augmented reality technology and the posture recognition technology according to claim 1, wherein step 3) specifically comprises the following steps, carried out in sequence (a minimal sketch follows this claim):
(1) data extraction: a standard limb rehabilitation action video is sampled at a certain frequency, and skeletal key point graphs are generated from the captured video frames; a standard, unified skeletal key node graph is obtained by normalization and translation operations;
(2) data preprocessing: the motion change state of each node in the skeletal key point graph is analyzed over different time periods to generate a skeletal key point time-series state graph, and the node time-series state graph is thresholded using an activation function and a Gaussian filtering algorithm;
(3) data processing: the non-changing parts of the node time-series state graph are removed, only the change states of the nodes at different times are retained, and three-dimensional skeletal key point time-series change data are generated;
(4) the three-dimensional skeletal key point time-series change feature data are stored in a file as a standard rehabilitation action feature.
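A minimal Python sketch of steps (1) to (3) above is given below, assuming sampled frames of shape (T, J, 3); the sigmoid activation, the Gaussian smoothing parameter, and the change threshold are illustrative choices, not values from the patent.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def build_action_feature(frames, change_threshold=0.1, sigma=1.0):
    """Turn sampled skeletal key point frames (T, J, 3) into a time-series
    change feature: normalize/translate, take per-joint frame-to-frame
    change, smooth and threshold it, and keep only joints that move."""
    frames = np.asarray(frames, dtype=float)
    # Translate: center every frame on its first joint (assumed reference joint).
    frames = frames - frames[:, :1, :]
    # Per-joint motion magnitude between consecutive frames: shape (T-1, J).
    change = np.linalg.norm(np.diff(frames, axis=0), axis=2)
    # Sigmoid activation followed by Gaussian smoothing along the time axis.
    activated = 1.0 / (1.0 + np.exp(-change))
    smoothed = gaussian_filter1d(activated, sigma=sigma, axis=0)
    # Keep only joints whose change exceeds the threshold at some time.
    moving = smoothed.max(axis=0) > (0.5 + change_threshold)
    return smoothed[:, moving]

# The resulting (T-1, J_moving) array would be stored as one entry of the
# standard rehabilitation action feature library.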
5. The limb rehabilitation method based on the augmented reality technology and the posture recognition technology according to claim 1, wherein step 4) specifically comprises the following steps, carried out in sequence:
(1) the posture features are compared using the SED_DTW algorithm, an improved DTW time-series similarity matching algorithm;
(2) the node time-series change feature map of the testee is acquired, and the similarity distance between the testee's node time-series change feature map and the node time-series change feature maps in the preset standard rehabilitation action library is calculated with the SED_DTW algorithm, where a smaller distance indicates higher similarity.
6. The limb rehabilitation method based on the augmented reality technology and the posture recognition technology according to claim 1, wherein the communication unit in step 5) is the unit through which the augmented reality glasses communicate with the computing unit, and a common wireless router is used as the bridge between the communication unit and the computing unit.
7. The limb rehabilitation method based on the augmented reality technology and the posture recognition technology as claimed in claim 1, wherein the augmented reality glasses in step 6) receive the feedback result of the computing unit in real time through the communication unit, including the rehabilitation and evaluation information of the limb movement, and the feedback information of the interaction between the preset virtual scene and the real scene.
8. The limb rehabilitation method based on augmented reality technology and posture recognition technology of claim 5, wherein the improved DTW algorithm SED_DTW in step 4) specifically comprises the following steps:
Two time series X and Y describing the features are given, where X has length m and Y has length n:
X = {x_1, x_2, x_3, …, x_m}, Y = {y_1, y_2, y_3, …, y_n}
An m × n matrix C = [c(i, j)] is constructed, where the (i, j)-th element of the matrix is the distance between x_i and y_j.
The matrix elements are constructed as:
c(i, j) = ||y_j − x_i||_p
Here p = 2, i.e. the Euclidean distance, is adopted; the acquired skeletal key point information is three-dimensional, namely (x, y, z), so the distance between two points is calculated with a weighted standardized Euclidean distance (see the cross-check sketch after this claim);
firstly, all components are standardized so that they have equal mean and variance; assuming the mathematical expectation (mean) of a sample set X is μ and its standard deviation is s, the standardized variable X* of X has mathematical expectation 0 and variance 1, and the standardization of the sample set is described as:
X* = (X − μ)/s
that is, the standardized value equals (value before standardization − component mean)/component standard deviation; it follows that, for two n-dimensional vectors a(x_11, x_12, …, x_1n) and b(x_21, x_22, …, x_2n), the standardized Euclidean distance between them is:
d(a, b) = √( Σ_{k=1}^{n} ((x_1k − x_2k)/s_k)² )
The standardized Euclidean distance is simple to compute; in the formula, n is the vector length, s_k is the standard deviation of the k-th component, and x_1k and x_2k are the k-th components of two skeletal key points in the action sequence.
To find the alignment of y_j to x_i, a cumulative cost function of the warping path l between X and Y is defined as:
c_l(X, Y) = Σ_{k=1}^{L} c(i_k, j_k)
where l = ((i_1, j_1), …, (i_L, j_L)) denotes the queue of index pairs mapping X onto Y; the optimal warping path l* from X to Y is the one with the shortest cumulative distance c_l(X, Y), so the distance is:
SED_DTW(X, Y) = c_l*(X, Y) = min_l{ c_l(X, Y) }
The cumulative cost can be computed by the recurrence:
γ(i, j) = c(i, j) + min{ γ(i−1, j), γ(i, j−1), γ(i−1, j−1) }
where i ∈ [1, m] and j ∈ [1, n]; the SED_DTW distance is finally obtained as:
SED_DTW(X, Y) = γ(m, n)
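As a cross-check (not part of the claim), the per-component standardization used here coincides with SciPy's standardized Euclidean distance when the variance vector is supplied; the numeric values below are invented for illustration.

import numpy as np
from scipy.spatial.distance import seuclidean

a = np.array([0.10, 0.42, 1.30])     # one skeletal key point (x, y, z)
b = np.array([0.12, 0.40, 1.28])
s = np.array([0.05, 0.04, 0.06])     # assumed per-component standard deviations

manual = np.sqrt(np.sum(((a - b) / s) ** 2))
library = seuclidean(a, b, s ** 2)   # SciPy expects the variance vector V = s**2
print(np.isclose(manual, library))   # True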
CN202210377129.9A 2022-04-12 2022-04-12 Limb rehabilitation method based on augmented reality technology and posture recognition technology Pending CN114998983A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210377129.9A CN114998983A (en) 2022-04-12 2022-04-12 Limb rehabilitation method based on augmented reality technology and posture recognition technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210377129.9A CN114998983A (en) 2022-04-12 2022-04-12 Limb rehabilitation method based on augmented reality technology and posture recognition technology

Publications (1)

Publication Number Publication Date
CN114998983A true CN114998983A (en) 2022-09-02

Family

ID=83023324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210377129.9A Pending CN114998983A (en) 2022-04-12 2022-04-12 Limb rehabilitation method based on augmented reality technology and posture recognition technology

Country Status (1)

Country Link
CN (1) CN114998983A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298279A (en) * 2019-06-20 2019-10-01 暨南大学 A limb rehabilitation training assisting method and system, medium, and device
US20200401224A1 (en) * 2019-06-21 2020-12-24 REHABILITATION INSTITUTE OF CHICAGO d/b/a Shirley Ryan AbilityLab Wearable joint tracking device with muscle activity and methods thereof
CN112084967A (en) * 2020-09-12 2020-12-15 周美跃 Limb rehabilitation training detection method and system based on artificial intelligence and control equipment
CN112619109A (en) * 2020-11-27 2021-04-09 重庆电子工程职业学院 Rehabilitation training system and method based on AR technology

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
吴绍春 (Wu Shaochun): "Research on Data Mining Methods in Earthquake Prediction", 30 June 2009, Shanghai University Press *
李孟歆 (Li Mengxin), China University of Mining and Technology Press *
杨永吉等 (Yang Yongji et al.): "Immersive Pronunciation Rehabilitation Training System for Hearing-Impaired Children Based on VR Technology", Journal of Jilin University (Information Science Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115346640A (en) * 2022-10-14 2022-11-15 佛山科学技术学院 Intelligent monitoring method and system for closed-loop feedback of functional rehabilitation training
CN117137435A (en) * 2023-07-21 2023-12-01 北京体育大学 Rehabilitation action recognition method and system based on multi-mode information fusion
CN117173382A (en) * 2023-10-27 2023-12-05 南京维赛客网络科技有限公司 Virtual digital human state correction method, system and storage medium in VR interaction
CN117173382B (en) * 2023-10-27 2024-01-26 南京维赛客网络科技有限公司 Virtual digital human state correction method, system and storage medium in VR interaction
CN117357103A (en) * 2023-12-07 2024-01-09 山东财经大学 CV-based limb movement training guiding method and system
CN117357103B (en) * 2023-12-07 2024-03-19 山东财经大学 CV-based limb movement training guiding method and system

Similar Documents

Publication Publication Date Title
CN106650687B (en) Posture correction method based on depth information and skeleton information
Zhang et al. Cooperative sensing and wearable computing for sequential hand gesture recognition
Avola et al. An interactive and low-cost full body rehabilitation framework based on 3D immersive serious games
CN114998983A (en) Limb rehabilitation method based on augmented reality technology and posture recognition technology
Du et al. Non-contact emotion recognition combining heart rate and facial expression for interactive gaming environments
Avola et al. Deep temporal analysis for non-acted body affect recognition
Chen et al. Analyze spontaneous gestures for emotional stress state recognition: A micro-gesture dataset and analysis with deep learning
CN111222486B (en) Training method, device and equipment for hand gesture recognition model and storage medium
Gavrilova et al. Multi-modal motion-capture-based biometric systems for emergency response and patient rehabilitation
Vakanski et al. Mathematical modeling and evaluation of human motions in physical therapy using mixture density neural networks
CN110490109A (en) A kind of online human body recovery action identification method based on monocular vision
Rivas et al. Multi-label and multimodal classifier for affective states recognition in virtual rehabilitation
CN115188074A (en) Interactive physical training evaluation method, device and system and computer equipment
Alshammari et al. Robotics Utilization in Automatic Vision-Based Assessment Systems From Artificial Intelligence Perspective: A Systematic Review
Sharma et al. Real-time recognition of yoga poses using computer vision for smart health care
Kwolek et al. Recognition of JSL fingerspelling using deep convolutional neural networks
Sosa-Jiménez et al. A prototype for Mexican sign language recognition and synthesis in support of a primary care physician
CN112230777A (en) Cognitive training system based on non-contact interaction
TW202133117A (en) Avatar facial expression generating system and method of avatar facial expression generation
CN111310655A (en) Human body action recognition method and system based on key frame and combined attention model
CN115985462A (en) Rehabilitation and intelligence-developing training system for children cerebral palsy
CN115530814A (en) Child motion rehabilitation training method based on visual posture detection and computer deep learning
Usman et al. Skeleton-based motion prediction: A survey
CN115966003A (en) System for evaluating online learning efficiency of learner based on emotion recognition
Dutta et al. A Hand Gesture-operated System for Rehabilitation using an End-to-End Detection Framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination