CN115661856A - User-defined rehabilitation training monitoring and evaluating method based on Lite-HRNet - Google Patents


Info

Publication number
CN115661856A
CN115661856A (Application CN202211237629.9A)
Authority
CN
China
Prior art keywords: lite, data set, hrnet, data, key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211237629.9A
Other languages
Chinese (zh)
Inventor
黄德青
张坤
秦娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202211237629.9A priority Critical patent/CN115661856A/en
Publication of CN115661856A publication Critical patent/CN115661856A/en
Pending legal-status Critical Current

Abstract

The invention discloses a user-defined rehabilitation training monitoring and evaluation method based on Lite-HRNet, comprising the following steps: capture pictures of the pre-trained rehabilitation training actions with a camera, and newly build or update a rehabilitation training action image data set; build a Lite-HRnet network model, and train and fine-tune it with the open-source COCO data set and a user-defined data set, respectively; cluster, group and label the data with the Kmeans algorithm to construct an individualized data set; and extract the key frames, feed them to the Lite-HRnet network for training, obtain the key point information, and perform KNN classification, thereby completing the similarity calculation and threshold detection. The invention reduces the manual effort required by complex data and achieves largely automatic data processing; the adopted algorithm model balances running efficiency against high-resolution performance, provides a solid basis for subsequent work, and ensures the reliability and convenience of standardized rehabilitation training evaluation.

Description

User-defined rehabilitation training monitoring and evaluating method based on Lite-HRNet
Technical Field
The invention belongs to the field of human body posture estimation, and particularly relates to a user-defined rehabilitation training monitoring and evaluating method based on Lite-HRNet.
Background
With population aging, industrial and traffic accidents, natural disasters, work pressure, disease and other causes, the number of people needing rehabilitation is growing at an alarming rate year by year. This expansion of demand has produced a shortage of caregivers and low rehabilitation efficiency in the rehabilitation field. Moreover, conventional medical rehabilitation training generally requires guidance by professional technicians, to avoid physical injury from improper exercise, and a dedicated training environment; achieving efficient rehabilitation training anytime and anywhere therefore remains a wide-open research area. The development of machine vision technology offers the rehabilitation field a new direction: combining machine vision with rehabilitation techniques can help patients complete training tasks independently, improving their active participation and self-reliance, while the network enables real-time effect evaluation and training adjustment for patients at home, improving rehabilitation efficiency and effectively easing the burden on medical staff and family members.
At present, human pose estimation is mainly applied in fields such as face recognition, intelligent security and autonomous driving. Multi-person pose detection algorithms fall into two families: (1) Top-down methods, comprising human detection followed by single-person key point detection: a target-detection algorithm first finds each person in the image, and the key points of each detected person are then located. Such algorithms are generally accurate, but run slowly because every individual in the image must be processed separately. (2) Bottom-up methods, comprising key point detection followed by clustering of the detected key points: the key points of all people in the image are detected first and then grouped into individuals by cluster analysis. Their main advantages are high speed and insensitivity to the number of people in the image, but the uncertainty of the clustering step lowers the overall recognition accuracy.
At present, pose-detection algorithms are mostly validated on public open-source data sets. However, the requirements of a rehabilitation training regimen differ with the application background and the physical capacity of each user. The mainstream open-source data sets are COCO and MPII. COCO is a large-scale data set for object detection, segmentation, key point detection and captioning, consisting of 328,000 images annotated with 17 human key points. MPII contains 25,000 annotated images of more than 40k people, with 16 key points per person. Such data sets are typically far larger than needed and waste resources; moreover, their lack of specificity cannot satisfy users' individual customization needs. Self-made data sets in other studies, on the other hand, either lack labels or require manual labeling, and offer no machine autonomy.
From the above background, the five key problems that a deep-learning approach to rehabilitation training monitoring and evaluation must solve are: (1) The algorithm model must effectively suppress environmental factors such as lighting and occlusion, be strongly robust, and overcome the limitations of ordinary images. (2) The algorithm model must maintain high resolution and high processing quality, so that it can replace current caregiver guidance while keeping the evaluation standardized. (3) The algorithm model must be efficient: training time and intensity are limited, and recognition and estimation over the whole detected video sequence must finish during data processing, so pose estimation must be completed accurately in a short time. (4) The data set must be modifiable and customizable, and the data must be clustered and grouped automatically. (5) The evaluation standard must be expressed quantitatively, scoring the action under test and raising an alarm for dangerous actions.
Disclosure of Invention
To keep the algorithm model in a high-resolution, high-throughput state throughout the rehabilitation training monitoring and evaluation process, and to let rehabilitation patients state customized requirements and be evaluated against objective, computed data, the invention provides a user-defined rehabilitation training monitoring and evaluation method based on Lite-HRNet. The method both satisfies users' customization needs and resolves the conflict between performance and efficiency in applied pose estimation.
The invention discloses a user-defined rehabilitation training monitoring and evaluating method based on Lite-HRNet, which comprises the following steps:
step 1: and shooting a pre-trained posture action picture by adopting a camera, and newly building or updating a rehabilitation training action image data set.
Step 2: and constructing a Lite-HRnet network model, and training and fine-tuning the network model by adopting two data sets.
S21: and loading the COCO data set into a Lite-HRnet network model. And pre-training the constructed network, and positioning the preliminarily interested whole body region of the human body in the whole image.
S22: and loading the custom picture library into a Lite-HRnet network model. And fine-tuning the constructed data set, cutting the preliminary interested area from the full graph, and positioning each key point to be detected in the preliminary interested area.
Step 3: clustering, grouping and labeling through the Kmeans algorithm, and constructing an individualized data set.
S31: the information of key points such as the trunk, the abdomen, the two arms and the like of the human body is obtained in the process, and characteristic values such as angle values are extracted.
S32: and loading the data set and the characteristic indexes into a Kmeans algorithm model. And selecting Euclidean distance to calculate sample data to obtain a distance matrix for clustering. Distance formula:
$d(x, y) = \sqrt{\sum_{i=1}^{D} (x_i - y_i)^2}$
where D represents the number of attributes of the data object.
For example: the vertical support phase of walking, in which the supporting leg and the torso form a straight line, is chosen for the torso, and the angle α between the horizontal line and the heel-to-shoulder-joint line is taken as the torso-posture datum. For the side view of the head, the angle β between the line joining the top of the ear to the eye and the vertical is taken as the head-posture datum. For the frontal view of walking, the angle γ between the vertical through the top of the head and one arm is taken as the arm-posture datum. Torso-angle, head-angle and both arm-angle data are collected.
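For illustration only (not part of the claimed method; all names are hypothetical), an angle feature such as α, β or γ above can be computed from three key point coordinates as the angle at a vertex:

```python
import math

def joint_angle(a, b, c):
    """Angle at vertex b (in degrees) formed by key points a-b-c, each an (x, y) pair."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Torso angle alpha: heel-to-shoulder line vs. the horizontal through the heel.
heel, shoulder = (0.0, 0.0), (0.1, 1.0)
horizontal_ref = (1.0, 0.0)  # a point on the horizontal line through the heel
alpha = joint_angle(shoulder, heel, horizontal_ref)
```

A nearly upright torso thus yields an α close to 90 degrees.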
S33: continuing the iteration, the corresponding family center needs to be recalculated (updated):
$\mathrm{Center}_k = \frac{1}{|C_k|} \sum_{x \in C_k} x$
where K is the number of clusters and $C_k$ is the k-th cluster. When the change in the objective J between two iterations falls below a threshold, i.e. ΔJ < δ, the iteration terminates, and the clusters obtained are the final clustering result.
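A minimal sketch of steps S32-S33 (an illustrative reconstruction, assuming plain NumPy, Euclidean distance, and termination when the centers change by less than a tolerance; all names are hypothetical):

```python
import numpy as np

def kmeans(X, k, iters=100, tol=1e-6, seed=0):
    """Minimal K-means on feature rows X (e.g. [alpha, beta] angle pairs)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every sample to every cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.abs(new - centers).sum() < tol:  # change below threshold: stop
            break
        centers = new
    return labels, centers

# Two well-separated angle groups should land in two clusters.
X = np.array([[10., 5.], [12., 6.], [80., 70.], [82., 72.]])
labels, _ = kmeans(X, k=2)
```

The labeled groups can then be named manually (e.g. one label per pre-established training pose).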
Step 4: extracting the key frames to the Lite-HRnet network for training, obtaining the key point information, and performing KNN classification to complete the similarity calculation.
S41: reading a video image to be detected, and obtaining average interframe difference strength by using a frame difference algorithm on the read video frame image; and then carrying out convolution smoothing operation on the sequence to obtain a key video key frame.
S42: and loading the extracted key frame image to a Lite-HRNet network for attitude estimation, and acquiring the coordinate information of the key point.
S43: when the video to be tested is evaluated, the video to be tested is firstly classified by means of a KNN algorithm so as to determine the class of the data set, wherein two pixel points (x) in a two-dimensional space 1 ,y 1 ),(x 2 ,y 2 ) A distance therebetweenThe formula is as follows:
$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$
Using the KNN algorithm, the distances from the predicted point to all points are computed, stored and sorted, and the nearest K are taken; the majority class among them determines the rehabilitation training action mode.
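The KNN vote above can be sketched as follows (illustrative only; sample points and label names are hypothetical):

```python
import math
from collections import Counter

def knn_classify(query, samples, labels, k=3):
    """Classify `query` by majority vote among its k nearest labelled samples."""
    dists = sorted(
        (math.dist(query, s), lab) for s, lab in zip(samples, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])  # count classes among the k nearest
    return votes.most_common(1)[0][0]

# Two labelled action groups in a 2-D angle-feature space.
samples = [(10, 5), (11, 6), (12, 4), (80, 70), (82, 72), (81, 69)]
labels = ["swallow", "swallow", "swallow", "plank", "plank", "plank"]
pred = knn_classify((11, 5), samples, labels, k=3)
```

A query near the first group is voted into that group's action mode.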
S44: defining a limb vector: firstly, taking the trunk as a center, removing irrelevant key points (such as a nose and the like), defining the extending direction of the limbs as the direction of a human skeleton vector, and then defining the limb segment vector according to the direction of the skeleton vector.
S45: and calculating cosine values of vector included angles. Utilize skeleton joint point coordinate to carry out cosine value calculation to the template gesture and the contained angle of waiting to detect the same limb segment vector of gesture, indirectly judge the contained angle size between the same limb of 2 contrast gestures through the cosine value of contained angle to obtain the similarity of template training action and the action of awaiting measuring, promptly:
$\cos\theta_i = \dfrac{\vec{u}_i \cdot \vec{v}_i}{\|\vec{u}_i\|\,\|\vec{v}_i\|}$
where $\vec{u}_i$ is the limb vector of the template picture action, and $\vec{v}_i$ is the limb vector of the user's video action to be detected.
Because the bone segments of the human skeleton differ in length, the longer segments influence the posture-similarity calculation more strongly and the shorter ones less when the similarity is calculated, so the weights assigned to the joints differ; joints that are not involved receive weight 0. After each joint weight is determined, the weighted average of the cosine similarities over the 14 extracted joint angles of the human skeleton is computed, namely:
$S = \dfrac{\sum_{i=1}^{n} w_i \cos\theta_i}{\sum_{i=1}^{n} w_i}$

wherein n = 14 and the value range of i is [1, 14].
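The weighted similarity above can be sketched as follows (illustrative only; two limb vectors instead of 14, with hypothetical weights):

```python
import math

def cos_angle(u, v):
    """Cosine of the angle between two limb-segment vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def weighted_similarity(template, test, weights):
    """Weighted mean of per-limb cosine similarities; uninvolved limbs get weight 0."""
    total_w = sum(weights)
    return sum(w * cos_angle(u, v)
               for u, v, w in zip(template, test, weights)) / total_w

# Two limbs: one identical, one rotated 90 degrees; the longer limb weighted higher.
template = [(1.0, 0.0), (0.0, 2.0)]
test = [(1.0, 0.0), (2.0, 0.0)]
weights = [1.0, 2.0]
sim = weighted_similarity(template, test, weights)
```

An identical posture gives a similarity of 1; here the mismatched heavier limb pulls the score down.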
S46: calculating the cosine values of the limb-segment vector angles, obtaining the cosine similarity of the human postures in the key frame images, converting it to a 100-point scale, and outputting it, namely:
$\mathrm{score} = \dfrac{1 + S}{2} \times 100$
wherein score represents a score value for measuring the training action criteria.
S47: setting a threshold-detection stage: when the score falls below 70, an alarm is raised, since so large a deviation of the action is considered likely to harm physical health.
$\mathrm{result} = \begin{cases}\text{normal}, & \mathrm{score} \ge 70\\ \text{alarm}, & \mathrm{score} < 70\end{cases}$
In this way, the computed data can be uploaded over the network as a quantitative value of the standard evaluation and transmitted to the caregiver.
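Steps S46-S47 can be sketched as follows. The linear mapping of S ∈ [−1, 1] onto [0, 100] is an assumption, since the text only states that the cosine similarity is converted to a 100-point scale:

```python
def score_and_alarm(similarity, threshold=70.0):
    """Map cosine similarity in [-1, 1] to a 0-100 score; alarm below the threshold.

    The [-1, 1] -> [0, 100] mapping is an assumed reconstruction; the exact
    conversion formula is not given in the text.
    """
    score = (similarity + 1.0) / 2.0 * 100.0
    return score, score < threshold

score, alarm = score_and_alarm(0.9)          # close match to the template
bad_score, bad_alarm = score_and_alarm(0.1)  # large deviation
```

A close match scores well above the threshold, while a large deviation trips the alarm.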
The beneficial technical effects of the invention are as follows:
1. The invention combines Kmeans and KNN to process the data fully automatically: a Kmeans algorithm model first clusters and groups the unlabeled input data by the angle feature values, the groups are then labeled, and a KNN algorithm model later classifies the data to be tested. This reduces the manual effort that complex data would otherwise require and achieves largely automatic data processing.
2. To meet the pose-estimation requirement, a high-performance Lite-HRnet network algorithm model is adopted for this scenario: it is pre-trained on a public open-source data set and then retrained and fine-tuned on the custom data set. This network model overcomes the common shortcoming of the two existing families of pose-estimation algorithms, namely it maintains continuous high-resolution output while also attending to running efficiency.
3. The invention customizes the data set around the differing needs of rehabilitation patients and ensures data richness. The training regimen can be designed around the patient's physical condition, the data set can be updated at any time, and the patient's psychological needs are met. The image data of each posture action can be randomly varied for different environmental factors (e.g. illumination) to expand the data set. This improves the model's generalization and ensures the robustness of the whole system.
Drawings
FIG. 1 is a flow chart of a custom rehabilitation training monitoring and evaluation method based on Lite-HRNet of the present invention.
Fig. 2 is a vector diagram defining a limb.
Detailed Description
The invention is described in further detail below with reference to the figures and the detailed description.
The invention discloses a custom rehabilitation training monitoring and evaluating method based on Lite-HRNet, which is shown in figure 1 and comprises the following steps:
step 1: and shooting a pre-trained rehabilitation training action picture by using a camera, and newly building or updating a training action image data set.
The image data are captured with a camera or similar device; the collected data are high-definition, occlusion-free action images. To minimize the influence of lighting and similar factors on pose estimation, shooting should cover multiple directions and angles without occlusion. Furthermore, to improve the generalization of the network model, the exposure, saturation and hue of each single-action image are randomly varied to produce image data under different illumination and color conditions, expanding the data set.
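An illustrative sketch of the exposure/saturation augmentation described above (assumptions: RGB floats in [0, 1], saturation scaled about a per-pixel luminance proxy; hue shifting is omitted for brevity; all names hypothetical):

```python
import numpy as np

def augment(img, exposure=1.0, saturation=1.0, seed=None):
    """Exposure/saturation change for an RGB float image in [0, 1]."""
    if seed is not None:  # randomly draw the parameters, as for data-set expansion
        rng = np.random.default_rng(seed)
        exposure = rng.uniform(0.8, 1.2)
        saturation = rng.uniform(0.8, 1.2)
    grey = img.mean(axis=-1, keepdims=True)   # per-pixel luminance proxy
    out = grey + (img - grey) * saturation    # scale colourfulness about grey
    out = out * exposure                      # scale brightness
    return np.clip(out, 0.0, 1.0)

img = np.full((2, 2, 3), 0.5)
img[..., 0] = 0.8                             # reddish patch
brighter = augment(img, exposure=1.2, saturation=1.0)
```

Running the same source image through several random parameter draws yields the varied-illumination copies used to expand the data set.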
Step 2: and constructing a Lite-HRnet network model, and training and fine-tuning the network model by adopting two data sets.
To achieve high-resolution, high-performance pose estimation, the network model is built by applying shuffle blocks to the conventional HRNet, yielding a lightweight naive Lite-HRNet: the second 3 × 3 convolution in the stem of the lightweight HRNet is replaced with a shuffle block, and all ordinary residual blocks (each consisting of two 3 × 3 convolutions) are likewise replaced. The ordinary convolutions in the multi-resolution fusion are replaced with separable convolutions, forming the naive Lite-HRNet. Finally, an efficient conditional channel weighting unit is introduced to replace the computationally expensive point-wise (1 × 1) convolutions in the shuffle blocks, computing weights across channels and resolutions, which completes the required Lite-HRNet network model.
Next, to give the model good robustness, two data sets are used to train and fine-tune the network. First the public open-source COCO data set is loaded into the Lite-HRnet network model and the constructed network is pre-trained; then the custom image data set is loaded and the network is retrained and fine-tuned. This two-stage training performs better than training on a single data set and makes the model more task-specific, strengthening its fit to the video frames to be tested later.
Step 3: clustering, grouping and labeling through the Kmeans algorithm, and constructing an individualized data set.
Because manually labeling the data sets in their initial state costs considerable time and effort once the data grow even slightly large, the Kmeans algorithm is adopted for cluster analysis. Taking the training of the lumbar and back muscles as an example: a series of image data is captured first, and the following groups are pre-established:
Group = {flying-swallow pose, three-point support, five-point support, plank support}
We then only need to set K to 4 to divide the data into four classes and label them as required. First, the coordinates of key points such as the torso, abdomen and both arms are extracted with the pose-estimation algorithm and the angle data are computed; then Euclidean distance is chosen to compute the sample data and obtain a distance matrix for clustering. Distance formula:
$d(x, y) = \sqrt{\sum_{i=1}^{D} (x_i - y_i)^2}$
where D is the number of attributes of the data object. For example (vector angles): the vertical support phase of walking, in which the supporting leg and the torso form a straight line, is chosen for the torso, and the angle α between the horizontal line and the heel-to-shoulder-joint line is taken as the torso-posture datum. For the side view of the head, the angle β between the line joining the top of the ear to the eye and the vertical is taken as the head-posture datum. For the frontal view of walking, the angle γ between the vertical through the top of the head and one arm is taken as the arm-posture datum. Torso-angle, head-angle and both arm-angle data are collected.
The iteration then continues, and the corresponding cluster centers must be recalculated (updated): the mean of all data objects in a cluster is that cluster's updated center. Denoting the center of the k-th cluster by $\mathrm{Center}_k$, the update rule is:
$\mathrm{Center}_k = \frac{1}{|C_k|} \sum_{x \in C_k} x$
where K is the number of clusters. When the change in J between two iterations falls below a threshold, i.e. ΔJ < δ, the iteration terminates, and the clusters obtained are the final clustering result. To guard against unavoidable errors, a small portion of erroneous data may be corrected by hand. For medium and large data sets this scheme effectively saves labor.
Step 4: extracting the key frames to the Lite-HRnet network for training, obtaining the key point information, and performing KNN classification to complete the similarity calculation.
Read the video to be tested and compute the average inter-frame difference strength of the read video frame images with a frame-difference algorithm; then apply a convolution smoothing operation to the resulting sequence to obtain the key frames of the video. Load the extracted key frame images into the Lite-HRNet network for pose estimation and obtain the key point coordinate information. When evaluating the video under test, first classify it with the KNN algorithm to determine its data-set category. For two pixel points $(x_1, y_1)$ and $(x_2, y_2)$ in a two-dimensional space, the distance formula is:
$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$
Using the KNN algorithm, the distances from the predicted point to all points are computed, stored and sorted, and the nearest K are taken; the majority class among them determines the rehabilitation training action category.
Next, the limb vectors are defined (as shown in FIG. 2): taking the torso as the center and removing irrelevant key points (e.g. the nose), the direction in which a limb extends is defined as the direction of the corresponding human skeleton vector, and the limb-segment vectors are then defined along those skeleton-vector directions. Then the cosine of each vector angle is computed: using the skeleton joint coordinates, the cosine of the angle between each limb-segment vector of the template posture and the corresponding vector of the posture to be detected is calculated; the cosine indirectly measures the angle between the same limbs of the two compared postures, and thus yields the similarity between the template action and the action under test, namely:
$\cos\theta_i = \dfrac{\vec{u}_i \cdot \vec{v}_i}{\|\vec{u}_i\|\,\|\vec{v}_i\|}$
where $\vec{u}_i$ is the limb vector of the template picture action, and $\vec{v}_i$ is the limb vector of the user's video action to be detected.
Because the bone segments of the human skeleton differ in length, the longer segments influence the posture-similarity calculation more strongly and the shorter ones less when the similarity is calculated, so the weights assigned to the joints differ; joints that are not involved receive weight 0. After each joint weight is determined, the weighted average of the cosine similarities over the 14 extracted joint angles of the human skeleton is computed, namely:
$S = \dfrac{\sum_{i=1}^{n} w_i \cos\theta_i}{\sum_{i=1}^{n} w_i}$

wherein n = 14 and the value range of i is [1, 14]. The cosine values of the limb-segment vector angles give the cosine similarity of the human postures in the key frame images, which is converted to a 100-point scale and output, namely:
$\mathrm{score} = \dfrac{1 + S}{2} \times 100$
where score represents the score used to measure how standard the action is.
A threshold-detection stage is provided: when the score falls below 70, an alarm is raised, since so large a deviation of the action is considered likely to harm physical health.
$\mathrm{result} = \begin{cases}\text{normal}, & \mathrm{score} \ge 70\\ \text{alarm}, & \mathrm{score} < 70\end{cases}$
In this way, the computed data can be uploaded over the network to the caregiver as a quantitative value of the standard evaluation.
The whole scheme of the invention provides a complete rehabilitation training monitoring and evaluation process, customizes the data set around the differing needs of rehabilitation patients, and ensures data richness. The exercise regimen can be designed around the patient's physical condition, the data set can be updated at any time, and the patient's psychological needs are met. The image data of each rehabilitation training action can be randomly varied in exposure, saturation, hue, etc. for different environmental factors (e.g. light color) to expand the data set. This improves the model's generalization and ensures the robustness of the whole system.
The invention combines Kmeans and KNN to process the data fully automatically: in the early stage, a Kmeans algorithm model clusters and groups the unlabeled input data by the angle feature values, the groups are then labeled, and in the later stage a KNN algorithm model classifies the data to be tested. This reduces the manual effort that complex data would otherwise require and achieves largely automatic data processing.
For the pose-estimation requirement, a high-performance Lite-HRnet network algorithm model is adopted; as a deep-learning model trained on a large number of samples, it offers strong generalization, strong adaptability and good portability. The method first pre-trains on a public open-source data set, then retrains and fine-tunes on the user-defined data set. The model's inference time is essentially stable at 0.16 s, with 0.42 GFLOPs and 1.76 M parameters. This network algorithm model overcomes the common shortcoming of the two existing families of pose-estimation algorithms, namely it maintains continuous high-resolution output while also attending to running efficiency. It performs well in real-time detection, provides a solid basis for subsequent work, and ensures the reliability and convenience of standardized rehabilitation training evaluation.

Claims (1)

1. A custom rehabilitation training monitoring and evaluating method based on Lite-HRNet is characterized by comprising the following steps:
step 1: shooting a pre-trained rehabilitation training action picture by using a camera, and newly building or updating an action image data set;
step 2: constructing a Lite-HRnet network model, and respectively adopting two data sets to train and fine-tune the network model;
s21: loading the COCO data set into the Lite-HRnet network model, pre-training the constructed network, and locating the preliminary whole-body region of interest within the full image, to obtain a first training set;
s22: loading the custom picture data set into the Lite-HRnet network model; fine-tuning on the constructed data set, cropping the preliminary region of interest from the full image, and locating each key point to be detected within it, to obtain a second training set;
step 3: clustering, grouping and labeling by the Kmeans algorithm to construct an individualized data set;
s31: extracting a characteristic value of the key point information;
s32: loading the data set and the characteristic indexes into a Kmeans algorithm model, selecting Euclidean distances to calculate sample data to obtain a distance matrix for clustering, wherein the distance formula is as follows:
$d(x, y) = \sqrt{\sum_{i=1}^{D} (x_i - y_i)^2}$
wherein D represents the number of attributes of the data object;
s33: continuing the iteration, the corresponding cluster centers need to be recalculated or updated:
$\mathrm{Center}_k = \frac{1}{|C_k|} \sum_{x \in C_k} x$
wherein K represents the number of clusters; when the change in J between two iterations is smaller than a threshold, i.e. ΔJ < δ, the iteration terminates, and the clusters obtained are the final clustering result;
step 4: extracting the key frames to the Lite-HRnet network for training, obtaining key point information, and performing KNN classification to complete the similarity calculation;
s41: reading the video image to be detected, and obtaining the average inter-frame difference strength of the read video frame images with a frame-difference algorithm; then performing a convolution smoothing operation on the sequence to obtain the key frames of the video;
s42: loading the extracted key frame image to a Lite-HRNet network for attitude estimation to obtain key point coordinate information;
s43: when evaluating the video to be detected, classifying it first by means of the KNN algorithm to determine the category of the data set, wherein for two pixel points $(x_1, y_1)$ and $(x_2, y_2)$ in a two-dimensional space the distance formula is:
$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$
calculating the distances between the predicted point and all points with the KNN algorithm, then storing and sorting them, and taking the nearest K values; the majority class among them determines the type of the rehabilitation training mode;
S44: defining the limb vectors: first, taking the trunk as the center and removing irrelevant key points, the extension directions of the limbs are defined as the skeleton vector directions of the human body; the limb segment vectors are then defined from these skeleton vector directions;
S45: calculating the cosine values of the vector included angles: using the skeleton joint point coordinates, the cosine of the included angle between each limb segment vector of the template action pose and the corresponding limb segment vector of the action pose to be detected is computed; the cosine value indirectly indicates the size of the included angle between the same limbs of the two compared poses, thereby yielding the similarity between the template pose and the pose to be detected, namely:
$$\cos\theta_i=\frac{\vec{a}_i\cdot\vec{b}_i}{\left|\vec{a}_i\right|\left|\vec{b}_i\right|}$$

wherein $\vec{a}_i$ is a limb vector of the action pose in the template picture, and $\vec{b}_i$ is the corresponding limb vector of the action pose in the user's video to be detected;
after the weight of each joint point is determined, the weighted average of the cosine similarities of the limb included angles of the extracted 17-keypoint human skeleton is calculated, namely:
$$\overline{\cos\theta}=\frac{\sum_{i=1}^{n}w_i\cos\theta_i}{\sum_{i=1}^{n}w_i}$$

wherein n has a value of 14, and i ranges over [1, 14];
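S45's cosine comparison and the weighted average above can be sketched as follows; the limb vectors and weights are illustrative inputs, since the exact joint weights are not specified in the patent:

```python
import numpy as np

def limb_cosines(template_vecs, user_vecs):
    """Cosine of the angle between each pair of corresponding limb vectors."""
    a = np.asarray(template_vecs, dtype=float)
    b = np.asarray(user_vecs, dtype=float)
    return (a * b).sum(axis=1) / (np.linalg.norm(a, axis=1)
                                  * np.linalg.norm(b, axis=1))

def weighted_similarity(cosines, weights):
    """Weighted mean of per-limb cosine similarities (assumed form of the
    patent's weighted average; the true joint weights are not given)."""
    w = np.asarray(weights, dtype=float)
    return float((w * cosines).sum() / w.sum())
```

For identical poses every cosine is 1 and the similarity is 1; a limb rotated by 90° contributes a cosine of 0 and pulls the weighted mean down.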
S46: after calculating the cosine values of the limb segment vector included angles and obtaining the cosine similarity of the human poses in the key frame images, the similarity is converted to a percentile scale and output, namely:
$$score=\overline{\cos\theta}\times 100$$

wherein score represents the score measuring how standard the motion is, and $\overline{\cos\theta}$ is the weighted average cosine similarity;
S47: setting a threshold detection step: when the score is lower than 70, an alarm is raised, since an overly large action deviation is considered a risk to physical health:

$$result=\begin{cases}\text{qualified}, & score\geq 70\\ \text{alarm}, & score<70\end{cases}$$
and uploading the calculated data over the network to the nursing staff as a quantitative value for the standard evaluation.
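S46–S47 reduce to a score conversion and a threshold check; `evaluate` below is a minimal sketch assuming the percentile conversion is a plain ×100 scaling of the weighted mean cosine similarity:

```python
def evaluate(mean_cosine, threshold=70):
    """Sketch of S46-S47: convert the mean cosine similarity to a
    percentile score and flag an alarm when it falls below the
    threshold (70, as stated in the patent text)."""
    score = round(mean_cosine * 100, 1)
    alarm = score < threshold  # too large a deviation from the template
    return score, alarm
```

The (score, alarm) pair is what would then be uploaded to the nursing staff as the quantitative evaluation result.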
CN202211237629.9A 2022-10-10 2022-10-10 User-defined rehabilitation training monitoring and evaluating method based on Lite-HRNet Pending CN115661856A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211237629.9A CN115661856A (en) 2022-10-10 2022-10-10 User-defined rehabilitation training monitoring and evaluating method based on Lite-HRNet


Publications (1)

Publication Number Publication Date
CN115661856A true CN115661856A (en) 2023-01-31

Family

ID=84987663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211237629.9A Pending CN115661856A (en) 2022-10-10 2022-10-10 User-defined rehabilitation training monitoring and evaluating method based on Lite-HRNet

Country Status (1)

Country Link
CN (1) CN115661856A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580813A (en) * 2023-07-10 2023-08-11 Southwest Jiaotong University Deep learning-based lumbar muscle exercise monitoring and evaluating device and method
CN117097909A (en) * 2023-10-20 2023-11-21 深圳市星易美科技有限公司 Distributed household audio and video processing method and system
CN117097909B (en) * 2023-10-20 2024-02-02 深圳市星易美科技有限公司 Distributed household audio and video processing method and system

Similar Documents

Publication Publication Date Title
CN105787439B (en) A kind of depth image human synovial localization method based on convolutional neural networks
CN112861624A (en) Human body posture detection method, system, storage medium, equipment and terminal
CN109815826B (en) Method and device for generating face attribute model
WO2020224123A1 (en) Deep learning-based seizure focus three-dimensional automatic positioning system
CN115661856A (en) User-defined rehabilitation training monitoring and evaluating method based on Lite-HRNet
CN107423730A (en) A kind of body gait behavior active detecting identifying system and method folded based on semanteme
CN109635727A (en) A kind of facial expression recognizing method and device
CN111597946B (en) Processing method of image generator, image generation method and device
CN110490109B (en) Monocular vision-based online human body rehabilitation action recognition method
CN109410168A (en) For determining the modeling method of the convolutional neural networks model of the classification of the subgraph block in image
CN110827304B (en) Traditional Chinese medicine tongue image positioning method and system based on deep convolution network and level set method
CN112102947B (en) Apparatus and method for body posture assessment
CN112465905A (en) Characteristic brain region positioning method of magnetic resonance imaging data based on deep learning
CN106951834B (en) Fall-down action detection method based on old-age robot platform
CN110321827A (en) A kind of pain level appraisal procedure based on face pain expression video
CN115346272A (en) Real-time tumble detection method based on depth image sequence
CN106846372A (en) Human motion quality visual A+E system and method
CN114601454A (en) Method for monitoring bedridden posture of patient
Dantcheva et al. Expression recognition for severely demented patients in music reminiscence-therapy
Li et al. Infant monitoring system for real-time and remote discomfort detection
CN114550299A (en) System and method for evaluating daily life activity ability of old people based on video
CN107967941A (en) A kind of unmanned plane health monitoring method and system based on intelligent vision reconstruct
CN114299279A (en) Unmarked group rhesus monkey motion amount estimation method based on face detection and recognition
CN112149613B (en) Action pre-estimation evaluation method based on improved LSTM model
CN116805433B (en) Human motion trail data analysis system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination