CN117746305A - Medical care operation training method and system based on automatic evaluation

Publication number
CN117746305A
Authority
CN
China
Prior art keywords
training
action
motion
node
sensing devices
Prior art date
Legal status
Granted
Application number
CN202410191159.XA
Other languages
Chinese (zh)
Other versions
CN117746305B (en)
Inventor
王晓玲
陈静
阮顺莉
陈芳
李颖
吴锦晖
Current Assignee
West China Hospital of Sichuan University
Original Assignee
West China Hospital of Sichuan University
Priority date
Filing date
Publication date
Application filed by West China Hospital of Sichuan University
Priority to CN202410191159.XA
Publication of CN117746305A
Application granted; publication of CN117746305B
Legal status: Active

Abstract

The invention provides a medical care operation training method and system based on automatic evaluation, relating to the technical field of motion detection. The method comprises the following steps: after the medical staff to be trained wears a plurality of wearable sensing devices, acquiring the relative positional relationships among the wearable sensing devices; inputting the relative positional relationships into a trained action analysis model to obtain target space vectors; shooting training videos of the medical staff to be trained through cameras; determining the timestamps of the training nodes according to the training videos; determining, according to the timestamps, the training space vectors among the wearable sensing devices at each training node; determining a training evaluation score for the preset operation action according to the target space vectors and the training space vectors; and generating an operation training report according to the training evaluation score. The invention avoids the subjectivity of manual evaluation, improves the comprehensiveness, objectivity and accuracy of the evaluation, and improves the training effect.

Description

Medical care operation training method and system based on automatic evaluation
Technical Field
The invention relates to the technical field of motion detection, in particular to a medical care operation training method and system based on automatic evaluation.
Background
In the related art, evaluation of the training actions of medical staff depends mainly on manual evaluation. Manual evaluation has small coverage and low efficiency and cannot evaluate the training actions comprehensively; it is also highly subjective, so the training effect of the medical staff is difficult to judge accurately.
The information disclosed in the background section of this application is only for enhancement of understanding of the general background of this application and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention provides a medical care operation training method and system based on automatic evaluation, which can solve the technical problem that the training effect of medical care personnel is difficult to accurately judge.
According to a first aspect of the present invention, there is provided a medical care operation training method based on automatic evaluation, comprising:
after the medical staff to be trained wears the plurality of wearable sensing devices, acquiring relative position relations among the wearable sensing devices under the condition that the medical staff to be trained keeps a preset posture, wherein the wearable sensing devices comprise head sensing devices, shoulder sensing devices, elbow sensing devices, wrist sensing devices, waist sensing devices, knee sensing devices and ankle sensing devices, and the relative position relations comprise space vectors among the space positions of the wearable sensing devices;
inputting the relative positional relationship into a trained action analysis model to obtain target space vectors of a plurality of action nodes of a preset operation action, wherein the target space vectors are used for representing the space vectors between the spatial positions of each wearable sensing device when a person of the body type of the medical staff to be trained holds the standard posture corresponding to the action node;
shooting training videos of medical staff to be trained through cameras with multiple visual angles in the process that the medical staff to be trained executes the preset operation actions;
determining, according to the training video, the timestamp corresponding to each training node that the medical staff to be trained moves to;
determining training space vectors among the wearable sensing devices at the time corresponding to the time stamp according to the time stamp;
determining a training evaluation score of a preset operation action according to the target space vector and the training space vector;
and generating an operation training report according to the training evaluation score.
According to a second aspect of the present invention, there is provided an automatic assessment-based medical practice training system comprising:
the relative position relation module is used for acquiring the relative position relation among the wearable sensing devices under the condition that the medical staff to be trained keeps a preset posture after the medical staff to be trained wears the plurality of wearable sensing devices, wherein the wearable sensing devices comprise head sensing devices, shoulder sensing devices, elbow sensing devices, wrist sensing devices, waist sensing devices, knee sensing devices and ankle sensing devices, and the relative position relation comprises space vectors among the space positions of the wearable sensing devices;
The target space vector module is used for inputting the relative position relation into the trained motion analysis model to obtain target space vectors of a plurality of motion nodes of the preset operation motion, wherein the target space vectors are used for representing space vectors among the space positions of each wearable sensing device under the condition that the body type of the medical staff to be trained keeps the standard posture corresponding to the motion nodes;
the training video module is used for shooting training videos of medical staff to be trained through cameras with multiple visual angles in the process that the medical staff to be trained executes the preset operation actions;
the timestamp module is used for determining, according to the training video, the timestamp corresponding to each training node that the medical staff to be trained moves to;
the training space vector module is used for determining training space vectors among the wearable sensing devices at the time corresponding to the time stamp according to the time stamp;
the training evaluation score module is used for determining training evaluation scores of preset operation actions according to the target space vector and the training space vector;
and the operation training report module is used for generating an operation training report according to the training evaluation score.
According to a third aspect of the present invention, there is provided an automatic assessment-based medical practice training apparatus comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored by the memory to perform the automated assessment-based healthcare operation training method.
According to a fourth aspect of the present invention, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the automatic assessment based healthcare operation training method.
The technical effects are as follows: according to the invention, the action nodes reached by the medical staff to be trained while executing the preset operation action can be accurately identified and the training space vectors corresponding to the action nodes obtained, and the training evaluation score is determined together with the target space vectors of the standard posture, which are output by the action analysis model and matched to the body type of the medical staff to be trained, so that evaluation during medical operation training is automated, the subjectivity of manual evaluation is avoided, the comprehensiveness, objectivity and accuracy of the evaluation are improved, and the training effect is improved. When identifying training nodes, the angle between the (i-1)th motion vector and the ith motion vector of each key point is determined and used as the action node identification parameter; whether the action direction of the medical staff to be trained has changed is judged from this angle, and the timestamp of the moment at which the action direction changes is determined, so the accuracy of training node detection is improved through operations on motion vectors. When determining the posture evaluation score, the cosine similarity is calculated between the target space vector of each action node and the training space vector of the training node with the same sequence number, and the posture evaluation score is determined from the average of the cosine similarities, which makes it convenient to determine whether the posture of the medical staff to be trained is standard at each action node and improves the accuracy, scientificity and objectivity of the posture evaluation score. When determining the action evaluation score, it is determined from the first action vectors and the second action vectors, and the calculation fully considers the differences between the training action and the standard action in the x, y and z directions as well as in amplitude, improving the accuracy, scientificity and objectivity of the action evaluation score. When training the action analysis model, the loss function is determined from the error in the number of predicted action nodes together with the errors between the prediction space vectors and the sample space vectors, so that training reduces both the vector errors and the error between the number of predicted nodes and the number of action nodes, improving the accuracy of the action analysis model in a targeted manner.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed. Other features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the invention or the solutions of the prior art, the drawings necessary for the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that other embodiments may be obtained from these drawings without inventive effort by a person skilled in the art.
FIG. 1 schematically illustrates a flow chart of a method of training a healthcare operation based on automatic assessment in accordance with an embodiment of the present invention;
FIG. 2 schematically illustrates a block diagram of an automated assessment based healthcare operation training system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 1 schematically illustrates a flow chart of a method for training a healthcare operation based on automatic assessment according to an embodiment of the present invention, the method comprising:
step S101, after a medical staff to be trained wears a plurality of wearable sensing devices, acquiring relative position relations among the wearable sensing devices under the condition that the medical staff to be trained keeps a preset posture, wherein the wearable sensing devices comprise head sensing devices, shoulder sensing devices, elbow sensing devices, wrist sensing devices, waist sensing devices, knee sensing devices and ankle sensing devices, and the relative position relations comprise space vectors among space positions of the wearable sensing devices;
step S102, inputting a relative position relation into a trained action analysis model to obtain target space vectors of a plurality of action nodes of a preset operation action, wherein the target space vectors are used for representing space vectors between space positions of each wearable sensing device under the condition that the body type of the medical staff to be trained keeps a standard posture corresponding to the action node;
Step S103, shooting training videos of medical staff to be trained through cameras with multiple visual angles in the process that the medical staff to be trained executes the preset operation actions;
step S104, determining, according to the training video, the timestamp corresponding to each training node that the medical staff to be trained moves to;
step S105, determining training space vectors among the wearable sensing devices at the time corresponding to the time stamp according to the time stamp;
step S106, determining a training evaluation score of a preset operation action according to the target space vector and the training space vector;
step S107, generating an operation training report according to the training evaluation score.
According to the medical care operation training method based on automatic evaluation of the embodiment of the invention, the action nodes reached by the medical staff to be trained while executing the preset operation action can be accurately identified, and the training space vectors corresponding to the action nodes obtained; the training evaluation score is then determined together with the target space vectors of the standard posture, which are output by the action analysis model and matched to the body type of the medical staff to be trained. Evaluation during medical operation training is thus automated, avoiding the subjectivity of manual evaluation, improving the comprehensiveness, objectivity and accuracy of the evaluation, and helping to improve the training effect.
According to one embodiment of the present invention, after the medical staff to be trained wears the plurality of wearable sensing devices, a relative positional relationship between the wearable sensing devices is acquired under the condition that the medical staff to be trained maintains a preset posture, wherein the wearable sensing devices comprise head sensing devices, shoulder sensing devices, elbow sensing devices, wrist sensing devices, waist sensing devices, knee sensing devices and ankle sensing devices, and the relative positional relationship comprises spatial vectors between spatial positions of the wearable sensing devices.
For example, to perform cardiopulmonary resuscitation training, the medical staff to be trained wears a plurality of wearable sensing devices, including a head sensing device, shoulder sensing devices, elbow sensing devices, wrist sensing devices, a waist sensing device, knee sensing devices and ankle sensing devices, each worn at the corresponding body position. The relative positional relationships between the sensing devices can then be determined: taking the head sensing device as the reference, the space vector in three-dimensional space between the shoulder sensing device and the head sensing device is one relative positional relationship, and each other wearable sensing device can likewise take the head sensing device as the reference, with the space vector between it and the head sensing device determined in the same way. The preset posture may be a static posture of the medical staff to be trained, such as standing or sitting, and the acquired relative positional relationships can be used to represent the body type of the medical staff to be trained. The wearable sensing device may be a magnetic sensor, an ultrasonic sensor or another sensor capable of determining position information; the invention does not limit the specific type of the wearable sensing device.
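As an illustration of this step, the sketch below builds the relative positional relationship from raw sensor positions, taking the head sensing device as the reference. The sensor names, example coordinates and the NumPy representation are assumptions made for illustration, not part of the claimed method.

```python
import numpy as np

# Hypothetical 3-D positions (metres) reported by each wearable sensing
# device while the trainee holds the preset static posture.
positions = {
    "head":       np.array([0.00, 0.00, 1.70]),
    "shoulder_l": np.array([-0.20, 0.00, 1.50]),
    "elbow_l":    np.array([-0.25, 0.05, 1.20]),
    "wrist_l":    np.array([-0.25, 0.10, 0.95]),
    "waist":      np.array([0.00, 0.00, 1.05]),
    "knee_l":     np.array([-0.10, 0.02, 0.50]),
    "ankle_l":    np.array([-0.10, 0.02, 0.08]),
}

def relative_position_relation(positions, reference="head"):
    """Space vector from the reference device to every other device."""
    ref = positions[reference]
    return {name: pos - ref for name, pos in positions.items() if name != reference}

relation = relative_position_relation(positions)
print(relation["wrist_l"])  # head -> left wrist vector; the full set encodes body type
```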
According to one embodiment of the present invention, in step S102, the trained motion analysis model is input with respect to the position relationship, and a target space vector of a plurality of motion nodes of the preset operation motion is obtained, where the target space vector is used to represent a space vector between spatial positions of each wearable sensing device when the body type of the medical staff to be trained maintains the standard posture corresponding to the motion node.
For example, in cardiopulmonary resuscitation, the medical staff pressing down with both hands is one action procedure, and the rebound after the press is another; the critical point at which the force direction reverses between two consecutive action procedures is an action node. The relative positional relationship is input into the trained action analysis model to obtain the target space vectors of the plurality of action nodes of the preset operation action. The target space vectors are the space vectors between the sensing devices when a person of the body type of the medical staff to be trained holds the standard posture corresponding to the action node, and can also serve as the standard space vectors for the posture of that action node.
According to one embodiment of the present invention, in step S103, training videos of the medical staff to be trained are captured by cameras with multiple angles of view during the process of the medical staff to be trained performing the preset operation actions.
For example, in the process of training the cardiopulmonary resuscitation actions of the medical staff to be trained, training videos are shot by cameras from a plurality of directions, such as the front, the two sides and the back of the medical staff to be trained. In this way, the position information of each part of the medical staff to be trained can be determined by combining the training videos shot from multiple view angles, avoiding the situation where a single view angle cannot capture all parts of the medical staff to be trained.
According to one embodiment of the present invention, in step S104, the timestamp corresponding to each training node that the medical staff to be trained moves to is determined according to the training video.
For example, a training video consists of tens of video frames per second, and the time in the training video corresponding to a video frame is its timestamp.
According to one embodiment of the present invention, determining, according to the training video, a timestamp corresponding to the movement of the medical staff to be trained to each training node includes: determining a plurality of key points of the medical staff to be trained in each video frame of the training video of each view angle, wherein the key points comprise a head key point, a shoulder key point, an elbow key point, a wrist key point, a waist key point, a knee key point and an ankle key point; respectively determining the position information of each key point in the ith-1 video frame of the training video of each view angle, the position information in the ith video frame of the training video of each view angle and the position information in the (i+1) th video frame of the training video of each view angle; according to the position information of each key point in the ith-1 video frame of the training video of each view angle and the position information of each key point in the ith video frame of the training video of each view angle, the ith-1 motion vector of each key point in the training video of each view angle is obtained; according to the position information of each key point in the ith video frame of the training video of each view angle and the position information of each key point in the (i+1) th video frame of the training video of each view angle, the ith motion vector of each key point in the training video of each view angle is obtained; determining an action node identification parameter of an ith video frame according to the ith-1 motion vector of each key point and the ith motion vector of each key point; judging whether the timestamp of the ith video frame is the moment corresponding to the training node according to the action node identification parameter of the ith video frame; and if the time stamp of the ith video frame is the time corresponding to the training node, determining the time stamp of the ith video frame as the time stamp corresponding to the training node.
For example, a continuous 1-minute cardiopulmonary resuscitation training video contains thousands of video frames. From each video frame of the training video, an image processing model is used to determine the positions of the plurality of sensing devices worn by the medical staff to be trained, namely the head key point, shoulder key points, elbow key points, wrist key points, waist key point, knee key points and ankle key points. The position information of each key point is determined in the three consecutive video frames i-1, i and i+1. Subtracting the position information of a key point in the (i-1)th video frame from its position information in the ith video frame gives the (i-1)th motion vector of the key point; similarly, subtracting its position information in the ith video frame from that in the (i+1)th video frame gives the ith motion vector of the key point. In the same manner, the motion vectors of each key point in the training video of each view angle can be obtained. From the ith and (i-1)th motion vectors, it can be judged whether the moment corresponding to the ith video frame is a critical point at which the action direction changes, that is, a training node; if so, the moment corresponding to the ith video frame is the timestamp corresponding to a training node.
According to one embodiment of the invention, determining the action node identification parameter of the ith video frame according to the (i-1)th motion vector of each key point and the ith motion vector of each key point comprises: determining the action node identification parameter $\theta_i$ of the ith video frame according to formula (1):

$$\theta_i=\max_{1\le k\le m,\;1\le j\le n}\arccos\frac{\vec{v}_{k,j}^{\,i-1}\cdot\vec{v}_{k,j}^{\,i}}{\left\|\vec{v}_{k,j}^{\,i-1}\right\|\left\|\vec{v}_{k,j}^{\,i}\right\|}\qquad(1)$$

wherein $\vec{v}_{k,j}^{\,i-1}$ is the (i-1)th motion vector of the jth key point in the training video of the kth view angle, $\vec{v}_{k,j}^{\,i}$ is the ith motion vector of the jth key point in the training video of the kth view angle, n is the number of key points, m is the number of cameras, 1 ≤ j ≤ n, 1 ≤ k ≤ m, k, j, m and n are positive integers, max is the maximum function, and arccos is the inverse cosine function.

According to one embodiment of the invention, the fraction inside arccos is the cosine similarity between the (i-1)th motion vector and the ith motion vector of the jth key point in the training video of the kth view angle, and arccos converts this similarity into the angle between the two motion vectors. The action node identification parameter $\theta_i$ is the maximum of these angles over the n key points in the m view-angle training videos; taking the maximum ensures that a direction change of any key point in any view angle is detectable, so that it can be determined whether the action direction of the medical staff to be trained changes during training and whether the timestamp of the ith video frame is the moment corresponding to a training node. In an example, it may be judged whether the action node identification parameter is greater than or equal to a preset angle threshold, for example π/2: if it is, the direction of motion of at least one key point in at least one view angle has reversed between the (i-1)th motion vector and the ith motion vector, that is, the action direction of the medical staff to be trained has changed, so the timestamp of the ith video frame is the timestamp corresponding to a training node.
In this way, according to the i-1 th motion vector of each key point and the i-th motion vector of each key point, the angle between the motion vectors is determined and used as an action node identification parameter, and whether the action direction of the medical staff to be trained changes or not is judged according to the angle between the motion vectors, so that the time stamp corresponding to the moment when the action direction of the medical staff to be trained changes is determined, and the judgment accuracy of the training node can be improved through the operation of the motion vectors.
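A minimal sketch of this identification step under formula (1), assuming key-point positions have already been extracted per frame and per view angle; the array layout and the π/2 threshold are illustrative assumptions.

```python
import numpy as np

def training_node_timestamps(keypoints, fps, threshold=np.pi / 2):
    """keypoints: array of shape (views m, frames, keypoints n, 3).

    Returns the timestamps (seconds) of frames whose action node
    identification parameter (formula (1)) reaches the threshold.
    """
    # (i-1)-th motion vector: frame i minus frame (i-1); i-th: frame (i+1) minus frame i.
    vec_prev = keypoints[:, 1:-1] - keypoints[:, :-2]    # shape (m, F-2, n, 3)
    vec_next = keypoints[:, 2:] - keypoints[:, 1:-1]

    dot = np.sum(vec_prev * vec_next, axis=-1)
    norms = np.linalg.norm(vec_prev, axis=-1) * np.linalg.norm(vec_next, axis=-1)
    cos_sim = dot / np.clip(norms, 1e-9, None)
    angles = np.arccos(np.clip(cos_sim, -1.0, 1.0))      # angle per view/frame/keypoint

    theta = angles.max(axis=(0, 2))                      # max over views and keypoints
    node_frames = np.nonzero(theta >= threshold)[0] + 1  # +1 recovers frame index i
    return node_frames / fps
```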
According to one embodiment of the present invention, in step S105, a training space vector between the wearable sensing devices at the time corresponding to the time stamp is determined according to the time stamp.
For example, during the training process, the space vectors between the wearable sensing devices at the time corresponding to a timestamp, i.e., the training space vectors, can be determined while the medical staff to be trained performs the training action. For example, the space vector between the position of each wearable sensing device and that of the head sensing device at the time corresponding to the timestamp is a training space vector.
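Continuing the sketch, the training space vectors can be read out at each node timestamp by picking the sensor sample closest in time; the sampling-rate handling and the head-device index are hypothetical assumptions of this illustration.

```python
import numpy as np

def training_space_vectors(sensor_positions, sensor_rate_hz, node_timestamps,
                           head_index=0):
    """sensor_positions: array (samples, devices M, 3) recorded during training.

    For each training-node timestamp, returns the space vectors from the
    head sensing device (device 0 here, by assumption) to every other device.
    """
    vectors = []
    for ts in node_timestamps:
        idx = int(round(ts * sensor_rate_hz))      # nearest sensor sample in time
        frame = sensor_positions[idx]
        vectors.append(frame - frame[head_index])  # M vectors; the head row is zero
    return np.stack(vectors)                       # shape (nodes, M, 3)
```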
According to one embodiment of the present invention, in step S106, a training evaluation score of a preset operation action is determined according to the target space vector and the training space vector.
For example, the training evaluation score of the preset operation action is determined from the target space vector and the training space vector, that is, the space vector corresponding to the standard posture and the space vector obtained based on the posture of the medical staff to be trained when training.
According to one embodiment of the present invention, determining the training evaluation score of the preset operation action according to the target space vectors and the training space vectors comprises: determining a posture evaluation score according to the target space vector of each action node and the training space vector of the training node with the same sequence number as the action node; determining the first position coordinates of the plurality of wearable sensing devices corresponding to each action node according to the target space vectors; determining the first action vector of each wearable sensing device between adjacent action nodes according to the first position coordinates; determining the second position coordinates of the plurality of wearable sensing devices corresponding to each training node according to the training space vectors; determining the second action vector of each wearable sensing device between adjacent training nodes according to the second position coordinates; determining an action evaluation score according to the first action vectors and the second action vectors; and determining the training evaluation score according to the posture evaluation score and the action evaluation score.
For example, the posture evaluation score may be determined from the target space vectors of the plurality of action nodes output by the neural network model and the training space vectors of the plurality of training nodes having the same sequence numbers as the action nodes.
According to one embodiment of the present invention, determining the posture evaluation score according to the target space vector of each action node and the training space vector of the training node with the same sequence number as the action node comprises:
determining the posture evaluation score $P$ according to formula (2):

$$P=\frac{1}{N\,(M-1)}\sum_{s=1}^{N}\sum_{t=2}^{M}\frac{\vec{A}_{s,t}\cdot\vec{B}_{s,t}}{\left\|\vec{A}_{s,t}\right\|\left\|\vec{B}_{s,t}\right\|}\qquad(2)$$

wherein $\vec{A}_{s,t}$ is the target space vector between the tth wearable sensing device and the 1st wearable sensing device at the sth action node, the 1st wearable sensing device being the head sensing device; $\vec{B}_{s,t}$ is the training space vector between the tth wearable sensing device and the 1st wearable sensing device at the sth training node; M is the total number of wearable sensing devices; N is the total number of action nodes; s ≤ N, t ≤ M, and s, t, N and M are all positive integers.

According to one embodiment of the present invention, each summand in formula (2) is the cosine similarity between the target space vector of the sth action node and the training space vector of the sth training node for the tth wearable sensing device; the larger this cosine similarity, the closer the action posture of the medical staff to be trained is to the standard posture. Averaging the cosine similarities of the target space vectors and the training space vectors over all action nodes determines the posture evaluation score; the larger the posture evaluation score, the closer the posture of the medical staff to be trained during training is to the standard posture.
In this way, the cosine similarity is calculated between the target space vector of each action node and the training space vector of the training node with the same sequence number as the action node, and the posture evaluation score is determined from the average of the cosine similarities. This makes it convenient to determine whether the posture of the medical staff to be trained is standard at each action node, and improves the accuracy, scientificity and objectivity of the posture evaluation score.
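A sketch of the posture evaluation score of formula (2), averaging the cosine similarity between target and training space vectors over all action nodes and devices; treating the head-referenced vectors as M-1 rows per node is an assumption of this illustration.

```python
import numpy as np

def posture_evaluation_score(target_vectors, training_vectors):
    """target_vectors, training_vectors: arrays (N nodes, M-1 vectors, 3),
    each row a space vector from the head device to another device.

    Returns the average cosine similarity of formula (2)."""
    dot = np.sum(target_vectors * training_vectors, axis=-1)
    norms = (np.linalg.norm(target_vectors, axis=-1)
             * np.linalg.norm(training_vectors, axis=-1))
    cos_sim = dot / np.clip(norms, 1e-9, None)  # guard against zero-length vectors
    return float(cos_sim.mean())
```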
According to an embodiment of the present invention, the first position coordinates of the plurality of sensing devices corresponding to each action node, that is, the spatial positions of the sensing devices in three-dimensional space at each action node, are determined according to the target space vectors. For example, the first coordinate position of the head sensing device may be determined from the position information detected by the head sensing device as the reference, and the first position coordinates of the other wearable sensing devices may then be determined from the target space vectors, which represent the relative positional relationship between each wearable sensing device and the head sensing device.
In an example, the first action vector of the head sensing device between the first action node and the second action node is determined from the first position coordinate of the head sensing device at the first action node and its first position coordinate at the second action node. In this way, the first action vector of each wearable sensing device between each pair of adjacent action nodes can be obtained, and the second action vector of each wearable sensing device between each pair of adjacent training nodes can be obtained in a similar manner.
According to one embodiment of the present invention, determining an action evaluation score from the first action vector and the second action vector includes: determining an action evaluation score according to equation (3)
$$D=\frac{1}{M\,(N-1)}\sum_{t=1}^{M}\sum_{s=1}^{N-1}\left[1-\max\left(\left|r^{x}_{t,s}-r^{y}_{t,s}\right|,\left|r^{x}_{t,s}-r^{z}_{t,s}\right|,\left|r^{y}_{t,s}-r^{z}_{t,s}\right|\right)\right]\left(1-\left|1-\frac{\left\|\vec{a}_{t,s}\right\|}{\left\|\vec{b}_{t,s}\right\|}\right|\right)\qquad(3)$$

with $r^{x}_{t,s}=x^{a}_{t,s}/x^{b}_{t,s}$, $r^{y}_{t,s}=y^{a}_{t,s}/y^{b}_{t,s}$ and $r^{z}_{t,s}=z^{a}_{t,s}/z^{b}_{t,s}$,

wherein $x^{a}_{t,s}$, $y^{a}_{t,s}$ and $z^{a}_{t,s}$ are the x-, y- and z-direction coordinates of the first action vector $\vec{a}_{t,s}$ of the tth wearable sensing device between the sth action node and the (s+1)th action node; $x^{b}_{t,s}$, $y^{b}_{t,s}$ and $z^{b}_{t,s}$ are the x-, y- and z-direction coordinates of the second action vector $\vec{b}_{t,s}$ of the tth wearable sensing device between the sth training node and the (s+1)th training node; M is the total number of wearable sensing devices; N is the total number of action nodes; s ≤ N-1, t ≤ M, and s, t, N and M are all positive integers; max is the maximum function.

According to one embodiment of the invention, N is the total number of action nodes, so N-1 is the number of first action vectors between adjacent action nodes. $r^{x}_{t,s}$, $r^{y}_{t,s}$ and $r^{z}_{t,s}$ are the ratios of the x-, y- and z-direction coordinates of the first action vector to those of the second action vector for the tth wearable sensing device between the sth and (s+1)th nodes. If the action of the medical staff to be trained were completely consistent with the standard action, the three ratios would be equal; because the training action generally deviates from the standard action, the ratios differ. The max term takes the largest pairwise difference of the three ratios and thus represents the largest error between the training action and the standard action in the x, y and z directions, so $1-\max(\cdot)$ is the minimum similarity degree of the first action vector and the second action vector in the three directions: the larger the minimum similarity degree, the more similar the training action is to the standard action.

According to one embodiment of the invention, the moduli of the first action vector and the second action vector represent the amplitudes of the standard action and the training action respectively. When the training action is exactly consistent with the standard action, the two moduli are equal, and the ratio $\left\|\vec{a}_{t,s}\right\|/\left\|\vec{b}_{t,s}\right\|$ is 1; if the amplitudes are inconsistent, the ratio deviates from 1, so $\left|1-\left\|\vec{a}_{t,s}\right\|/\left\|\vec{b}_{t,s}\right\|\right|$ represents the amplitude error between the training action and the standard action, and $1-\left|1-\left\|\vec{a}_{t,s}\right\|/\left\|\vec{b}_{t,s}\right\|\right|$ is the similarity of the training action and the standard action in action amplitude: the greater this similarity, the closer the amplitude of the training action is to that of the standard action.
According to one embodiment of the present invention, the minimum similarity of the action in the three directions and the similarity of the action amplitudes may be multiplied, and the average value of the products solved, so that the action evaluation score is determined; the greater the action evaluation score, the closer the training action is to the standard action.
In this way, the action evaluation score is determined according to the first action vector and the second action vector, and in the calculation process, the difference between the training action and the standard action in the x, y and z directions and the difference between the training action and the standard action in the amplitude are fully considered, so that the accuracy, the scientificity and the objectivity of the action evaluation score are improved.
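The sketch below implements formula (3) as reconstructed above: per device and per pair of adjacent nodes it compares the component ratios of the first and second action vectors and their amplitudes. The small-denominator guard is an added assumption, not part of the formula.

```python
import numpy as np

def action_evaluation_score(first_vectors, second_vectors, eps=1e-9):
    """first_vectors: (M devices, N-1, 3) action vectors between adjacent
    action nodes of the standard action; second_vectors: same shape, between
    adjacent training nodes. Returns the score of formula (3)."""
    safe_second = np.where(np.abs(second_vectors) < eps, eps, second_vectors)
    ratios = first_vectors / safe_second                 # r_x, r_y, r_z per pair
    rx, ry, rz = ratios[..., 0], ratios[..., 1], ratios[..., 2]

    # Largest pairwise difference of the three component ratios.
    direction_err = np.max(np.stack([np.abs(rx - ry),
                                     np.abs(rx - rz),
                                     np.abs(ry - rz)]), axis=0)
    direction_sim = 1.0 - direction_err                  # minimum similarity degree

    amp_ratio = (np.linalg.norm(first_vectors, axis=-1)
                 / np.clip(np.linalg.norm(second_vectors, axis=-1), eps, None))
    amplitude_sim = 1.0 - np.abs(1.0 - amp_ratio)        # amplitude similarity

    return float((direction_sim * amplitude_sim).mean())
```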
According to one embodiment of the invention, the above posture evaluation score and action evaluation score are weighted and summed to obtain the training evaluation score; the higher the training evaluation score, the more standard the training actions of the medical staff to be trained.
According to one embodiment of the present invention, in step S107, an operation training report is generated according to the training evaluation score.
For example, the training evaluation score is determined from the posture evaluation score and the action evaluation score. When the training evaluation score is greater than or equal to a set threshold, an operation training report indicating qualified training can be generated; when the training evaluation score is below the threshold, the report can analyze which actions or postures are not standard and feed this back to the medical staff to be trained, so that subsequent training can be more targeted.
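A sketch of the scoring and report step; the weights, the qualification threshold and the report fields are illustrative assumptions, not values fixed by the method.

```python
def operation_training_report(posture_score, action_score,
                              w_posture=0.5, w_action=0.5, threshold=0.85):
    """Weighted sum of the two scores, then a simple pass/fail report."""
    training_score = w_posture * posture_score + w_action * action_score
    report = {
        "posture_score": round(posture_score, 3),
        "action_score": round(action_score, 3),
        "training_score": round(training_score, 3),
        "qualified": training_score >= threshold,
    }
    if not report["qualified"]:
        report["feedback"] = ("score below threshold: review the nodes with the "
                              "lowest posture and action similarity")
    return report
```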
According to one embodiment of the invention, the action analysis model may be trained before use. The training step of the action analysis model comprises: after a trainer wears the plurality of wearable sensing devices, acquiring the relative positional relationships among the wearable sensing devices while the trainer keeps the preset posture; inputting the relative positional relationships into the action analysis model to obtain prediction space vectors of a plurality of predicted action nodes of the preset operation action; recording the sample space vectors of the plurality of action nodes while the trainer executes the preset operation action; determining the loss function of the action analysis model according to the prediction space vectors and the sample space vectors; and training the action analysis model through the loss function to obtain the trained action analysis model.
For example, the actions and postures of the trainer are used as the standard actions and postures, and the relative positional relationships expressing the trainer's body type are input into the action analysis model to obtain the prediction space vectors of the plurality of predicted action nodes of the preset operation action. While the trainer executes the preset operation action, the sample space vectors of the plurality of action nodes are recorded; at each action node the trainer holds the standard posture, so the sample space vectors are error-free. The prediction space vectors are compared with the error-free sample space vectors to determine the loss function, and the action analysis model is adjusted through feedback of the loss function to obtain the trained action analysis model.
According to one embodiment of the invention, determining a loss function of the motion analysis model from the prediction space vector and the sample space vector comprises: determining a loss function of the motion analysis model according to equation (4)
$$L=\mathrm{if}\!\left(\hat{N}\le N,\ \frac{N}{\hat{N}},\ \frac{\hat{N}}{N}\right)\cdot\frac{1}{\max\!\left(N,\hat{N}\right)(M-1)}\sum_{s=1}^{\max\left(N,\hat{N}\right)}\sum_{t=2}^{M}\left\|\vec{P}_{s,t}-\vec{G}_{s,t}\right\|\qquad(4)$$

wherein N is the total number of action nodes; $\hat{N}$ is the total number of predicted action nodes obtained by the action analysis model; $\vec{P}_{s,t}$ is the prediction space vector between the tth wearable sensing device and the 1st wearable sensing device at the sth predicted action node, the 1st wearable sensing device being the head sensing device, with $\vec{P}_{s,t}$ taken as the zero vector for $s>\hat{N}$; $\vec{G}_{s,t}$ is the sample space vector between the tth wearable sensing device and the 1st wearable sensing device at the sth action node, taken as the zero vector for $s>N$; M is the total number of wearable sensing devices; $s\le\max(N,\hat{N})$, $t\le M$, and s, t, N, $\hat{N}$ and M are all positive integers; max is the maximum function, and if is the conditional function.

According to one embodiment of the present invention, the conditional function in formula (4) distinguishes two cases. When $\hat{N}<N$, the number of action nodes is larger than the number of predicted action nodes, meaning the action analysis model obtains too few action nodes; besides the possible errors between the prediction space vectors and the sample space vectors, the number of action nodes itself is in error. In this case the conditional function takes the value $N/\hat{N}$.

According to one embodiment of the invention, when $\hat{N}>N$, the number of predicted action nodes is larger than the number of action nodes, meaning the action analysis model obtains too many action nodes; besides the possible errors between the prediction space vectors and the sample space vectors, the number of action nodes is in error, and the redundant prediction space vectors output by the model are themselves prediction errors. In this case the conditional function takes the value $\hat{N}/N$.

According to one embodiment of the invention, when $\hat{N}<N$, the prediction space vector of each of the last $N-\hat{N}$ action nodes is regarded as the zero vector, so for those nodes $\left\|\vec{P}_{s,t}-\vec{G}_{s,t}\right\|$ reduces to the modulus of the sample space vector; the sum in formula (4) is then the average of the moduli of the differences between the prediction space vectors obtained by the action analysis model and the error-free sample space vectors recorded while the trainer executes the preset operation action, taken over all N action nodes. Multiplying the error factor of the number of action nodes by this average gives the loss. Reducing the loss during training therefore reduces both the error in the number of action nodes and the moduli of the differences between the prediction space vectors and the sample space vectors, improving the prediction accuracy of the action analysis model for the number of action nodes and for the prediction space vectors.

According to one embodiment of the invention, when $\hat{N}>N$, the sample space vector of each of the last $\hat{N}-N$ predicted action nodes is regarded as the zero vector, so for those nodes $\left\|\vec{P}_{s,t}-\vec{G}_{s,t}\right\|$ reduces to the modulus of the redundant prediction space vector, which is counted entirely as prediction error; the sum is then the average of the moduli of the differences between the prediction space vectors and the sample space vectors over all $\hat{N}$ predicted action nodes. Multiplying the error factor of the number of action nodes by this average gives the loss, with the same effect during training: the error in the number of nodes and the vector errors are reduced together, improving the accuracy of the action analysis model in a targeted manner.
According to the embodiment of the invention, the motion analysis model can be trained through the loss function, and the motion analysis model is obtained after the training is completed, so that the motion analysis model can accurately obtain the target space vector of each motion node based on the body type of medical staff to be trained.
In this way, the loss function of the motion analysis model is determined by predicting the number errors satisfied by the motion nodes and the errors between the prediction space vector and the sample space vector, so that the motion analysis model is enabled to reduce the errors between the prediction space vector and the sample space vector in the training process, the errors between the number of the prediction nodes and the number of the motion nodes are reduced, and the accuracy of the motion analysis model is improved more pertinently.
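A sketch of the loss of formula (4) as reconstructed above, zero-padding whichever of the predicted or sample node sequences is shorter and scaling by the node-count error factor; the tensor layout is an assumption of this illustration.

```python
import numpy as np

def analysis_model_loss(pred_vectors, sample_vectors):
    """pred_vectors: (N_hat, M-1, 3) predicted space vectors per predicted
    action node; sample_vectors: (N, M-1, 3) recorded sample space vectors.

    Returns the loss of formula (4): node-count error factor times the
    average modulus of the per-node vector differences, with missing
    nodes taken as zero vectors."""
    n_hat, n = pred_vectors.shape[0], sample_vectors.shape[0]
    n_max = max(n, n_hat)

    def pad(v):
        out = np.zeros((n_max,) + v.shape[1:])  # zero vectors for missing nodes
        out[: v.shape[0]] = v
        return out

    diff = np.linalg.norm(pad(pred_vectors) - pad(sample_vectors), axis=-1)
    count_error = max(n, n_hat) / min(n, n_hat)  # the if-function of formula (4)
    return float(count_error * diff.mean())
```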
According to the medical care operation training method based on automatic evaluation of the embodiment of the invention, the action nodes reached by the medical staff to be trained while executing the preset operation action can be accurately identified and the training space vectors corresponding to the action nodes obtained, and the training evaluation score is determined together with the target space vectors of the standard posture, which are output by the action analysis model and matched to the body type of the medical staff to be trained, so that evaluation during medical operation training is automated, the subjectivity of manual evaluation is avoided, the comprehensiveness, objectivity and accuracy of the evaluation are improved, and the training effect is improved. When identifying training nodes, the angle between the (i-1)th motion vector and the ith motion vector of each key point is determined and used as the action node identification parameter; whether the action direction of the medical staff to be trained has changed is judged from this angle, and the timestamp of the moment at which the action direction changes is determined, so the accuracy of training node detection is improved through operations on motion vectors. When determining the posture evaluation score, the cosine similarity is calculated between the target space vector of each action node and the training space vector of the training node with the same sequence number, and the posture evaluation score is determined from the average of the cosine similarities, which makes it convenient to determine whether the posture of the medical staff to be trained is standard at each action node and improves the accuracy, scientificity and objectivity of the posture evaluation score. When determining the action evaluation score, it is determined from the first action vectors and the second action vectors, and the calculation fully considers the differences between the training action and the standard action in the x, y and z directions as well as in amplitude, improving the accuracy, scientificity and objectivity of the action evaluation score. When training the action analysis model, the loss function is determined from the error in the number of predicted action nodes together with the errors between the prediction space vectors and the sample space vectors, so that training reduces both the vector errors and the error between the number of predicted nodes and the number of action nodes, improving the accuracy of the action analysis model in a targeted manner.
FIG. 2 schematically illustrates a block diagram of an automated assessment-based healthcare operation training system according to an embodiment of the present invention, the system comprising:
the relative position relation module is used for acquiring the relative position relation among the wearable sensing devices under the condition that the medical staff to be trained keeps a preset posture after the medical staff to be trained wears the plurality of wearable sensing devices, wherein the wearable sensing devices comprise head sensing devices, shoulder sensing devices, elbow sensing devices, wrist sensing devices, waist sensing devices, knee sensing devices and ankle sensing devices, and the relative position relation comprises space vectors among the space positions of the wearable sensing devices;
the target space vector module is used for inputting the relative position relation into the trained motion analysis model to obtain target space vectors of a plurality of motion nodes of the preset operation motion, wherein the target space vectors are used for representing space vectors among the space positions of each wearable sensing device under the condition that the body type of the medical staff to be trained keeps the standard posture corresponding to the motion nodes;
the training video module is used for shooting training videos of medical staff to be trained through cameras with multiple visual angles in the process that the medical staff to be trained executes the preset operation actions;
The timestamp module is used for determining, according to the training video, the timestamp corresponding to each training node that the medical staff to be trained moves to;
the training space vector module is used for determining training space vectors among the wearable sensing devices at the time corresponding to the time stamp according to the time stamp;
the training evaluation score module is used for determining training evaluation scores of preset operation actions according to the target space vector and the training space vector;
and the operation training report module is used for generating an operation training report according to the training evaluation score.
According to an embodiment of the present invention, there is provided a medical operation training apparatus based on automatic evaluation, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored by the memory to perform the automated assessment-based healthcare operation training method.
According to one embodiment of the present invention, a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement the automated assessment-based healthcare operation training method.
The present invention may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present invention.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are by way of example only and are not limiting. The objects of the present invention have been fully and effectively achieved. The functional and structural principles of the present invention have been shown and described in the examples and embodiments of the invention may be modified or practiced without departing from the principles described.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (9)

1. An automatic assessment-based medical care operation training method is characterized by comprising the following steps:
after the medical staff to be trained wears the plurality of wearable sensing devices, acquiring relative position relations among the wearable sensing devices under the condition that the medical staff to be trained keeps a preset posture, wherein the wearable sensing devices comprise head sensing devices, shoulder sensing devices, elbow sensing devices, wrist sensing devices, waist sensing devices, knee sensing devices and ankle sensing devices, and the relative position relations comprise space vectors among the space positions of the wearable sensing devices;
inputting the relative positional relationship into a trained action analysis model to obtain target space vectors of a plurality of action nodes of a preset operation action, wherein the target space vectors are used for representing the space vectors between the spatial positions of each wearable sensing device when a person of the body type of the medical staff to be trained holds the standard posture corresponding to the action node;
shooting training videos of medical staff to be trained through cameras with multiple visual angles in the process that the medical staff to be trained executes the preset operation actions;
determining, according to the training video, the timestamp corresponding to each training node that the medical staff to be trained moves to;
determining training space vectors among the wearable sensing devices at the time corresponding to the time stamp according to the time stamp;
determining a training evaluation score of a preset operation action according to the target space vector and the training space vector;
and generating an operation training report according to the training evaluation score.
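The claimed pipeline lends itself to a straightforward composition of subsystems. Below is a minimal, hypothetical Python sketch of that composition; every argument is an assumed callable or array standing in for a component the claim names, and none of these interfaces come from the patent itself.

```python
import numpy as np

# Hypothetical wiring of the claim-1 steps; all parameter names are
# illustrative assumptions, not part of the patent.
def run_training_session(rel_positions, action_model, record_videos,
                         find_node_timestamps, sensor_vectors_at, score):
    target = action_model(rel_positions)      # step 2: target space vectors
    videos = record_videos()                  # step 3: multi-view videos
    stamps = find_node_timestamps(videos)     # step 4: node timestamps
    # step 5: sensor-derived training space vectors at each node timestamp
    training = np.stack([sensor_vectors_at(ts) for ts in stamps])
    return score(target, training)            # steps 6-7: score and report
```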
2. The automatic assessment-based medical care operation training method according to claim 1, wherein determining, according to the training videos, a timestamp corresponding to the moment when the medical staff to be trained moves to each training node comprises:
determining a plurality of key points of the medical staff to be trained in each video frame of the training video of each view angle, wherein the key points comprise a head key point, a shoulder key point, an elbow key point, a wrist key point, a waist key point, a knee key point and an ankle key point;
respectively determining the position information of each key point in the (i-1)-th video frame, the ith video frame and the (i+1)-th video frame of the training video of each view angle;
obtaining the (i-1)-th motion vector of each key point in the training video of each view angle according to the position information of each key point in the (i-1)-th video frame and in the ith video frame of the training video of each view angle;
obtaining the ith motion vector of each key point in the training video of each view angle according to the position information of each key point in the ith video frame and in the (i+1)-th video frame of the training video of each view angle;
determining an action node identification parameter of the ith video frame according to the (i-1)-th motion vector and the ith motion vector of each key point;
judging, according to the action node identification parameter of the ith video frame, whether the timestamp of the ith video frame corresponds to a training node;
and if the timestamp of the ith video frame corresponds to a training node, determining the timestamp of the ith video frame as the timestamp corresponding to the training node.
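A minimal numpy sketch of this flow is given below. It assumes 2-D key point positions per frame have already been extracted from every camera view (for example with an off-the-shelf pose estimator); the frame rate and angle threshold are assumed values, not taken from the claim, and `node_parameter` is the claim-3 sketch shown further below.

```python
import numpy as np

# positions: array of shape (frames, m_views, n_keypoints, 2).
def node_timestamps(positions, fps=30.0, threshold=2.0):
    # motion[i] = i-th motion vector: displacement from frame i to frame i+1.
    motion = positions[1:] - positions[:-1]
    stamps = []
    for i in range(1, len(motion)):
        # Frame i is a candidate training node when the turning angle
        # between its (i-1)-th and i-th motion vectors is abrupt.
        if node_parameter(motion[i - 1], motion[i]) > threshold:
            stamps.append(i / fps)   # timestamp of the ith video frame
    return stamps
```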
3. The automatic assessment-based medical care operation training method according to claim 2, wherein determining the action node identification parameter of the ith video frame according to the (i-1)-th motion vector and the ith motion vector of each key point comprises:
according to the formula

$$P_{i}=\max_{1\leq k\leq m,\;1\leq j\leq n}\arccos\frac{\vec{v}_{k,j}^{\,i-1}\cdot\vec{v}_{k,j}^{\,i}}{\left|\vec{v}_{k,j}^{\,i-1}\right|\left|\vec{v}_{k,j}^{\,i}\right|}$$

determining the action node identification parameter $P_{i}$ of the ith video frame, wherein $\vec{v}_{k,j}^{\,i-1}$ is the (i-1)-th motion vector of the jth key point in the training video of the kth view angle, $\vec{v}_{k,j}^{\,i}$ is the ith motion vector of the jth key point in the training video of the kth view angle, n is the number of key points, m is the number of cameras, 1≤j≤n, 1≤k≤m, k, j, m and n are all positive integers, max is a maximum function, and arccos is an inverse cosine function.
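The following numpy sketch implements the parameter as reconstructed above; the function name and array shapes are illustrative assumptions. A value close to π means some key point sharply reversed direction between consecutive frames, which is what marks a node.

```python
import numpy as np

# v_prev, v_curr: (m_views, n_keypoints, 2) arrays of the (i-1)-th and
# i-th motion vectors of every key point in every camera view.
def node_parameter(v_prev, v_curr, eps=1e-8):
    dot = np.sum(v_prev * v_curr, axis=-1)
    norm = np.linalg.norm(v_prev, axis=-1) * np.linalg.norm(v_curr, axis=-1)
    cos = np.clip(dot / (norm + eps), -1.0, 1.0)   # guard rounding errors
    return float(np.max(np.arccos(cos)))           # radians in [0, pi]
```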
4. The automatic assessment-based medical care operation training method according to claim 1, wherein determining a training evaluation score of the preset operation action according to the target space vectors and the training space vectors comprises:
determining a posture evaluation score according to the target space vector of each action node and the training space vector of the training node with the same sequence number as the action node;
determining first position coordinates of a plurality of wearable sensing devices corresponding to each action node according to the target space vector;
determining a first motion vector of each wearable sensing device between adjacent motion nodes according to the first position coordinates;
determining second position coordinates of a plurality of wearable sensing devices corresponding to each training node according to the training space vector;
determining a second motion vector of each wearable sensing device between adjacent training nodes according to the second position coordinates;
determining an action evaluation score according to the first motion vectors and the second motion vectors;
and determining the training evaluation score according to the posture evaluation score and the action evaluation score.
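A sketch of this flow under stated assumptions: `target_vec` / `training_vec` are device-to-head space vectors per node and `target_pos` / `training_pos` are the first / second position coordinates per node, all of shape (N, M, 3); the equal weighting of the two sub-scores is an assumption, not specified by the claim, and `posture_eval_score` / `action_eval_score` are the sketches given with claims 5 and 6 below.

```python
import numpy as np

def training_eval_score(target_vec, training_vec, target_pos, training_pos):
    e1 = posture_eval_score(target_vec, training_vec)
    # First / second motion vectors: displacement of every wearable
    # sensing device between adjacent action / training nodes.
    first = target_pos[1:] - target_pos[:-1]
    second = training_pos[1:] - training_pos[:-1]
    e2 = action_eval_score(first, second)
    return 0.5 * e1 + 0.5 * e2    # assumed equal weighting
```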
5. The automatic assessment-based medical care operation training method according to claim 4, wherein determining the posture evaluation score according to the target space vector of each action node and the training space vector of the training node with the same sequence number as the action node comprises:
according to the formula

$$E_{1}=\frac{1}{N(M-1)}\sum_{s=1}^{N}\sum_{t=2}^{M}\frac{\vec{A}_{s,t}\cdot\vec{B}_{s,t}}{\left|\vec{A}_{s,t}\right|\left|\vec{B}_{s,t}\right|}$$

determining the posture evaluation score $E_{1}$, wherein $\vec{A}_{s,t}$ is the target space vector between the tth wearable sensing device and the 1st wearable sensing device at the sth action node, the 1st wearable sensing device is the head sensing device, $\vec{B}_{s,t}$ is the training space vector between the tth wearable sensing device and the 1st wearable sensing device at the sth training node, M is the total number of wearable sensing devices, N is the total number of action nodes, s≤N, t≤M, and s, t, N and M are all positive integers.
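A numpy sketch of the score as reconstructed above: the mean cosine similarity between target and training space vectors, each taken from a device to the head sensing device (index 0 here), so the head-to-head zero vector is skipped. The function name and shapes are illustrative assumptions.

```python
import numpy as np

# target, training: (N_nodes, M_devices, 3) device-to-head space vectors.
def posture_eval_score(target, training, eps=1e-8):
    a, b = target[:, 1:], training[:, 1:]   # drop the head-to-head vector
    dot = np.sum(a * b, axis=-1)
    norm = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return float(np.mean(dot / (norm + eps)))
```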
6. The automatic assessment-based medical care operation training method according to claim 4, wherein determining the action evaluation score according to the first motion vectors and the second motion vectors comprises:
according to the formula

$$E_{2}=\frac{1}{(N-1)M}\sum_{s=1}^{N-1}\sum_{t=1}^{M}\left(1-\frac{\sqrt{\left(x_{s,t}-x'_{s,t}\right)^{2}+\left(y_{s,t}-y'_{s,t}\right)^{2}+\left(z_{s,t}-z'_{s,t}\right)^{2}}}{\max\left(\sqrt{x_{s,t}^{2}+y_{s,t}^{2}+z_{s,t}^{2}},\;\sqrt{x'^{2}_{s,t}+y'^{2}_{s,t}+z'^{2}_{s,t}}\right)}\right)$$

determining the action evaluation score $E_{2}$, wherein $x_{s,t}$, $y_{s,t}$ and $z_{s,t}$ are the coordinates in the x, y and z directions of the first motion vector of the tth wearable sensing device between the sth action node and the (s+1)-th action node, $x'_{s,t}$, $y'_{s,t}$ and $z'_{s,t}$ are the coordinates in the x, y and z directions of the second motion vector of the tth wearable sensing device between the sth training node and the (s+1)-th training node, M is the total number of wearable sensing devices, N is the total number of action nodes, s≤N-1, t≤M, s, t, N and M are all positive integers, and max is a maximum function.
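A numpy sketch of the score as reconstructed above: one minus the deviation between each pair of first and second motion vectors, normalized by the longer of the two, averaged over adjacent node pairs and devices. The function name and shapes are illustrative assumptions.

```python
import numpy as np

# first, second: (N-1, M, 3) motion vectors between adjacent nodes.
def action_eval_score(first, second, eps=1e-8):
    dev = np.linalg.norm(first - second, axis=-1)        # (N-1, M)
    scale = np.maximum(np.linalg.norm(first, axis=-1),
                       np.linalg.norm(second, axis=-1))
    return float(np.mean(1.0 - dev / (scale + eps)))
```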
7. The automatic assessment-based medical care operation training method according to claim 1, wherein the training of the action analysis model comprises:
after a trainer wears the plurality of wearable sensing devices, acquiring the relative position relation among the wearable sensing devices while the trainer holds the preset posture;
inputting the relative position relation into the action analysis model to be trained to obtain prediction space vectors of a plurality of predicted action nodes of the preset operation action;
recording sample space vectors of a plurality of action nodes in the process that a trainer executes the preset operation action;
determining a loss function of the motion analysis model according to the prediction space vector and the sample space vector;
and training the action analysis model through the loss function to obtain a trained action analysis model.
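A hedged sketch of one such training iteration, assuming PyTorch is available and the action analysis model is a `torch.nn.Module` mapping a relative position relation to per-node prediction space vectors with the same shape as the recorded sample space vectors; a plain squared-error loss stands in here for the claim-8 loss, which applies when predicted and recorded node counts match.

```python
import torch

def train_step(model, optimizer, rel_positions, sample_vectors):
    optimizer.zero_grad()
    predicted = model(rel_positions)   # prediction space vectors
    loss = torch.sum((predicted - sample_vectors) ** 2)
    loss.backward()                    # backpropagate and update weights
    optimizer.step()
    return loss.item()
```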
8. The automatic assessment-based medical care operation training method according to claim 7, wherein determining the loss function of the action analysis model according to the prediction space vectors and the sample space vectors comprises:
according to the formula

$$Loss=\sum_{s=1}^{\max(N,\hat{N})}\sum_{t=1}^{M}\mathrm{if}\!\left(s\leq N\ \mathrm{and}\ s\leq\hat{N},\;\left|\vec{P}_{s,t}-\vec{A}_{s,t}\right|^{2},\;\left|\vec{P}_{s,t}\right|^{2}+\left|\vec{A}_{s,t}\right|^{2}\right)$$

determining the loss function $Loss$ of the action analysis model, wherein N is the total number of action nodes, $\hat{N}$ is the total number of predicted action nodes obtained by the action analysis model, $\vec{P}_{s,t}$ is the prediction space vector between the tth wearable sensing device and the 1st wearable sensing device at the sth predicted action node, the 1st wearable sensing device is the head sensing device, $\vec{A}_{s,t}$ is the sample space vector between the tth wearable sensing device and the 1st wearable sensing device at the sth action node, a space vector whose index exceeds its sequence length is taken as the zero vector, M is the total number of wearable sensing devices, s≤max(N, $\hat{N}$), t≤M, s, t, N, $\hat{N}$ and M are all positive integers, max is a maximum function, and if(c, a, b) is a conditional function that takes the value a when the condition c holds and the value b otherwise.
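A numpy sketch of the loss as reconstructed above: matched node indices contribute a squared error between prediction and sample space vectors, while surplus nodes on either side contribute their squared norm, the missing counterpart being treated as the zero vector. The function name and shapes are illustrative assumptions.

```python
import numpy as np

# pred: (N_hat, M, 3) prediction space vectors; sample: (N, M, 3).
def node_loss(pred, sample):
    n_min = min(len(pred), len(sample))
    # Squared error on nodes present in both sequences.
    loss = np.sum((pred[:n_min] - sample[:n_min]) ** 2)
    # Penalty for spurious or missing predicted nodes.
    loss += np.sum(pred[n_min:] ** 2) + np.sum(sample[n_min:] ** 2)
    return float(loss)
```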
9. An automatic assessment-based medical care operation training system, comprising:
the relative position relation module is used for acquiring the relative position relation among the wearable sensing devices under the condition that the medical staff to be trained keeps a preset posture after the medical staff to be trained wears the plurality of wearable sensing devices, wherein the wearable sensing devices comprise head sensing devices, shoulder sensing devices, elbow sensing devices, wrist sensing devices, waist sensing devices, knee sensing devices and ankle sensing devices, and the relative position relation comprises space vectors among the space positions of the wearable sensing devices;
the target space vector module is used for inputting the relative position relation into the trained action analysis model to obtain target space vectors of a plurality of action nodes of the preset operation action, wherein the target space vectors represent the space vectors between the space positions of the wearable sensing devices when a person with the body type of the medical staff to be trained holds the standard posture corresponding to each action node;
the training video module is used for shooting training videos of the medical staff to be trained through cameras at multiple view angles while the medical staff to be trained performs the preset operation action;
the timestamp module is used for determining, according to the training videos, a timestamp corresponding to the moment when the medical staff to be trained moves to each training node;
the training space vector module is used for determining, according to the timestamps, the training space vectors between the wearable sensing devices at the moments corresponding to the timestamps;
the training evaluation score module is used for determining training evaluation scores of preset operation actions according to the target space vector and the training space vector;
and the operation training report module is used for generating an operation training report according to the training evaluation score.
CN202410191159.XA 2024-02-21 2024-02-21 Medical care operation training method and system based on automatic evaluation Active CN117746305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410191159.XA CN117746305B (en) 2024-02-21 2024-02-21 Medical care operation training method and system based on automatic evaluation

Publications (2)

Publication Number Publication Date
CN117746305A (en) 2024-03-22
CN117746305B (en) 2024-04-19

Family

ID=90259572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410191159.XA Active CN117746305B (en) 2024-02-21 2024-02-21 Medical care operation training method and system based on automatic evaluation

Country Status (1)

Country Link
CN (1) CN117746305B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200105041A1 (en) * 2015-09-21 2020-04-02 TuringSense Inc. Method and system for providing real-time feedback in performing motions
CN107754225A (en) * 2017-11-01 2018-03-06 河海大学常州校区 A kind of intelligent body-building coaching system
CN111803904A (en) * 2019-04-11 2020-10-23 上海天引生物科技有限公司 Dance teaching exercise device and method
US20230039882A1 (en) * 2020-01-16 2023-02-09 The University Of Toledo Artificial intelligence-based platform to optimize skill training and performance
CN111444890A (en) * 2020-04-30 2020-07-24 汕头市同行网络科技有限公司 Sports data analysis system and method based on machine learning
CN113975775A (en) * 2021-10-25 2022-01-28 张衡 Wearable inertial body feeling ping-pong exercise training system and working method thereof
CN114005180A (en) * 2021-10-29 2022-02-01 华南师范大学 Motion scoring method and device for badminton
WO2023108842A1 (en) * 2021-12-14 2023-06-22 成都拟合未来科技有限公司 Motion evaluation method and system based on fitness teaching training
CN114511922A (en) * 2021-12-21 2022-05-17 深圳太极云软技术有限公司 Physical training posture recognition method, device, equipment and storage medium
WO2023174063A1 (en) * 2022-03-18 2023-09-21 华为技术有限公司 Background replacement method and electronic device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
RIASAT ISLAM et al.: "A Nonproprietary Movement Analysis System (MoJoXlab) Based on Wearable Inertial Measurement Units Applicable to Healthy Participants and Those With Anterior Cruciate Ligament Reconstruction Across a Range of Complex Tasks: Validation Study", JMIR mHealth and uHealth, vol. 8, no. 6, 16 June 2020 (2020-06-16), pages 1-16 *
Y. HUTABARAT et al.: "Recent Advances in Quantitative Gait Analysis Using Wearable Sensors: A Review", IEEE Sensors Journal, vol. 21, no. 23, 31 October 2021 (2021-10-31), pages 26470-26487, XP011890675, DOI: 10.1109/JSEN.2021.3119658 *
HE Kuan: "Research and Development of a Human Motion Capture System Based on IMU and Kinect", China Master's Theses Full-text Database: Information Science and Technology, no. 3, 15 March 2022 (2022-03-15), pages 1-83 *
ZHAO Shuai: "Vision-based Motion Analysis for Stroke Rehabilitation", China Master's Theses Full-text Database: Medicine and Health Sciences, no. 5, 15 May 2022 (2022-05-15), pages 1-62 *

Also Published As

Publication number Publication date
CN117746305B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
US11182924B1 (en) System for estimating a three dimensional pose of one or more persons in a scene
US11348279B1 (en) System for estimating a three dimensional pose of one or more persons in a scene
US11521373B1 (en) System for estimating a three dimensional pose of one or more persons in a scene
US11403882B2 (en) Scoring metric for physical activity performance and tracking
Chatterjee et al. A quality prediction method for weight lifting activity
Lin et al. Segmenting human motion for automated rehabilitation exercise analysis
Ohri et al. On-device realtime pose estimation & correction
CN113486771A (en) Video motion uniformity evaluation method and system based on key point detection
Chen et al. Assembly torque data regression using sEMG and inertial signals
Tsai et al. Enhancing accuracy of human action Recognition System using Skeleton Point correction method
Mohammed et al. Recognition of yoga asana from real-time videos using blaze-pose
CN117746305B (en) Medical care operation training method and system based on automatic evaluation
Almasi et al. Investigating the Application of Human Motion Recognition for Athletics Talent Identification using the Head-Mounted Camera
CN116740618A (en) Motion video action evaluation method, system, computer equipment and medium
CN111002292B (en) Robot arm humanoid motion teaching method based on similarity measurement
Yang et al. Skeleton-based hand gesture recognition for assembly line operation
Walugembe et al. Gesture recognition in Leap Motion using LDA and SVM
Yabuki et al. Human motion classification and recognition using wholebody contact force
Hagelbäck et al. Variants of dynamic time warping and their performance in human movement assessment
Dash et al. A Inverse Kinematic Solution Of A 6-DOF Industrial Robot Using ANN
Talaa et al. Computer Vision-Based Approach for Automated Monitoring and Assessment of Gait Rehabilitation at Home.
WO2023223508A1 (en) Video processing device, video processing method, and program
CN113643788B (en) Method and system for determining feature points based on multiple image acquisition devices
Vazan et al. Augmenting Vision-Based Human Pose Estimation with Rotation Matrix
Elkess et al. Karate First Kata Performance Analysis and Evaluation with Computer Vision and Machine Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant