CN115131879A - Action evaluation method and device - Google Patents
Action evaluation method and device
- Publication number
- CN115131879A (application number CN202211055338.8A)
- Authority
- CN
- China
- Prior art keywords
- action
- target
- posture
- evaluation
- image frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
- A63B24/0003—Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
- A63B24/0006—Computerised comparison for qualitative assessment of motion sequences or the course of a movement
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- A63B71/0619—Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
- A63B71/0669—Score-keepers or score display devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
- A63B24/0003—Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
- A63B24/0006—Computerised comparison for qualitative assessment of motion sequences or the course of a movement
- A63B2024/0012—Comparing movements or motion sequences with a registered reference
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B2230/00—Measuring physiological parameters of the user
- A63B2230/62—Measuring physiological parameters of the user posture
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Physical Education & Sports Medicine (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses an action evaluation method and device. Firstly, an action evaluation mode is started; before the action evaluation mode is started, action data are entered in advance, including the mapping relation between an action and the postures contained in the action, the judgment basis of each posture and the evaluation rule of each posture. Then, posture recognition is performed on a target object in a target video based on the judgment basis in the action data; a target action matched with a plurality of postures is determined according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data; and finally, an evaluation result of the target object executing the target action is obtained according to the evaluation rules of the postures contained in the target action. As long as the action data are entered in advance, the actions of the target object can be recognized and evaluated from the target video. The scheme does not need a professional coach video to construct a recognition program for a personalized action, so it broadens the application range of action scoring schemes and better meets the evaluation requirements of personalized actions.
Description
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method and an apparatus for evaluating an action.
Background
In daily life, people often enrich their experiences or improve their physical fitness by performing sports and artistic activities such as fitness, martial arts, dancing and yoga. However, scoring the actions performed in these activities is difficult to achieve.
Taking fitness as an example, beginners generally follow a trainer to practise some general fitness actions. To learn whether an action is performed to standard, an existing technique works as follows: for a video of a designated fitness trainer, a developer writes action recognition code according to the trainer's actions; when a user (a fitness student) wants an evaluation of the actions he or she performs, the user exercises following the trainer's fitness actions, and the program scores how standard the actions are. However, as experience grows, people may develop their own personalized fitness actions and wish to obtain an evaluation of them.
If the evaluation of a personalized fitness action were implemented in the same way, a recognition program would have to be created in advance from a video containing that action. On the one hand, specially creating a recognition program for the action in a video consumes considerable cost; on the other hand, if no suitable clear video can be found, the action cannot be written into a recognition program in advance. Therefore, the application range of the prior art for action scoring is narrow, and it is difficult to meet the evaluation requirements of more personalized actions.
Disclosure of Invention
Based on the above problems, the application provides an action evaluation method and device to improve the application range of an action scoring scheme and better meet the evaluation requirement of personalized actions.
The embodiment of the application discloses the following technical scheme:
in a first aspect of the present application, there is provided an action evaluation method, including:
starting an action evaluation mode; before the action evaluation mode is started, action data is recorded in advance, and the action data comprises: mapping relation between the action and the posture contained in the action, judgment basis of the posture and evaluation rule of the posture;
performing gesture recognition on a target object in a target video based on a judgment basis in the motion data;
determining a target action matched with a plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data;
and obtaining an evaluation result of the target action executed by the target object according to an evaluation rule of the posture contained in the target action in the action data.
Optionally, the judgment basis includes one or a combination of the following conditions: a distance condition or an angle condition of key point locations; and the performing posture recognition on the target object in the target video based on the judgment basis in the action data comprises the following steps:
identifying and obtaining a plurality of key point positions of the target object from the image frame of the target video through a human body posture estimation technology based on a neural network architecture;
obtaining distance information and angle information according to a plurality of key point positions of the target object;
and when the distance information and the angle information satisfy all conditions in the judgment basis of a target posture, determining that the posture of the target object in the image frame is the target posture.
Optionally, the plurality of gestures comprises: a first pose identified in a first image frame of the target video and a second pose identified in a second image frame of the target video; wherein a recording time of the first image frame is earlier than a recording time of the second image frame; the first image frame is the first image frame in the target video which is identified to the first gesture, and the second image frame is the first image frame in the target video which is identified to the second gesture; the target action in the action data comprises the first gesture before and the second gesture after; the motion data further comprises temporal conditions of the first and second poses in the target motion;
the determining the target actions matched with the plurality of gestures according to the plurality of gestures continuously recognized from the target video and the mapping relation in the action data comprises:
acquiring an interval between the recording time of the first image frame and the recording time of the second image frame;
when the interval satisfies the time condition, determining that the action that the first pose in the first image frame and the second pose in the second image frame jointly match is the target action;
and when the interval does not satisfy the time condition, determining that the second pose in the second image frame and the first pose in the first image frame fail to jointly match a complete action.
Optionally, the target motion in the motion data comprises a first gesture and a second gesture;
the obtaining of the evaluation result of the target action executed by the target object according to the evaluation rule of the posture contained in the target action in the action data includes:
obtaining a first score of the first posture of the target object for executing the target action according to the image frame of the target object in the first posture and the evaluation rule of the first posture;
obtaining a second score of the second posture of the target object for executing the target action according to the image frame of the target object in the second posture and the evaluation rule of the second posture;
and obtaining an evaluation result of the target object executing the target action according to the first score and the second score.
Optionally, if the target object in the consecutive multi-frame image frames in the target video is in the first posture, the obtaining a first score of the first posture of the target object performing the target action according to the image frame of the target object in the first posture and an evaluation rule of the first posture includes:
obtaining an initial first score according to an earlier image frame among the consecutive image frames and the evaluation rule of the first posture;
obtaining a new first score according to a later image frame among the consecutive image frames and the evaluation rule of the first posture;
and if the new first score is larger than the initial first score, overwriting the initial first score with the new first score.
Optionally, entering the action data comprises:
inputting the name of the action, the number of postures contained in the action and the name of the posture contained in the action;
and recording the judgment basis and the evaluation rule of the gesture contained in the action.
Optionally, the evaluation rule of the gesture includes one or more of the following combinations:
the evaluation rule of the maintenance time of the posture, the distance evaluation rule of the key point position of the posture, or the evaluation rule of the angle formed by the key point position of the posture.
Optionally, the evaluation rule of the maintenance time of the posture comprises one or more of the following combination:
the key point locations constituting the attitude continuously satisfy a time evaluation rule of a target distance condition, or the key point locations constituting the attitude continuously satisfy a time evaluation rule of a target angle condition.
Optionally, the evaluation rule of the maintenance time of the posture includes:
the interval of the maintaining time of the posture and the time fraction coefficient corresponding to the interval.
Optionally, the distance information is expressed as a multiple of the wrist-to-elbow distance.
In a second aspect of the present application, there is provided an action evaluation device including:
the mode starting module is used for starting the action evaluation mode; before the action evaluation mode is started, action data is recorded in advance, and the action data comprises: mapping relation between the action and the posture contained in the action, judgment basis of the posture and evaluation rule of the posture;
the gesture recognition module is used for carrying out gesture recognition on the target object in the target video based on the judgment basis in the action data;
the action determining module is used for determining a target action matched with a plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data;
and the action evaluation module is used for obtaining an evaluation result of the target object executing the target action according to an evaluation rule of the posture contained in the target action in the action data.
Compared with the prior art, the method has the following beneficial effects:
according to the action evaluation method and device, firstly, an action evaluation mode is started; before the action evaluation mode is started, action data including mapping relation between actions and postures contained in the actions, judgment basis of the postures and evaluation rules of the postures are recorded in advance. Then, performing gesture recognition on a target object in the target video based on a judgment basis in the motion data; determining a plurality of target actions matched with the gestures according to a plurality of gestures continuously recognized from a target video and a mapping relation in the action data; and finally, obtaining an evaluation result of the target action executed by the target object according to the evaluation rule of the posture contained in the target action in the action data. And the action identification and evaluation can be carried out on the target object according to the target video only by inputting the action data in advance. The scheme does not need to be matched with a professional coach video to construct an identification program of the personalized action, can improve the application range of the action scoring scheme, and better meets the evaluation requirement of the personalized action.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the description below are only some embodiments of the present application, and for those skilled in the art, other drawings may be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart of an action evaluation method provided in an embodiment of the present application;
fig. 2 is a flowchart of entering action data according to an embodiment of the present application;
fig. 3 is a schematic distribution diagram of key points of a human body according to an embodiment of the present application;
FIG. 4 is a flow chart of gesture recognition provided by an embodiment of the present application;
FIG. 5 is a flow chart of a target action recognition provided by an embodiment of the present application;
FIG. 6 is a flowchart of another action evaluation method provided in the embodiments of the present application;
fig. 7 is a schematic structural diagram of an action evaluation device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another motion evaluation device according to an embodiment of the present application.
Detailed Description
As described above, activities such as yoga, boxing, fitness, martial arts and dancing are increasingly popular, and many people want the personalized actions they perform to be scored and evaluated. For personalized actions, because it is not easy to find a suitable video from which to create an action recognition program, action evaluation is difficult and costly. Therefore, the application range of the prior art for action scoring is narrow, and it is difficult to meet the evaluation requirements of more personalized actions.
Through research, the inventor proposes a scheme of entering action data in advance to realize action evaluation. To realize automatic evaluation of a personalized action, only the action data related to the personalized action need to be entered in advance, such as the postures contained in the action, the judgment basis of each posture and the evaluation rule of each posture. On this basis, the action performed by the target object in a video can be evaluated. This solves the problem in the prior art that a dedicated action recognition program needs to be created, so the scheme can more easily meet the evaluation requirements of personalized actions.
In order to make those skilled in the art better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Method embodiment
Referring to fig. 1, the figure is a flowchart of an action evaluation method provided in an embodiment of the present application. The action evaluation method shown in fig. 1 includes:
Step 101, starting an action evaluation mode.
The action evaluation mode may be turned on or off by the user. Specifically, if the user chooses to enable it, the subsequent steps of the method are executed for the corresponding target video (i.e., the video to be evaluated) to perform action recognition and evaluation. If the user chooses not to enable the action evaluation mode, or the mode is not turned on by default, the actions of the target object in a video do not need to be recognized and scored even if the user provides a target video. The user referred to here is the user who triggers the start of the action evaluation mode, and may be the target object (i.e., the object whose actions are to be evaluated) itself, or a contact of the target object, such as a parent or a coach.
It should be noted that the action data have been entered in advance before the action evaluation mode is started. For ease of understanding, the embodiments of this scheme may be understood as running on an action evaluation system. The action evaluation system can receive a mode trigger instruction from the user and, after learning from the instruction that the action evaluation mode needs to be started, can evaluate the actions in the received target video. The system may include a storage medium for storing the pre-entered action data. The operation of entering the action data may be performed by the target object or by the aforementioned user.
In an embodiment of the present application, the action data include: the mapping relation between an action and the postures contained in the action, the judgment basis of each posture and the evaluation rule of each posture. For example, a straight punch action contains two postures: punching out and retracting the punch. Therefore, the action data include the mapping relation between the straight punch action and these two postures, and additionally include the judgment basis and evaluation rule of the punching posture as well as the judgment basis and evaluation rule of the punch-retracting posture.
Step 102, performing posture recognition on the target object in the target video based on the judgment basis in the action data.
As described above, the action data include the judgment basis of the postures contained in an action. It should be noted that the judgment basis of a posture may include one condition or a combination of conditions. Generally, the posture of a human body can be determined from key points; for example, a small number of key points can determine a posture. The judgment conditions are therefore also related to the key points, for example a distance condition on key point locations and/or an angle condition on key point locations. For example, the judgment basis of the punching posture of the right straight punch includes: the angle formed by the right wrist, the right elbow and the right shoulder is larger than a preset angle (an angle condition on three key point locations). The judgment basis of the punch-retracting posture of the right straight punch includes: the right fist is behind the left fist, and the distance between the left fist and the right fist reaches a preset distance (a distance condition on two key point locations).
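Such an angle condition can be checked directly from key point coordinates. The following is a minimal sketch (not part of the patent text; the function names and the 150° default threshold are illustrative), assuming each key point is given as an (x, y) pixel coordinate:

```python
import math

def angle_at(vertex, point_a, point_b):
    """Angle (degrees) at `vertex` formed by the segments vertex->point_a and vertex->point_b."""
    v1 = (point_a[0] - vertex[0], point_a[1] - vertex[1])
    v2 = (point_b[0] - vertex[0], point_b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def punch_angle_condition(right_wrist, right_elbow, right_shoulder, preset_angle=150.0):
    # Angle condition of the punching posture: the wrist-elbow-shoulder angle
    # (vertex at the elbow) must exceed a preset angle.
    return angle_at(right_elbow, right_wrist, right_shoulder) > preset_angle
```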
And performing gesture recognition on the target object in the target video based on the gesture judgment basis in the recorded action data so as to determine the specific gesture of the target object in the image frame of the video.
Step 103, determining a target action matched with a plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data.
A video is made up of a plurality of image frames along a time sequence. Posture recognition may be performed on each image frame of the target video according to the foregoing step 102. Based on the key point locations of the image frames, a valid posture (i.e., an entered posture) may be recognized in some image frames, while no valid posture may be recognized in others (e.g., the posture does not reach the standard, or no matching posture is found among the entered postures).
As mentioned above, the action data contain the mapping relation between an action and the postures contained in the action. If a plurality of different postures continuously recognized from the target video can be identified as postures having a mapping relation with the same action, the action matched by these postures can be determined as the target action. For example, when a punching posture and a punch-retracting posture are recognized, a straight punch may be determined as the target action.
Step 104, obtaining an evaluation result of the target object executing the target action according to the evaluation rules of the postures contained in the target action in the action data.
As mentioned above, the action data include the evaluation rules of the postures, so the evaluation result of the target object executing the target action can be obtained from how well the target object, as shown in the target video, completes each posture of the action, combined with the corresponding evaluation rules. It should be noted that different actions may contain the same posture or postures, but the requirements on a posture may differ between actions, so the evaluation rules of the same posture contained in different actions may also differ. When evaluating, a posture is evaluated according to the evaluation rule of that posture in the target action to which it belongs, and the evaluation of the corresponding action is then obtained. In addition, the same posture may appear more than once within one action; for example, an action may contain a first posture, a second posture and a third posture in chronological order, where the first posture and the third posture are the same posture but have different angle or distance requirements on their key points. Therefore, the evaluation rules of the same posture within the same action may be the same or different. The evaluation rules and judgment conditions in the action data can be set and entered according to the requirements of the actual personalized action.
According to the technical scheme, as long as the action data are entered in advance, action recognition and evaluation can be performed on the target object according to the target video. The scheme does not need a professional coach video to construct a recognition program for a personalized action, so it broadens the application range of action scoring schemes and better meets the evaluation requirements of personalized actions.
It should be noted that fig. 1 is only an exemplary flowchart of the action evaluation method provided in the embodiment of the present application, in which steps 102 to 104 are performed sequentially. In practical applications, step 104 may also be executed during the execution of step 103; alternatively, the postures in an action may be evaluated first, the action corresponding to the postures matched continuously, and finally the target action determined and its evaluation result obtained. Therefore, the order of executing steps 103 and 104 is not limited in this application.
As mentioned above, it is possible for the action data to be entered in advance before the action evaluation. A flow of entering action data is described below with reference to fig. 2. Fig. 2 is a flowchart of entering action data according to an embodiment of the present application, where entering the action data includes:
Step 201, entering the name of the action, the number of postures contained in the action and the names of the postures contained in the action.
For example, the entered action name is: right straight punch. The number of postures contained in the action is 2, namely the punching posture and the punch-retracting posture. When entering the postures, the order of the postures contained in the action may also be entered; for example, the names of the punching and punch-retracting postures are entered in chronological order.
Step 202, entering the judgment basis and the evaluation rules of the postures contained in the action.
The explanation continues with the example above:
the judgment basis and evaluation rule of the punching posture are entered, and the judgment basis and evaluation rule of the punch-retracting posture are entered. It should be noted that the judgment basis of one posture may include one or more judgment conditions. The evaluation rules may cover multiple aspects, for example evaluation rules concerning the distance or angle of key points. In other example implementations, the evaluation rules may also involve the time dimension: for some particular postures, the maintenance time of the posture is also considered when evaluating its completion degree or standard degree. Thus, the evaluation rules of a posture may include one or more of the following in combination: an evaluation rule of the maintenance time of the posture, a distance evaluation rule of the key point locations of the posture, or an evaluation rule of the angle formed by the key point locations of the posture.
For example, the judgment basis of the punching posture is an angle: the angle formed by the three points of the right wrist, the right elbow and the right shoulder is larger than 150°. The evaluation rules may accordingly include: an angle larger than 170° corresponds to a score of 0.5; an angle larger than 160° and smaller than or equal to 170° corresponds to a score of 0.4; an angle larger than 150° and smaller than or equal to 160° corresponds to a score of 0.2.
The judgment basis of the punch-retracting posture is that the right fist is behind the left fist, i.e., a distance condition: the difference between the X-axis coordinate of the left fist and the X-axis coordinate of the right fist (the camera is placed on the person's right side, the right of the image is the positive X direction and the top of the image is the positive Y direction) is less than or equal to 0, and the corresponding score is 0.2. In addition, an evaluation rule of the maintenance time is added: after the punch-retracting standard is reached, holding for more than 1 second corresponds to a score of 0.5.
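The example rules above map directly to simple lookup functions. The following is an illustrative sketch (the interval boundaries and scores follow the example in the text; the function names are assumptions):

```python
def punch_pose_score(angle_deg):
    """Score of the punching posture from the right wrist-elbow-shoulder angle."""
    if angle_deg > 170:
        return 0.5
    if angle_deg > 160:
        return 0.4
    if angle_deg > 150:
        return 0.2
    return 0.0  # judgment basis (angle > 150 degrees) not met

def retract_pose_base_score(left_fist_x, right_fist_x):
    """Base score of the punch-retracting posture (right fist behind the left fist)."""
    # Camera on the person's right side; the right of the image is the +X direction.
    return 0.2 if (left_fist_x - right_fist_x) <= 0 else 0.0
```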
In the embodiment of the present application, the evaluation rule of the maintenance time of a posture includes one or more of the following in combination: a time evaluation rule in which the key point locations constituting the posture continuously satisfy a target distance condition, or a time evaluation rule in which the key point locations constituting the posture continuously satisfy a target angle condition. The target distance condition is a distance condition that needs to be maintained for a certain duration, and the target angle condition is an angle condition that needs to be satisfied for a certain duration; these conditions are related to the judgment conditions of the posture.
The evaluation rule of the maintenance time of a posture can include two parts: the interval in which the maintenance time of the posture falls and the time score coefficient corresponding to that interval. For example, if the time during which the key point locations constituting the posture continuously satisfy the target distance condition falls within a first time interval, the corresponding time score coefficient is a first coefficient; if it falls within a second time interval, the corresponding time score coefficient is a second coefficient. When the posture is scored, its score may be multiplied by the first coefficient or the second coefficient according to the actual situation (the interval in which the maintenance time of the posture falls), resulting in the final score of the posture.
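The interval-plus-coefficient rule can be represented as a short lookup. The following minimal sketch uses assumed thresholds and coefficients purely for illustration:

```python
def time_coefficient(hold_seconds, intervals=((1.0, 0.5), (0.5, 0.3))):
    """Return the coefficient of the first (threshold, coefficient) interval matched."""
    for threshold, coeff in intervals:
        if hold_seconds >= threshold:
            return coeff
    return 0.0

# Final score of a posture = posture score multiplied by the time score coefficient,
# e.g. a 0.4-point posture held for 1.2 seconds:
final_pose_score = 0.4 * time_coefficient(1.2)
```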
The recognition of the aforementioned postures depends on the key points of the target object in the image frame. These key points can be identified and determined by a number of relatively mature recognition techniques. In an alternative implementation, the key point locations may be obtained through a human body posture estimation technique based on a neural network architecture, for example MediaPipe Pose. MediaPipe is a framework for building machine learning pipelines that process time-series data such as video and audio. A convolutional neural network is a kind of feedforward neural network that contains convolution calculations and has a deep structure, and is one of the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, so they are also called shift-invariant artificial neural networks (SIANN). BlazePose is a lightweight convolutional neural network architecture for human body posture estimation that can run real-time inference on mobile devices; during inference, the network generates 33 body key points for a person. MediaPipe Pose is a solution that, based on BlazePose, infers 33 3D landmarks and a background segmentation mask for the whole body from RGB video frames. The 33 key points are shown in fig. 3, which is a schematic distribution diagram of key points of a human body according to an embodiment of the present application.
The reference numerals of the 33 key points in fig. 3 mean:
0 represents the nose; 1 represents the inner side of the left eye; 2 represents the left eye; 3 represents the outer side of the left eye; 4 represents the inner side of the right eye; 5 represents the right eye; 6 represents the outer side of the right eye; 7 represents the left ear; 8 represents the right ear; 9 represents the left mouth corner; 10 represents the right mouth corner; 11 represents the left shoulder; 12 represents the right shoulder; 13 represents the left elbow; 14 represents the right elbow; 15 represents the left wrist; 16 represents the right wrist; 17 represents the left little finger; 18 represents the right little finger; 19 represents the left index finger; 20 represents the right index finger; 21 represents the left thumb; 22 represents the right thumb; 23 represents the left hip; 24 represents the right hip; 25 represents the left knee; 26 represents the right knee; 27 represents the left ankle; 28 represents the right ankle; 29 represents the left heel; 30 represents the right heel; 31 represents the left foot index (toe); and 32 represents the right foot index (toe). In the embodiment of the present application, the key point locations are locations in the image frame that can reflect the posture of the target object, and may be specified manually according to human skeletal features and facial features.
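For reference, the 33 key points can be obtained per frame with the public MediaPipe Pose Python API. The sketch below is illustrative (the video path and function name are assumptions); it converts the normalized landmarks into pixel coordinates:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def keypoints_per_frame(video_path):
    """Yield, for each frame, a list of 33 (x, y) key points in pixel coordinates, or None."""
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h, w = frame.shape[:2]
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks is None:
                yield None  # no person / no valid posture detected in this frame
                continue
            yield [(lm.x * w, lm.y * h) for lm in result.pose_landmarks.landmark]
    cap.release()
```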
An implementation of gesture recognition of a target object in a target video based on the decision criterion in the motion data is described below. FIG. 4 is a flow chart of gesture recognition.
Step 401, identifying a plurality of key point locations of the target object from an image frame of the target video through a human body posture estimation technique based on a neural network architecture.
In practical applications, only some of the key points may be identified, rather than all 33 key points shown in fig. 3, considering that the target object may appear only partially in the image frame because of a specific posture.
Step 402, obtaining distance information and angle information according to the plurality of key point locations of the target object.
Distance information may be derived from two key point locations in the image frame (e.g., the distance between the left fist and the right fist). Angle information may be obtained from the triangle enclosed by three key points in the image frame (e.g., the triangle enclosed by the wrist, elbow and shoulder on the same side).
Because of perspective, objects in the camera image appear larger when near and smaller when far away. Therefore, when calculating distances, the distance information can be expressed as a relative distance rather than an absolute distance, in order to improve the calculation precision and the accuracy of the action evaluation. For example, the forearm of the target object may span many pixels when the target object stands close to the camera and only a few pixels when the target object stands far away. In an alternative implementation, when calculating distances, the distance information is expressed as a multiple of the wrist-to-elbow distance, which is equivalent to using the wrist-to-elbow distance as the unit of the distance information.
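A minimal sketch of this normalization, assuming key points in pixel coordinates indexed with the BlazePose numbering above (the helper name is an assumption):

```python
import math

LEFT_WRIST, RIGHT_WRIST, RIGHT_ELBOW = 15, 16, 14

def relative_distance(p1, p2, keypoints):
    """Distance between p1 and p2 measured in wrist-to-elbow units."""
    unit = math.dist(keypoints[RIGHT_WRIST], keypoints[RIGHT_ELBOW])
    return math.dist(p1, p2) / unit if unit > 0 else float("inf")

# e.g. gap between the two fists (approximated here by the wrists):
# fist_gap = relative_distance(keypoints[LEFT_WRIST], keypoints[RIGHT_WRIST], keypoints)
```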
Step 403, when the distance information and the angle information satisfy all conditions in the judgment basis of a target posture, determining that the posture of the target object in the image frame is the target posture.
If the judgment basis of the target posture only includes a distance condition, the posture can be determined to be the target posture when the distance information obtained in step 402 satisfies that condition;
if the judgment basis of the target posture only includes an angle condition, the posture can be determined to be the target posture when the angle information obtained in step 402 satisfies that condition;
if the judgment basis of the target posture includes both a distance condition and an angle condition, the posture can be determined to be the target posture only when the distance information obtained in step 402 satisfies the distance condition and the angle information obtained in step 402 satisfies the angle condition.
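The three cases can be combined into a single check: a posture is recognized only when every condition in its judgment basis is met. The sketch below is illustrative; the dictionary layout and threshold names are assumptions, not the patent's data format:

```python
def is_target_pose(judgment_basis, distance_info, angle_info):
    """judgment_basis: dict with optional 'min_distance' / 'min_angle' thresholds."""
    if "min_distance" in judgment_basis and distance_info < judgment_basis["min_distance"]:
        return False
    if "min_angle" in judgment_basis and angle_info <= judgment_basis["min_angle"]:
        return False
    return True  # all conditions present in the judgment basis are satisfied

# e.g. the punching posture of the right straight punch only has an angle condition:
punch_basis = {"min_angle": 150.0}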
As mentioned above, the time for the gesture to be maintained can also be taken into account in the recognition of the motion. Furthermore, the time interval between the occurrence of two gestures may also be considered. In an alternative implementation, the plurality of gestures includes: identifying a first pose in a first image frame of the target video and a second pose in a second image frame of the target video; wherein a recording time of the first image frame is earlier than a recording time of the second image frame; the first image frame is the first image frame in the target video which is identified to the first gesture, and the second image frame is the first image frame in the target video which is identified to the second gesture; the target motion in the motion data comprises the first pose before and the second pose after.
The action data further include the time conditions of the first posture and the second posture in the target action. For example, the interval between the moment when the target object first completes the first posture and the moment when it first completes the second posture may be required to be shorter (or longer) than a specific duration; only when this time condition is satisfied can the two postures be considered to jointly constitute one target action. Fig. 5 illustrates an example process flow for recognizing a target action. As shown in fig. 5, the process of determining the target action matched with a plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data includes:
Step 501, acquiring an interval between the recording time of the first image frame and the recording time of the second image frame.
The interval may be derived from the recording times of the frames. Alternatively, the time interval between two frames can be obtained from the ordinal numbers of the frames.
As an example, the time condition is more than 8 seconds.
For example, if the interval is 10 seconds, the time condition is satisfied, and the action matched by the two postures is the target action.
Step 504, when the interval does not satisfy the time condition, determining that the second pose in the second image frame and the first pose in the first image frame fail to jointly match a complete action.
For example, if the interval is 3 seconds, the time condition is not satisfied, and a complete action cannot be matched from the postures in the two image frames. In that case, each image frame following the second image frame needs to be recognized continuously in order to obtain an image frame whose posture satisfies the time condition, and thereby recognize the target action.
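A minimal sketch of the interval check, assuming the frame indices and the frame rate are known (the function name and the 8-second default are taken from the example values above and are otherwise illustrative):

```python
def poses_match_action(first_frame_idx, second_frame_idx, fps, min_interval_s=8.0):
    """Return True when the two postures jointly match a complete action."""
    interval = (second_frame_idx - first_frame_idx) / fps
    return interval > min_interval_s  # example time condition: more than 8 seconds

# poses_match_action(100, 400, 30) -> interval of 10 s, time condition satisfied
```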
If the target action in the action data comprises a first gesture and a second gesture, the application further provides an example implementation mode for obtaining the evaluation result of the target action. For example: the obtaining, according to an evaluation rule of a posture included in the target action in the action data, an evaluation result of the target object executing the target action includes:
obtaining a first score of the first posture of the target object for executing the target action according to the image frame of the target object in the first posture and the evaluation rule of the first posture; and obtaining a second score of the second posture of the target object for executing the target action according to the image frame of the target object in the second posture and the evaluation rule of the second posture. And obtaining an evaluation result of the target action executed by the target object according to the first score and the second score.
That is, the overall evaluation of the target action takes into account the scores of the postures that make up the action; for example, the first score and the second score are added, and the total is the evaluation result of the target action. In one particular example, a complete right straight punch action scores 1 point, consisting of 0.5 points for the punching posture and 0.5 points for the punch-retracting posture.
Furthermore, if the maintenance time of a posture is considered, and the evaluation rule sets a time score coefficient for the interval in which the maintenance time falls, the coefficient and the score can be multiplied to obtain the final score of the posture. The final scores of the different postures constituting one action are then added to obtain the evaluation result of the action.
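A minimal sketch of this aggregation (the data layout and function name are assumptions):

```python
def action_score(pose_scores, time_coefficients=None):
    """pose_scores: per-posture scores; time_coefficients: optional per-posture multipliers."""
    if time_coefficients is None:
        time_coefficients = [1.0] * len(pose_scores)
    return sum(score * coeff for score, coeff in zip(pose_scores, time_coefficients))

# Right straight punch: punching 0.5 + retracting 0.5 -> total score 1.0
# action_score([0.5, 0.5])
```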
It should be noted that, for one pose, the degree of completion or the standard degree may be insufficient in the previous image or images, but the degree of completion or the standard degree tends to increase in the subsequent image or images. If the pose score is based only on previous image frames, it may overlook the true achievable completeness or standard. Therefore, the application provides an implementation mode of covering the gesture scores in continuous multi-frame images so as to record the score of the most standard gesture.
Taking a first pose as an example, if the target object in the continuous multi-frame image frames in the target video is in the first pose, the obtaining a first score of the first pose where the target object executes the target action according to the image frame where the target object is in the first pose and an evaluation rule of the first pose may specifically include:
obtaining an initial first score according to an earlier image frame among the consecutive image frames and the evaluation rule of the first posture; obtaining a new first score according to a later image frame among the consecutive image frames and the evaluation rule of the first posture; and if the new first score is larger than the initial first score, overwriting the initial first score with the new first score.
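The overwriting rule amounts to keeping the highest score reached while consecutive frames remain in the same posture. An illustrative sketch (function name assumed):

```python
def best_pose_score(frame_scores):
    """frame_scores: scores of one posture over consecutive image frames."""
    best = 0.0
    for score in frame_scores:
        if score > best:
            best = score  # the new, higher score overwrites the earlier score
    return best

# best_pose_score([0.4, 0.4, 0.4, 0.4, 0.5]) -> 0.5 (cf. the frames 6-10 example later in the text)
```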
Fig. 6 presents a flow of an action evaluation method. As shown in fig. 6, action evaluation is performed synchronously while the video of the target object is being captured. While capture is not yet finished, the key point location information of the target object's body (point pixel coordinates in the image) is acquired for each frame. If the current frame meets the judgment basis of a posture, it is judged whether the posture is a repeated posture. If it is a repeated posture and the posture score of the current frame reaches the highest score among the repetitions, the previous highest score of the posture is overwritten and the maintenance time of the posture is accumulated. If it is not a repeated posture, the posture recognized in the current frame is recognized for the first time in the target video. If no other posture currently exists that can be matched with it, so that a complete action cannot yet be recognized, the score of the posture is recorded as its highest score and its maintenance time is recorded; if another posture currently exists that can be matched with the current posture to form a complete action, the postures are matched into the action, the scores of all the postures constituting the action are recorded, and the timestamp is recorded, thereby obtaining the score of the whole action. A timestamp may also be included in the action evaluation result; it may include the time when a posture of the action is first completed and the end time.
An example is as follows: assume that the user performs the punching posture in the 6th, 7th, 8th and 9th frames captured by the camera with an angle between 160° and 170°; at this time the punching posture scores 0.4 points. In the 10th frame the angle exceeds 170°, so the punching posture scores 0.5 points and the earlier score is overwritten, because up to this moment the punching posture has not yet been matched with a punch-retracting posture to form a complete right straight punch action. Similarly, if the punch-retracting posture of the user is captured in the 100th frame, the score is 0.2; capture continues, and if in the 130th frame the user is still in the punch-retracting posture and more than 1 second has passed since the 100th frame, the punch-retracting posture scores 0.5, until the punch-retracting condition is no longer satisfied. The user's complete right straight punch action can then be recorded with a score of 1 (punching 0.5 + retracting 0.5).
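The pieces above can be tied together in a single loop that mirrors the flow of fig. 6. The following compact sketch only illustrates the idea (helper names and data structures are assumptions; the time condition between postures is omitted for brevity):

```python
def evaluate_video(frames_keypoints, fps, poses, actions):
    """poses: name -> {'match': judgment basis fn, 'score': evaluation rule fn};
    actions: name -> ordered list of posture names."""
    recognised = {}  # posture name -> {"score": best score, "first_frame": idx, "hold": frames}
    results = []
    for idx, kps in enumerate(frames_keypoints):
        if kps is None:
            continue
        for name, rule in poses.items():
            if not rule["match"](kps):      # judgment basis of the posture
                continue
            score = rule["score"](kps)      # evaluation rule of the posture
            entry = recognised.setdefault(name, {"score": 0.0, "first_frame": idx, "hold": 0})
            entry["score"] = max(entry["score"], score)  # overwrite with the higher score
            entry["hold"] += 1                           # accumulate maintenance time
        for action, pose_names in actions.items():
            if all(p in recognised for p in pose_names):
                total = sum(recognised[p]["score"] for p in pose_names)
                start = recognised[pose_names[0]]["first_frame"] / fps  # action timestamp
                results.append((action, total, start))
                for p in pose_names:
                    recognised.pop(p)
    return results
```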
Based on the action evaluation method provided by the foregoing embodiment, correspondingly, the application further provides an action evaluation device. The following description is made with reference to the embodiments.
The action evaluation device shown in fig. 7 includes: a mode initiation module 701, a gesture recognition module 702, a motion determination module 703 and a motion evaluation module 704.
A mode starting module 701, configured to start an action evaluation mode; before the action evaluation mode is started, action data is recorded in advance, and the action data comprises: mapping relation between the action and the posture contained in the action, judgment basis of the posture and evaluation rule of the posture;
a gesture recognition module 702, configured to perform gesture recognition on a target object in a target video based on the determination criterion in the motion data;
an action determining module 703, configured to determine, according to a plurality of gestures continuously recognized from the target video and a mapping relationship in the action data, a target action matched with the plurality of gestures;
and an action evaluation module 704, configured to obtain an evaluation result of the target object executing the target action according to an evaluation rule of a posture included in the target action in the action data.
As long as the action data are entered in advance, action recognition and evaluation can be performed on the target object according to the target video. The scheme does not need a professional coach video to construct a recognition program for a personalized action, so it broadens the application range of action scoring schemes and better meets the evaluation requirements of personalized actions.
Optionally, the judgment basis includes one or a combination of the following conditions: a distance condition or an angle condition of key point locations; and the gesture recognition module 702 includes:
the key point position identification unit is used for identifying and obtaining a plurality of key point positions of the target object from an image frame of the target video through a human body posture estimation technology based on a neural network architecture;
the information acquisition unit is used for acquiring distance information and angle information according to the plurality of key point positions of the target object;
and the posture determination unit is used for determining that the posture of the target object in the image frame is the target posture when the distance information and the angle information satisfy all conditions in the judgment basis of the target posture.
Optionally, the plurality of gestures comprises: identifying a first pose in a first image frame of the target video and a second pose in a second image frame of the target video; wherein a recording time of the first image frame is earlier than a recording time of the second image frame; the first image frame is the first image frame in the target video which is identified to the first gesture, and the second image frame is the first image frame in the target video which is identified to the second gesture; the target action in the action data comprises the first gesture before and the second gesture after; the motion data further comprises temporal conditions of the first and second poses in the target motion;
the action determining module 703 includes:
an interval acquisition unit configured to acquire an interval between a recording time of the first image frame and a recording time of the second image frame;
an action determining unit, configured to determine, when the interval satisfies the time condition, that the action jointly matched by the first pose in the first image frame and the second pose in the second image frame is the target action; and to determine, when the interval does not satisfy the time condition, that the second pose in the second image frame and the first pose in the first image frame fail to jointly match a complete action.
Optionally, the target motion in the motion data comprises a first gesture and a second gesture;
the action evaluation module 704 includes:
the score acquisition unit is used for acquiring a first score of the first posture of the target object for executing the target action according to the image frame of the target object in the first posture and the evaluation rule of the first posture; the system is further used for obtaining a second score of the second posture of the target object for executing the target action according to the image frame of the target object in the second posture and the evaluation rule of the second posture;
and the evaluation result acquisition unit is used for acquiring an evaluation result of the target object executing the target action according to the first score and the second score.
Optionally, if the target object in the consecutive multi-frame image frames in the target video is in the first pose, the score obtaining unit includes:
the initial score acquisition subunit is configured to obtain an initial first score according to a previous frame of the consecutive multiple frames of image frames and the evaluation rule of the first posture;
the new score obtaining subunit is configured to obtain a new first score according to a subsequent frame of the consecutive multiple frames of image frames and the evaluation rule of the first posture;
and the score overwriting subunit is configured to overwrite the initial first score with the new first score if the new first score is larger than the initial first score.
Another action evaluation device, shown in fig. 8, further includes, in addition to the above modules, an entry module 705. The entry module 705 is configured to: enter the name of the action, the number of postures contained in the action and the names of the postures contained in the action; and enter the judgment basis and evaluation rules of the postures contained in the action.
Optionally, the evaluation rule of the gesture includes one or more of the following combinations:
the evaluation rule of the maintenance time of the posture, the distance evaluation rule of the key point position of the posture, or the evaluation rule of the angle formed by the key point position of the posture.
Optionally, the evaluation rule of the maintaining time of the gesture comprises one or more of the following combination:
and the key point positions forming the gesture continuously meet the time evaluation rule of the target distance condition, or the key point positions forming the gesture continuously meet the time evaluation rule of the target angle condition.
Optionally, the evaluation rule of the maintenance time of the posture includes:
the interval of the maintaining time of the posture and the time fraction coefficient corresponding to the interval.
Optionally, the distance information is expressed as a multiple of the wrist-to-elbow distance.
It should be noted that, in the present specification, all the embodiments are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts suggested as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only one specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (11)
1. An action evaluation method, comprising:
starting an action evaluation mode; before the action evaluation mode is started, action data is recorded in advance, and the action data comprises: mapping relation between the action and the posture contained in the action, judgment basis of the posture and evaluation rule of the posture;
performing gesture recognition on a target object in a target video based on the judgment basis in the action data;
determining a target action matched with a plurality of gestures according to the plurality of gestures continuously recognized from the target video and the mapping relation in the action data;
and obtaining an evaluation result of the target action executed by the target object according to an evaluation rule of the posture contained in the target action in the action data.
2. The method according to claim 1, wherein the decision criterion comprises one or a combination of the following conditions: distance conditions or angle conditions for key point locations; the gesture recognition of the target object in the target video based on the judgment basis in the motion data comprises the following steps:
identifying and obtaining a plurality of key point positions of the target object from the image frame of the target video through a human body posture estimation technology based on a neural network architecture;
obtaining distance information and angle information according to a plurality of key point positions of the target object;
and when all conditions in the judgment basis for determining the target posture according to the distance information and the angle information are met, determining the posture of the target object in the image frame as the target posture.
3. The method of claim 1, wherein the plurality of postures comprise: a first posture recognized in a first image frame of the target video and a second posture recognized in a second image frame of the target video; wherein a recording time of the first image frame is earlier than a recording time of the second image frame; the first image frame is the first image frame of the target video in which the first posture is recognized, and the second image frame is the first image frame of the target video in which the second posture is recognized; the target action in the action data comprises the first posture followed by the second posture; and the action data further comprises a time condition on the first posture and the second posture in the target action;
the determining the target action matched with the plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data comprises:
acquiring an interval between the recording time of the first image frame and the recording time of the second image frame;
when the interval satisfies the time condition, determining that the first posture in the first image frame and the second posture in the second image frame together match the target action;
and when the interval does not satisfy the time condition, determining that the second posture in the second image frame and the first posture in the first image frame fail to match a complete action together.
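The time condition of claim 3 could, for instance, be checked as an interval test over the two recording times; representing the condition as a (min_interval, max_interval) pair is an assumption made only for this sketch:

```python
def matches_time_condition(first_time, second_time, time_condition):
    """first_time / second_time: recording times (in seconds) of the first image
    frames in which the first and second postures were recognized.
    time_condition: (min_interval, max_interval) allowed by the target action."""
    interval = second_time - first_time
    lo, hi = time_condition
    return lo <= interval <= hi

# e.g. the second posture must follow the first within 0.5 to 3.0 seconds
matched = matches_time_condition(first_time=12.0, second_time=13.2,
                                 time_condition=(0.5, 3.0))  # -> True
```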
4. The method of claim 2, wherein the target action in the action data comprises a first posture and a second posture;
the obtaining an evaluation result of the target object executing the target action according to the evaluation rule of the postures contained in the target action in the action data comprises:
obtaining a first score of the first posture in the target action executed by the target object according to the image frame in which the target object is in the first posture and the evaluation rule of the first posture;
obtaining a second score of the second posture in the target action executed by the target object according to the image frame in which the target object is in the second posture and the evaluation rule of the second posture;
and obtaining the evaluation result of the target object executing the target action according to the first score and the second score.
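One possible way to combine the two posture scores of claim 4 into an evaluation result is a weighted sum; the claim does not prescribe any particular combination, so the weights here are illustrative assumptions:

```python
def evaluate_action(first_score, second_score, weights=(0.5, 0.5)):
    """Combine the two per-posture scores into one evaluation result.
    An equally weighted average is only one plausible choice."""
    w1, w2 = weights
    return w1 * first_score + w2 * second_score

result = evaluate_action(first_score=80.0, second_score=90.0)  # -> 85.0
```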
5. The method of claim 4, wherein, if the target object is in the first posture in a plurality of consecutive image frames of the target video, the obtaining the first score of the first posture in the target action executed by the target object according to the image frame in which the target object is in the first posture and the evaluation rule of the first posture comprises:
obtaining an initial first score according to an earlier image frame among the plurality of consecutive image frames and the evaluation rule of the first posture;
obtaining a new first score according to a later image frame among the plurality of consecutive image frames and the evaluation rule of the first posture;
and if the new first score is greater than the initial first score, overwriting the initial first score with the new first score.
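The overwrite rule of claim 5 (a later, higher score replaces the earlier one) amounts to keeping a running maximum over the consecutive frames; best_first_score is a hypothetical helper name used only in this sketch:

```python
def best_first_score(frames, evaluation_rule):
    """frames: consecutive image frames (as key point dicts) in which the target
    object stays in the first posture. A later frame overwrites the running
    score only when it scores higher, so the best-held moment is what counts."""
    best = None
    for keypoints in frames:
        score = evaluation_rule(keypoints)
        if best is None or score > best:
            best = score
    return best
```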
6. The method of claim 1, wherein recording the action data comprises:
recording an action name, the number of postures contained in the action, and the names of the postures contained in the action;
and recording the judgment basis and the evaluation rule of the postures contained in the action.
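A hypothetical example of the data recorded under claim 6 for a "squat" action, kept as plain dictionaries so the example stands on its own; the field names and thresholds are illustrative assumptions, not values from the application:

```python
# One recorded action: its name, the number of postures it contains, the
# posture names, and each posture's judgment basis and evaluation rule.
squat_record = {
    "action_name": "squat",
    "posture_count": 2,
    "postures": [
        {
            "name": "standing",
            "judgment_basis": {"knee_angle_deg": (160, 180)},   # angle condition
            "evaluation_rule": {"target_angle_deg": 175, "max_score": 100},
        },
        {
            "name": "deep_squat",
            "judgment_basis": {"knee_angle_deg": (60, 100)},
            "evaluation_rule": {"target_angle_deg": 90, "max_score": 100,
                                "full_score_hold_seconds": 1.0},
        },
    ],
}
```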
7. The method of claim 1, wherein the evaluation rule of the posture comprises one or a combination of more of the following:
an evaluation rule for the maintaining time of the posture, a distance evaluation rule for the key point positions of the posture, or an evaluation rule for an angle formed by the key point positions of the posture.
8. The method of claim 7, wherein the evaluation rule for the maintaining time of the posture comprises one or more of the following:
a time evaluation rule on how long the key point positions forming the posture continuously meet a target distance condition, or a time evaluation rule on how long the key point positions forming the posture continuously meet a target angle condition.
9. The method of claim 7, wherein the evaluation rule for the maintaining time of the posture comprises:
intervals of the maintaining time of the posture and a time score coefficient corresponding to each interval.
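Claims 8 and 9 could be realized, for example, by measuring how long the key point conditions are continuously met and mapping that duration onto a score coefficient per interval; the function names, the frame rate and the interval table below are assumptions made for illustration:

```python
def hold_duration(frames, condition, fps=30.0):
    """Longest run of consecutive frames whose key points keep satisfying the
    target distance/angle condition, converted to seconds."""
    best = run = 0
    for keypoints in frames:
        run = run + 1 if condition(keypoints) else 0
        best = max(best, run)
    return best / fps

def time_score_coefficient(hold_seconds, intervals):
    """intervals: list of ((lo, hi), coefficient) pairs mapping how long the
    posture was maintained to a score coefficient."""
    for (lo, hi), coefficient in intervals:
        if lo <= hold_seconds < hi:
            return coefficient
    return 0.0

plank_intervals = [((0.0, 1.0), 0.2), ((1.0, 2.0), 0.6), ((2.0, float("inf")), 1.0)]
coefficient = time_score_coefficient(1.5, plank_intervals)  # -> 0.6
```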
10. The method of claim 2, wherein the distance information is expressed as a multiple of the distance from the wrist to the elbow, the wrist-to-elbow distance being taken as a reference radius.
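One way to read claim 10 is that the wrist-to-elbow distance serves as the unit (radius) in which other distances are measured, making the conditions scale-independent; the sketch below normalizes distances accordingly, with hypothetical key point names:

```python
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def normalized_distance(keypoints, a, b, side="right"):
    """Express the distance between key points a and b as a multiple of the
    wrist-to-elbow distance, so the distance condition does not depend on how
    far the subject stands from the camera or on the image resolution."""
    unit = _dist(keypoints[f"{side}_wrist"], keypoints[f"{side}_elbow"])
    return _dist(keypoints[a], keypoints[b]) / unit if unit > 0 else float("inf")

# e.g. a condition such as "wrists at least 1.5 forearm lengths apart":
# normalized_distance(kp, "left_wrist", "right_wrist") >= 1.5
```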
11. An action evaluation device, comprising:
a mode starting module, configured to start an action evaluation mode, wherein action data is recorded in advance before the action evaluation mode is started, the action data comprising: a mapping relation between an action and the postures contained in the action, a judgment basis for each posture, and an evaluation rule for each posture;
a posture recognition module, configured to perform posture recognition on a target object in a target video based on the judgment basis in the action data;
an action determining module, configured to determine a target action matched with a plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data;
and an action evaluation module, configured to obtain an evaluation result of the target object executing the target action according to the evaluation rule of the postures contained in the target action in the action data.
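The four modules of claim 11 could be composed as in the following sketch, where the concrete recognition, matching and scoring callables are injected; the class and parameter names are illustrative and not taken from the application:

```python
class ActionEvaluationDevice:
    """Minimal composition of the four modules of claim 11; the concrete
    recognition, matching and scoring logic is passed in so the sketch stays
    independent of any particular pose-estimation backend."""

    def __init__(self, action_data, recognize_posture, determine_action, evaluate_action):
        self.action_data = action_data            # recorded before the mode starts
        self._recognize = recognize_posture       # posture recognition module
        self._determine = determine_action        # action determining module
        self._evaluate = evaluate_action          # action evaluation module
        self.mode_on = False

    def start_mode(self):                         # mode starting module
        self.mode_on = True

    def evaluate_video(self, frames):
        """frames: iterable of per-frame key point dicts from the target video."""
        if not self.mode_on:
            raise RuntimeError("the action evaluation mode has not been started")
        postures = [self._recognize(frame) for frame in frames]
        target_action = self._determine(postures)
        return self._evaluate(target_action, postures) if target_action else None
```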
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211055338.8A CN115131879B (en) | 2022-08-31 | 2022-08-31 | Action evaluation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115131879A true CN115131879A (en) | 2022-09-30 |
CN115131879B CN115131879B (en) | 2023-01-06 |
Family
ID=83387521
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211055338.8A Active CN115131879B (en) | 2022-08-31 | 2022-08-31 | Action evaluation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115131879B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117078976A (en) * | 2023-10-16 | 2023-11-17 | 华南师范大学 | Action scoring method, action scoring device, computer equipment and storage medium |
WO2024085095A1 (en) * | 2022-10-21 | 2024-04-25 | ソニーグループ株式会社 | Information processing method, information processing device, and computer-readable non-transitory storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103390174A (en) * | 2012-05-07 | 2013-11-13 | 深圳泰山在线科技有限公司 | Physical education assisting system and method based on human body posture recognition |
CN110111368A (en) * | 2019-05-07 | 2019-08-09 | 山东广域科技有限责任公司 | A kind of detecting and tracking method of the similar mobile target based on human body attitude identification |
CN111754571A (en) * | 2019-03-28 | 2020-10-09 | 北京沃东天骏信息技术有限公司 | Gesture recognition method and device and storage medium thereof |
CN112597933A (en) * | 2020-12-29 | 2021-04-02 | 咪咕互动娱乐有限公司 | Action scoring method and device and readable storage medium |
CN112784786A (en) * | 2021-01-29 | 2021-05-11 | 联想(北京)有限公司 | Human body posture recognition method and device |
CN113392746A (en) * | 2021-06-04 | 2021-09-14 | 北京格灵深瞳信息技术股份有限公司 | Action standard mining method and device, electronic equipment and computer storage medium |
CN114639168A (en) * | 2022-03-25 | 2022-06-17 | 中国人民解放军国防科技大学 | Method and system for running posture recognition |
Also Published As
Publication number | Publication date |
---|---|
CN115131879B (en) | 2023-01-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115131879B (en) | Action evaluation method and device | |
CN108256433B (en) | Motion attitude assessment method and system | |
WO2021051579A1 (en) | Body pose recognition method, system, and apparatus, and storage medium | |
CN107909060A (en) | Gymnasium body-building action identification method and device based on deep learning | |
CN110428486B (en) | Virtual interaction fitness method, electronic equipment and storage medium | |
US20100208038A1 (en) | Method and system for gesture recognition | |
Chaudhari et al. | Yog-guru: Real-time yoga pose correction system using deep learning methods | |
KR102594938B1 (en) | Apparatus and method for comparing and correcting sports posture using neural network | |
CN110688929B (en) | Human skeleton joint point positioning method and device | |
KR102320960B1 (en) | Personalized home training behavior guidance and correction system | |
CN114067358A (en) | Human body posture recognition method and system based on key point detection technology | |
CN110298220B (en) | Action video live broadcast method, system, electronic equipment and storage medium | |
CN107018330A (en) | A kind of guidance method and device of taking pictures in real time | |
CN113255522B (en) | Personalized motion attitude estimation and analysis method and system based on time consistency | |
Limcharoen et al. | View-independent gait recognition using joint replacement coordinates (jrcs) and convolutional neural network | |
CN110633004A (en) | Interaction method, device and system based on human body posture estimation | |
CN113705540A (en) | Method and system for recognizing and counting non-instrument training actions | |
CN111967407B (en) | Action evaluation method, electronic device, and computer-readable storage medium | |
CN118380096A (en) | Rehabilitation training interaction method and device based on algorithm tracking and virtual reality | |
US20220273984A1 (en) | Method and device for recommending golf-related contents, and non-transitory computer-readable recording medium | |
CN115331314A (en) | Exercise effect evaluation method and system based on APP screening function | |
CN113409651A (en) | Live broadcast fitness method and system, electronic equipment and storage medium | |
US20220138966A1 (en) | Repetition counting and classification of movements systems and methods | |
KR20230086874A (en) | Rehabilitation training system using 3D body precision tracking technology | |
CN114513694A (en) | Scoring determination method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||