CN115131879B - Action evaluation method and device - Google Patents

Action evaluation method and device

Info

Publication number
CN115131879B
CN115131879B (application CN202211055338.8A)
Authority
CN
China
Prior art keywords
action
target
posture
image frame
evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211055338.8A
Other languages
Chinese (zh)
Other versions
CN115131879A (en)
Inventor
王硕
闵博
孙成新
王金明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Feihu Information Technology Tianjin Co Ltd
Original Assignee
Feihu Information Technology Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Feihu Information Technology Tianjin Co Ltd
Priority to CN202211055338.8A
Publication of CN115131879A
Application granted
Publication of CN115131879B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0003 Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
    • A63B24/0006 Computerised comparison for qualitative assessment of motion sequences or the course of a movement
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0669 Score-keepers or score display devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0003 Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
    • A63B24/0006 Computerised comparison for qualitative assessment of motion sequences or the course of a movement
    • A63B2024/0012 Comparing movements or motion sequences with a registered reference
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2230/00 Measuring physiological parameters of the user
    • A63B2230/62 Measuring physiological parameters of the user posture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an action evaluation method and device. First, an action evaluation mode is started; before the action evaluation mode is started, action data is entered in advance, including the mapping relation between an action and the postures it contains, the judgment basis of each posture, and the evaluation rule of each posture. Then, posture recognition is performed on a target object in a target video based on the judgment basis in the action data; a target action matched with a plurality of postures is determined according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data; finally, an evaluation result of the target object performing the target action is obtained according to the evaluation rules of the postures contained in the target action. Only the action data needs to be entered in advance, and action recognition and evaluation can then be performed on the target object from the target video. The scheme does not need a professional coach video against which to build a recognition program for a personalized action, so it broadens the application range of action scoring schemes and better meets the evaluation requirements of personalized actions.

Description

Action evaluation method and device
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method and an apparatus for evaluating an action.
Background
In daily life, people often enrich their experiences or improve their physical fitness through sports and artistic activities such as fitness training, martial arts, dancing, and yoga. However, scoring the actions performed in these activities is difficult to achieve.
Taking fitness as an example: at the beginning, people generally follow a trainer to practice common fitness actions. To find out whether an action is performed to standard, an existing technique works as follows: for a video of a designated fitness trainer, a developer writes motion-recognition code for the trainer's actions; when a user (a fitness student) wants his or her motion evaluated, the user exercises following the trainer's fitness actions, and the program scores how standard the motion is. However, as experience grows, people may practice their own personalized fitness actions and wish to have those evaluated as well.
If the evaluation of a personalized fitness action were implemented in the same way, a recognition program would have to be created in advance from a video containing that action. On the one hand, specially creating a recognition program for the action in a video is costly; on the other hand, if no suitable reference video can be found, the action cannot be registered in the recognition program in advance at all. Therefore, the application range of the prior art for action scoring is narrow, and it is difficult to meet the evaluation requirements of more personalized actions.
Disclosure of Invention
Based on the above problems, the application provides an action evaluation method and device to improve the application range of an action scoring scheme and better meet the evaluation requirement of personalized actions.
The embodiment of the application discloses the following technical scheme:
in a first aspect of the present application, there is provided an action evaluation method, including:
starting an action evaluation mode; before the action evaluation mode is started, action data is recorded in advance, and the action data comprises: mapping relation between the action and the posture contained in the action, judgment basis of the posture and evaluation rule of the posture;
performing gesture recognition on a target object in a target video based on the judgment basis in the action data;
determining a target action matched with a plurality of gestures according to the plurality of gestures continuously recognized from the target video and the mapping relation in the action data;
and obtaining an evaluation result of the target action executed by the target object according to an evaluation rule of the posture contained in the target action in the action data.
Optionally, the judgment basis includes one condition or a combination of the following conditions: a distance condition of key point locations or an angle condition of key point locations; the performing gesture recognition on the target object in the target video based on the judgment basis in the action data comprises the following steps:
identifying and obtaining a plurality of key point positions of the target object from the image frame of the target video through a human body posture estimation technology based on a neural network architecture;
obtaining distance information and angle information according to the plurality of key point positions of the target object;
and when the distance information and the angle information satisfy all conditions in the judgment basis of the target posture, determining that the posture of the target object in the image frame is the target posture.
Optionally, the plurality of gestures comprises: a first pose identified in a first image frame of the target video and a second pose identified in a second image frame of the target video; wherein a recording time of the first image frame is earlier than a recording time of the second image frame; the first image frame is an image frame in which the first posture is recognized in the target video, and the second image frame is an image frame in which the second posture is recognized in the target video; the target action in the action data comprises the first gesture followed by the second gesture; the motion data further comprises time conditions of the first and second poses in the target action;
the determining the target actions matched with the gestures according to the gestures continuously recognized from the target video and the mapping relation in the action data comprises the following steps:
acquiring an interval between the recording time of the first image frame and the recording time of the second image frame;
when the interval satisfies the time condition, determining that the first pose in the first image frame and the second pose in the second image frame together match the target action;
when the interval does not satisfy the time condition, determining that the second pose in the second image frame and the first pose in the first image frame fail to match together into a complete action.
Optionally, the target motion in the motion data comprises a first gesture and a second gesture;
the obtaining, according to an evaluation rule of a posture included in the target action in the action data, an evaluation result of the target object executing the target action includes:
obtaining a first score of the first posture of the target object for executing the target action according to the image frame of the target object in the first posture and the evaluation rule of the first posture;
obtaining a second score of the second posture of the target object for executing the target action according to the image frame of the target object in the second posture and the evaluation rule of the second posture;
and obtaining an evaluation result of the target action executed by the target object according to the first score and the second score.
Optionally, if the target object in the consecutive multi-frame image frames in the target video is in the first posture, the obtaining a first score of the first posture of the target object performing the target action according to the image frame of the target object in the first posture and an evaluation rule of the first posture includes:
obtaining an initial first score according to an earlier image frame among the consecutive image frames and the evaluation rule of the first posture;
obtaining a new first score according to a later image frame among the consecutive image frames and the evaluation rule of the first posture;
and if the new first score is greater than the initial first score, overwriting the initial first score with the new first score.
Optionally, entering the action data comprises:
inputting the name of the action, the number of postures contained in the action and the name of the posture contained in the action;
and recording the judgment basis and the evaluation rule of the gesture contained in the action.
Optionally, the evaluation rule of the gesture includes one or more of the following combinations:
the evaluation rule of the maintaining time of the gesture, the distance evaluation rule of the key point of the gesture, or the evaluation rule of the angle formed by the key point of the gesture.
Optionally, the evaluation rule of the maintenance time of the posture comprises one or more of the following combination:
and the key point positions forming the gesture continuously meet the time evaluation rule of the target distance condition, or the key point positions forming the gesture continuously meet the time evaluation rule of the target angle condition.
Optionally, the evaluation rule of the maintenance time of the posture includes:
an interval of the maintaining time of the posture and a time score coefficient corresponding to the interval.
Optionally, the distance information is expressed as a multiple of the wrist-to-elbow distance, i.e., the wrist-to-elbow distance is used as the unit of measurement.
In a second aspect of the present application, there is provided an action evaluation device including:
the mode starting module is used for starting the action evaluation mode; before the action evaluation mode is started, action data is recorded in advance, and the action data comprises: mapping relation between the action and the posture contained in the action, judgment basis of the posture and evaluation rule of the posture;
the gesture recognition module is used for carrying out gesture recognition on the target object in the target video based on the judgment basis in the action data;
the action determining module is used for determining a target action matched with a plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data;
and the action evaluation module is used for obtaining an evaluation result of the target object executing the target action according to an evaluation rule of the posture contained in the target action in the action data.
Compared with the prior art, the method has the following beneficial effects:
according to the action evaluation method and device, firstly, an action evaluation mode is started; before the action evaluation mode is started, action data including mapping relation between actions and postures contained in the actions, judgment basis of the postures and evaluation rules of the postures are recorded in advance. Then, performing gesture recognition on a target object in the target video based on a judgment basis in the action data; determining a plurality of target actions matched with the gestures according to a plurality of gestures continuously recognized from the target video and mapping relations in the action data; and finally, obtaining an evaluation result of the target action executed by the target object according to the evaluation rule of the posture contained in the target action in the action data. And the target object can be identified and evaluated according to the target video only by inputting action data in advance. The scheme does not need to be matched with a professional coach video to construct an identification program of the personalized action, can improve the application range of the action scoring scheme, and better meets the evaluation requirement of the personalized action.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present application, and that other drawings may be obtained from them by those skilled in the art without inventive labor.
Fig. 1 is a flowchart of an action evaluation method provided in an embodiment of the present application;
fig. 2 is a flowchart for entering action data according to an embodiment of the present application;
fig. 3 is a schematic distribution diagram of key points of a human body according to an embodiment of the present application;
FIG. 4 is a flow chart of gesture recognition provided by an embodiment of the present application;
FIG. 5 is a flow chart of a target action recognition provided by an embodiment of the present application;
FIG. 6 is a flow chart of another method for evaluating actions according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an action evaluation device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of another motion evaluation device according to an embodiment of the present application.
Detailed Description
As described above, activities such as yoga, boxing, fitness training, martial arts, and dancing are increasingly popular, and many people want the personalized actions they perform to be scored and evaluated. For a personalized action, it is not easy to find a suitable video from which to create an action recognition program, so action evaluation is difficult and costly. Therefore, the application range of prior-art action scoring schemes is narrow, and it is difficult to meet the evaluation requirements of more personalized actions.
Through research, the inventors propose a scheme in which action data is entered in advance to realize action evaluation. To automatically evaluate a personalized action, only the action data related to that action needs to be entered in advance, such as the postures contained in the action, the judgment basis of each posture, and the evaluation rule of each posture. On this basis, the action performed by the target object in a video can be evaluated, which avoids the need in the prior art to specially create an action recognition program. This scheme can therefore more easily meet the evaluation requirements of personalized actions.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Method embodiment
Referring to fig. 1, which is a flowchart of an action evaluation method provided in an embodiment of the present application. The action evaluation method shown in fig. 1 includes:
Step 101, an action evaluation mode is started.
The action evaluation mode can be turned on or off by the user. Specifically, if the user chooses to enable it, the subsequent steps of the method are performed on the target video corresponding to the user (i.e., the video to be evaluated) to carry out action recognition and evaluation. If the user chooses not to enable the action evaluation mode, or the mode is not turned on by default, then even if the user provides a target video, the actions of the target object in the video are not recognized or scored. The user referred to here is the user who triggers the start of the action evaluation mode; this may be the target object itself (i.e., the object whose actions are to be evaluated), or a contact of the target object, such as a parent or a coach.
It should be noted that action data has been entered in advance before the action evaluation mode is started. For ease of understanding, the embodiments of this solution can be regarded as running on an action evaluation system. The action evaluation system can receive a mode trigger instruction from the user; after learning from the instruction that the action evaluation mode needs to be started, the system can perform action evaluation on the received target video. The system may include a storage medium for storing the pre-entered action data. The operation of entering the action data may be performed by the target object or by the aforementioned user.
In an embodiment of the present application, the action data includes: the mapping relation between an action and the postures it contains, the judgment basis of each posture, and the evaluation rule of each posture. For example, a right straight punch action contains two postures: a punch-out posture and a punch-retract posture. Therefore, the action data includes the mapping relation between the straight punch action and these two postures, as well as the judgment basis and evaluation rule of the punch-out posture and the judgment basis and evaluation rule of the punch-retract posture.
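For illustration only, the action data described above could be organized as in the following sketch (in Python). The field names, posture names, and numeric thresholds are assumptions made for this example and are not prescribed by the application.

```python
# Hedged sketch: one possible in-memory layout for pre-entered action data.
# All identifiers and numeric thresholds below are illustrative assumptions.
ACTION_DATA = {
    "right_straight_punch": {                        # action name
        "poses": ["punch_out", "punch_retract"],     # mapping: action -> ordered postures
        "time_condition_s": 8.0,                     # hypothetical time condition between the postures
        "judgment_basis": {
            "punch_out": [                           # angle condition on three key points
                {"type": "angle",
                 "points": ("right_wrist", "right_elbow", "right_shoulder"),
                 "min_deg": 150.0},
            ],
            "punch_retract": [                       # distance condition on two key points
                {"type": "distance_x",
                 "points": ("left_wrist", "right_wrist"),
                 "max_diff": 0.0},
            ],
        },
        "evaluation_rules": {
            "punch_out": {"angle_brackets": [(170.0, 0.5), (160.0, 0.4), (150.0, 0.2)]},
            "punch_retract": {"base_score": 0.2, "hold_time_s": 1.0, "hold_score": 0.5},
        },
    },
}
```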
Step 102, recognizing the posture of the target object in the target video based on the judgment basis in the action data.
As described above, the action data includes the judgment basis of each posture contained in an action. The judgment basis of a posture may include one condition or a combination of conditions. Generally, the pose of the human body can be determined from key point locations; for example, a particular number of key points can determine a posture. The judgment conditions are likewise related to the key points. For example, a posture may be determined based on a distance condition on key point locations and/or an angle condition on key point locations. For instance, the judgment basis of the punch-out posture in a right straight punch includes: the angle formed by the right wrist, right elbow, and right shoulder is greater than a preset angle (an angle condition on three key point locations). The judgment basis of the punch-retract posture in the right straight punch includes: the right fist is behind the left fist, with the distance between the left fist and the right fist reaching a preset distance (a distance condition on two key point locations).
And performing gesture recognition on the target object in the target video based on the gesture judgment basis in the recorded action data so as to determine the specific gesture of the target object in the image frame of the video.
Step 103, determining target actions matched with the multiple gestures according to the multiple gestures continuously recognized from the target video and the mapping relation in the action data.
A video is made up of a sequence of image frames along a timeline. Posture recognition can be performed on each image frame of the target video according to the foregoing step 102, based on the key points in that frame. A valid posture (i.e., one of the entered postures) may be recognized in some image frames, while in other frames no valid posture is recognized (e.g., the posture does not satisfy any judgment basis, or no matching posture is found among the entered postures).
As mentioned above, the action data includes the mapping relation between an action and the postures it contains. If a plurality of different postures recognized consecutively in the target video can all be identified as postures that have a mapping relation with the same action, that action can be determined as the target action matched by these postures. For example, if a punch-out posture and a punch-retract posture are recognized in sequence, a straight punch can be determined as the target action.
Step 104, obtaining an evaluation result of the target action executed by the target object according to an evaluation rule of the posture contained in the target action in the action data.
As mentioned above, the action data includes the evaluation rules of the postures, so the evaluation result of the target object performing the target action can be obtained from how well the target object, as shown in the target video, completes each posture of the action, combined with the corresponding evaluation rules. It should be noted that different actions may contain the same posture or postures while imposing different requirements on them, so the evaluation rules of the same posture in different actions may also differ. During evaluation, a posture is evaluated according to the evaluation rule of that posture within the target action to which it belongs, and the evaluation of the corresponding action is then obtained. In addition, the same action may contain the same posture appearing at different points with different requirements; for example, an action may contain a first posture, a second posture, and a third posture in chronological order, where the first posture is the same as the third posture but the angle or distance requirements on their key points differ. Therefore, the evaluation rules of the same posture within the same action may be the same or different. The evaluation rules and judgment conditions in the action data can be set and entered according to the actual requirements of the personalized action.
According to the technical scheme, the action identification and evaluation can be performed on the target object according to the target video only by inputting the action data in advance. The scheme does not need to be matched with a professional coach video to construct a personalized action recognition program, can improve the application range of the action scoring scheme, and better meets the evaluation requirement of personalized actions.
It should be noted that fig. 1 is only an exemplary flowchart of the action evaluation method provided in the embodiment of the present application, in which steps 102 to 104 are executed in sequence. In practical applications, step 104 may also be executed during the execution of step 103; alternatively, the postures in an action may be evaluated first while the action corresponding to the postures continues to be matched, so that the evaluation result of the target action is obtained at the same time the target action is determined. Therefore, the execution order of steps 103 and 104 is not limited in this application.
As mentioned above, the action data is entered in advance, before action evaluation. A flow for entering action data is described below with reference to fig. 2. Fig. 2 is a flowchart of entering action data provided in an embodiment of the present application; entering the action data includes:
step 201, inputting the name of the action, the number of gestures included in the action and the name of the gesture included in the action.
For example, the entered action name is: right straight punch. The number of postures contained in the action is 2, namely a punch-out posture and a punch-retract posture. When entering the postures, the sequential relationship of the postures contained in the action can also be entered; for example, the names of the punch-out and punch-retract postures are entered in chronological order.
Step 202, inputting judgment basis and evaluation rule of the gesture contained in the action.
The explanation continues with the example above:
The judgment basis and evaluation rule of the punch-out posture are entered, and the judgment basis and evaluation rule of the punch-retract posture are entered. It should be noted that the judgment basis of one posture may include one or more judgment conditions. The evaluation rules may cover several aspects, for example rules relating to the distance between key points or the angle formed by key points. In other example implementations, the evaluation rules may also involve the time dimension: for some particular postures, the time for which the posture is maintained is also considered when evaluating how complete or standard it is. Thus, the evaluation rules of a posture may include one or a combination of the following: an evaluation rule on the maintaining time of the posture, a distance evaluation rule on the key point locations of the posture, or an evaluation rule on the angle formed by the key point locations of the posture.
For example, the judgment basis of the punch-out posture is an angle condition: the angle formed by the right wrist, the right elbow, and the right shoulder is greater than 150°. The corresponding evaluation rule may then be: an angle greater than 170° scores 0.5; an angle greater than 160° and at most 170° scores 0.4; an angle greater than 150° and at most 160° scores 0.2.
The judgment basis of the punch-retract posture is that the right fist is behind the left fist, i.e., a distance condition: the difference between the X-axis coordinate of the left fist and the X-axis coordinate of the right fist is less than or equal to 0 (the camera is placed to the person's right, with the right side of the image as the positive X direction and the top of the image as the positive Y direction); the corresponding score is 0.2. In addition, an evaluation rule on the maintaining time is added: if the posture is held for more than 1 second after the punch-retract condition is reached, the corresponding score is 0.5.
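A minimal sketch of how such bracketed angle rules might be applied to score the punch-out posture is given below; the function name and the bracket representation are assumptions, not part of the application.

```python
def score_punch_out(angle_deg, brackets=((170.0, 0.5), (160.0, 0.4), (150.0, 0.2))):
    """Score the punch-out posture from the right wrist-elbow-shoulder angle.

    `brackets` lists (lower_bound_deg, score) pairs from strictest to loosest,
    mirroring the example rule above; the values are illustrative.
    """
    for lower_bound, score in brackets:
        if angle_deg > lower_bound:
            return score
    return 0.0  # the angle does not satisfy the judgment basis at all

# e.g. score_punch_out(165.0) returns 0.4, score_punch_out(172.0) returns 0.5
```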
In the embodiment of the present application, the evaluation rule on the maintaining time of a posture includes one or a combination of the following: a time evaluation rule for the key point locations constituting the posture continuously satisfying a target distance condition, or a time evaluation rule for the key point locations constituting the posture continuously satisfying a target angle condition. The target distance condition is a distance condition that is required to be maintained for a certain period of time, and the target angle condition is an angle condition that is required to be satisfied for a certain period of time; both are related to the judgment conditions of the posture.
The evaluation rule on the maintaining time of a posture can comprise two parts: intervals of the maintaining time of the posture and the time score coefficient corresponding to each interval. For example, if the time for which the key point locations constituting the posture continuously satisfy the target distance condition falls within a first time interval, the corresponding time score coefficient is a first coefficient; if that time falls within a second time interval, the corresponding time score coefficient is a second coefficient. When the posture is scored, its base score is multiplied by the first or second coefficient according to the interval in which the actual maintaining time falls, yielding the final score of the posture.
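As a sketch, the interval-to-coefficient lookup described above might look like the following; the interval boundaries and coefficient values are hypothetical and only illustrate the two-part rule.

```python
def hold_time_coefficient(hold_seconds):
    """Map the maintaining time of a posture to a time score coefficient.

    The intervals and coefficients below are illustrative assumptions,
    not values prescribed by the application.
    """
    if hold_seconds >= 1.0:   # first time interval  -> first coefficient
        return 1.0
    if hold_seconds >= 0.5:   # second time interval -> second coefficient
        return 0.6
    return 0.3                # maintained only briefly

# final posture score = base score of the posture * hold_time_coefficient(actual hold time)
```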
The recognition of the aforementioned postures depends on the key point locations of the target object in the image frame. These key points can be identified and determined by a number of relatively mature recognition techniques. In an optional implementation, the key points are obtained through a human body posture estimation technique based on a neural network architecture; optionally, the technique is MediaPipe Pose. MediaPipe is a framework for building machine learning pipelines that process time-series data such as video and audio. A convolutional neural network is a class of feedforward neural networks that contain convolution computations and have a deep structure, and is one of the representative algorithms of deep learning. Convolutional neural networks have representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, and are therefore also called shift-invariant artificial neural networks (SIANN). BlazePose is a lightweight convolutional neural network architecture for human body posture estimation that can run real-time inference on mobile devices; during inference, the network generates 33 body key points for a person. MediaPipe Pose is a solution based on BlazePose that infers 33 3D landmarks and a background segmentation mask for the whole body from RGB video frames; the 33 key points are shown in fig. 3, a schematic distribution diagram of human body key points provided in an embodiment of the present application.
The reference numerals of the 33 key points in fig. 3 mean:
0 represents the nose; 1 represents the inner side of the left eye; 2 represents the left eye; 3 represents the outer side of the left eye; 4 represents the inner side of the right eye; 5 represents the right eye; 6 represents the outer side of the right eye; 7 represents the left ear; 8 represents the right ear; 9 represents the left mouth corner; 10 represents the right mouth corner; 11 represents the left shoulder; 12 represents the right shoulder; 13 represents the left elbow; 14 represents the right elbow; 15 represents the left wrist; 16 represents the right wrist; 17 represents the left little finger; 18 represents the right little finger; 19 represents the left index finger; 20 represents the right index finger; 21 represents the left thumb; 22 represents the right thumb; 23 represents the left hip; 24 represents the right hip; 25 represents the left knee; 26 represents the right knee; 27 represents the left ankle; 28 represents the right ankle; 29 represents the left heel; 30 represents the right heel; 31 represents the left foot index (toe); and 32 represents the right foot index (toe). In the embodiment of the present application, the key point locations are points in the image frame that can reflect the posture of the target object and may be specified manually according to human skeletal features and facial features.
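A minimal sketch of extracting the 33 key points per frame is shown below, assuming the classic `mediapipe` Python solutions API and OpenCV for video decoding; the function name and the normalized-coordinate output format are choices made for this example.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_keypoints(video_path):
    """Yield, for each frame, a dict mapping landmark index (0-32) to normalized (x, y)."""
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe Pose expects RGB input; OpenCV decodes frames as BGR.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks is None:
                yield None  # no person / posture detected in this frame
                continue
            lm = results.pose_landmarks.landmark
            yield {i: (lm[i].x, lm[i].y) for i in range(33)}
    cap.release()
```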
An implementation of gesture recognition of a target object in a target video based on the decision criterion in the motion data is described below. FIG. 4 is a flow chart of gesture recognition.
Step 401, identifying and obtaining a plurality of key point positions of the target object from the image frame of the target video through a human body posture estimation technology based on a neural network architecture.
In practical applications, only some of the key points may be identified rather than all 33 key points shown in fig. 3, considering that, because of a specific posture, part of the target object may not appear in the image frame.
Step 402, obtaining distance information and angle information according to the plurality of key point positions of the target object.
Distance information can be derived from two key point locations in the image frame (e.g., the distance between the left fist and the right fist). Angle information can be obtained from the triangle formed by three key points in the image frame (e.g., the triangle formed by the wrist, elbow, and shoulder on the same side).
Because of perspective, objects in the camera image appear larger when near and smaller when far. To improve calculation precision and the accuracy of action evaluation, distance information can therefore be expressed as a relative distance rather than an absolute distance. For example, the same forearm spans more pixels when the target object stands close to the camera than when standing far away. In an optional implementation, the distance information is expressed as a multiple of the wrist-to-elbow distance; in other words, the wrist-to-elbow distance is used as the unit in which distances are measured.
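The distance and angle information of step 402 could be computed roughly as follows. This is a sketch assuming 2D key point coordinates (pixel or normalized) and using the wrist-to-elbow distance as the unit, as described above; the function names are illustrative.

```python
import math

def angle_deg(a, b, c):
    """Angle ABC in degrees, where a, b, c are (x, y) key point coordinates (vertex at b)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos_val = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_val))))

def relative_distance(p, q, wrist, elbow):
    """Distance between p and q expressed as a multiple of the wrist-to-elbow distance."""
    unit = math.dist(wrist, elbow)
    return math.dist(p, q) / unit if unit > 0 else 0.0
```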
Step 403, when the distance information and the angle information satisfy all conditions in the judgment basis of the target posture, determining that the posture of the target object in the image frame is the target posture.
If the judgment basis of the target posture includes only a distance condition, the posture can be determined to be the target posture when the distance information obtained in step 402 satisfies that condition;
if the judgment basis of the target posture includes only an angle condition, the posture can be determined to be the target posture when the angle information obtained in step 402 satisfies that condition;
if the judgment basis of the target posture includes both a distance condition and an angle condition, the posture can be determined to be the target posture only when the distance information obtained in step 402 satisfies the distance condition and the angle information satisfies the angle condition.
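Step 403 can then be sketched as a check that every condition in the posture's judgment basis holds; the condition layout reuses the hypothetical structure sketched earlier and is not prescribed by the application.

```python
def pose_matches(conditions, keypoints):
    """Return True only when every condition of a posture's judgment basis is satisfied.

    `conditions` is a list of angle/distance conditions in the illustrative layout
    sketched earlier; `keypoints` maps key point names to (x, y) coordinates.
    """
    for cond in conditions:
        if cond["type"] == "angle":
            a, b, c = (keypoints[name] for name in cond["points"])
            if angle_deg(a, b, c) <= cond["min_deg"]:
                return False
        elif cond["type"] == "distance_x":
            p, q = (keypoints[name] for name in cond["points"])
            if p[0] - q[0] > cond["max_diff"]:
                return False
    return True  # all conditions satisfied, so the target posture is recognized
```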
As mentioned above, the time for which a posture is maintained can be taken into account in the recognition of an action. Furthermore, the time interval between the occurrence of two postures can also be considered. In an optional implementation, the plurality of postures includes: a first posture recognized in a first image frame of the target video and a second posture recognized in a second image frame of the target video, where the recording time of the first image frame is earlier than that of the second image frame; the first image frame is the first image frame in the target video in which the first posture is recognized, and the second image frame is the first image frame in which the second posture is recognized; the target action in the action data comprises the first posture followed by the second posture.
The action data further includes a time condition on the first posture and the second posture in the target action. For example, the time between the target object first completing the first posture and first completing the second posture may be required to be shorter (or longer) than a specific duration; only when this time condition is satisfied are the two postures considered to combine into one target action. FIG. 5 illustrates an example flow for identifying a target action. As shown in fig. 5, the process of determining the target action matched with the plurality of postures, according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data, includes:
step 501, obtaining an interval between the recording time of the first image frame and the recording time of the second image frame.
The interval can be derived from the recording times of the two frames. Alternatively, the time interval between two frames can be obtained from their frame ordinals (indices) and the frame rate.
Step 502, judging whether the interval meets the time condition of the target action, and if so, entering step 503; if not, go to step 504.
As an example, the time condition is more than 8 seconds.
Step 503, when the interval satisfies the time condition, determining that the first pose in the first image frame and the second pose in the second image frame together match the target action.
For example, if the interval is 10 seconds and the time condition is satisfied, the two posture-matched motions are the target motions.
Step 504, when the interval does not satisfy the time condition, determining that the second pose in the second image frame and the first pose in the first image frame fail to match together into a complete action.
For example, if the interval is 3 seconds, the time condition is not satisfied, and a complete action cannot be matched from the poses in the two frames. In this case, recognition continues on the image frames following the second image frame, in order to find an image frame whose posture does satisfy the time condition and thereby identify the target action.
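A minimal sketch of the time-condition check in steps 501-504 is given below, deriving the interval from frame indices and the frame rate; whether the condition requires the interval to exceed or to stay within the threshold depends on the entered action data.

```python
def interval_satisfies_time_condition(first_frame_idx, second_frame_idx, fps, threshold_s=8.0):
    """Check the time condition between the first and second image frames (step 502).

    Following the example above, the condition is that the interval exceeds
    `threshold_s`; the threshold value is taken from the entered action data.
    """
    interval_s = (second_frame_idx - first_frame_idx) / fps
    return interval_s > threshold_s

# e.g. at 30 fps, frames 60 and 360 are 10 s apart and satisfy an "over 8 s" condition,
# while frames 60 and 150 are only 3 s apart and do not.
```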
If the target action in the action data comprises a first gesture and a second gesture, the application further provides an example implementation mode for obtaining the evaluation result of the target action. For example: the obtaining, according to an evaluation rule of a posture included in the target action in the action data, an evaluation result of the target object executing the target action includes:
obtaining a first score of the first posture of the target object for executing the target action according to the image frame of the target object in the first posture and the evaluation rule of the first posture; and obtaining a second score of the second posture of the target object for executing the target action according to the image frame of the target object in the second posture and the evaluation rule of the second posture. And obtaining an evaluation result of the target object executing the target action according to the first score and the second score.
That is, the overall evaluation of the target action takes into account the scores of the postures that make up the action. For example, the first score and the second score are added, and the total is the evaluation result of the target action. In one specific example, a complete right straight punch action scores 1 point, consisting of 0.5 points for the punch-out posture and 0.5 points for the punch-retract posture.
Furthermore, if the maintaining time of a posture is taken into account and the evaluation rule sets a time score coefficient for the interval of the maintaining time, that coefficient can be multiplied by the posture's score to obtain the final score of the posture. The final scores of the different postures that make up an action are then added to obtain the evaluation result of the action.
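A small sketch of combining the per-posture scores (optionally weighted by their time score coefficients) into the action's evaluation result; the function and argument names are illustrative.

```python
def evaluate_action(pose_scores, time_coefficients=None):
    """Combine per-posture scores into the evaluation result of the whole action.

    `pose_scores` maps posture name -> score; `time_coefficients` (optional) maps
    posture name -> the time score coefficient for its maintaining time.
    """
    time_coefficients = time_coefficients or {}
    return sum(score * time_coefficients.get(name, 1.0)
               for name, score in pose_scores.items())

# e.g. evaluate_action({"punch_out": 0.5, "punch_retract": 0.5}) returns 1.0
```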
It should be noted that, for a given posture, the degree of completion or standardness may be insufficient in an earlier frame or frames but improve in a later frame or frames. If the posture were scored only from the earlier image frames, the completeness or standardness the target object actually achieves could be missed. Therefore, the application provides an implementation in which the posture score is overwritten across consecutive image frames, so that the score of the most standard instance of the posture is recorded.
Taking a first pose as an example, if the target object in the continuous multi-frame image frames in the target video is in the first pose, the obtaining a first score of the first pose of the target object executing the target action according to the image frame of the target object in the first pose and an evaluation rule of the first pose may specifically include:
obtaining an initial first score according to an earlier image frame among the consecutive image frames and the evaluation rule of the first posture; obtaining a new first score according to a later image frame among the consecutive image frames and the evaluation rule of the first posture; and, if the new first score is greater than the initial first score, overwriting the initial first score with the new first score.
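The score-overwriting behaviour across consecutive frames could be sketched as follows, reusing a per-frame scoring callable such as the hypothetical score_punch_out above.

```python
def best_pose_score(per_frame_measurements, scoring_rule):
    """Keep the highest score of a posture over the consecutive frames in which it holds.

    `per_frame_measurements` yields the quantity the rule scores (here, an angle);
    `scoring_rule` is a callable such as score_punch_out above. Names are illustrative.
    """
    best = 0.0
    for measurement in per_frame_measurements:
        new_score = scoring_rule(measurement)
        if new_score > best:   # overwrite the earlier (initial) score with the higher one
            best = new_score
    return best

# e.g. best_pose_score([162.0, 165.0, 171.0], score_punch_out) returns 0.5
```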
Fig. 6 presents a flow of an action evaluation method. As shown in fig. 6, action evaluation is performed synchronously while the video of the target object is being captured. While capture is ongoing, the key point information of the target object's body (the pixel coordinates of the points in the image) is obtained for each frame. If the current frame satisfies the judgment basis of a posture, it is determined whether this is a repeated posture. If it is a repeated posture and the posture score of the current frame is the highest among the repetitions so far, the previous highest score of that posture is overwritten and the maintaining time of the posture is accumulated. If it is not a repeated posture, the posture recognized in the current frame is being recognized in the target video for the first time. If no other posture currently exists with which it can be matched into a complete action, the score of the posture is recorded as its highest score and the maintaining time of the posture is recorded; if other postures do exist that can be matched with the posture of the current frame to form a complete action, the posture is matched into that action, the scores of all postures making up the action are recorded, and the timestamp is recorded, giving the score of the whole action. The timestamp can also be included in the action evaluation result; it may include the time at which a posture of the action was first completed and the time at which the action ended.
For example, suppose the punch-out posture is captured in frames 6, 7, 8, and 9 with the angle between 160° and 170°; the punch-out score at that point is 0.4. If in frame 10 the angle exceeds 170°, the punch-out score becomes 0.5 and the earlier score is overwritten, because up to this moment the punch-out posture has not yet been matched with a punch-retract posture into a complete right straight punch action. Similarly, if the user's punch-retract posture is captured in frame 100, the score is 0.2; capture continues, and if at frame 130 the user is still in the punch-retract posture and more than 1 second has elapsed since frame 100, the punch-retract score becomes 0.5. This continues until the punch-retract condition is no longer satisfied. The user's complete right straight punch action can then be recorded with a score of 1 (0.5 for punch-out + 0.5 for punch-retract).
Based on the action evaluation method provided by the foregoing embodiment, correspondingly, the application further provides an action evaluation device. The following description is given with reference to the examples.
The action evaluation device shown in fig. 7 includes: a mode starting module 701, a gesture recognition module 702, an action determining module 703 and an action evaluation module 704.
A mode starting module 701, configured to start an action evaluation mode; before the action evaluation mode is started, action data is recorded in advance, and the action data comprises: mapping relation between the action and the posture contained in the action, judgment basis of the posture and evaluation rule of the posture;
a gesture recognition module 702, configured to perform gesture recognition on a target object in a target video based on the determination criterion in the motion data;
an action determining module 703, configured to determine, according to a plurality of gestures continuously recognized from the target video and a mapping relationship in the action data, a target action matched with the plurality of gestures;
and an action evaluation module 704, configured to obtain an evaluation result of the target object executing the target action according to an evaluation rule of a posture included in the target action in the action data.
With this device, only the action data needs to be entered in advance, and action recognition and evaluation can then be performed on the target object from the target video. The scheme does not need a professional coach video against which to build a recognition program for a personalized action, so it broadens the application range of action scoring schemes and better meets the evaluation requirements of personalized actions.
Optionally, the judgment basis includes one condition or a combination of the following conditions: a distance condition of key point locations or an angle condition of key point locations; the gesture recognition module 702 includes:
the key point position identification unit is used for identifying and obtaining a plurality of key point positions of the target object from an image frame of the target video through a human body posture estimation technology based on a neural network architecture;
the information acquisition unit is used for acquiring distance information and angle information according to the plurality of key point positions of the target object;
and the posture determining unit is used for determining that the posture of the target object in the image frame is the target posture when the distance information and the angle information satisfy all conditions in the judgment basis of the target posture.
Optionally, the plurality of gestures comprises: a first pose recognized in a first image frame of the target video and a second pose recognized in a second image frame of the target video; wherein a recording time of the first image frame is earlier than a recording time of the second image frame; the first image frame is the first image frame in the target video in which the first posture is recognized, and the second image frame is the first image frame in which the second posture is recognized; the target action in the action data comprises the first gesture followed by the second gesture; the motion data further comprises time conditions of the first and second poses in the target action;
the action determining module 703 includes:
an interval acquisition unit configured to acquire an interval between a recording time of the first image frame and a recording time of the second image frame;
an action determining unit configured to determine, when the interval satisfies the time condition, that the first posture in the first image frame and the second posture in the second image frame together match the target action; and to determine, when the interval does not satisfy the time condition, that the second posture in the second image frame and the first posture in the first image frame fail to match together into a complete action.
Optionally, the target motion in the motion data comprises a first gesture and a second gesture;
the action evaluation module 704 includes:
the score acquisition unit is used for obtaining a first score of the first posture of the target action performed by the target object according to the image frame in which the target object is in the first posture and the evaluation rule of the first posture, and for obtaining a second score of the second posture of the target action performed by the target object according to the image frame in which the target object is in the second posture and the evaluation rule of the second posture;
and the evaluation result acquisition unit is used for acquiring an evaluation result of the target object executing the target action according to the first score and the second score.
Optionally, if the target object in the consecutive multi-frame image frames in the target video is in the first pose, the score obtaining unit includes:
the initial score acquisition subunit is used for obtaining an initial first score according to an earlier image frame among the consecutive image frames and the evaluation rule of the first posture;
the new score acquisition subunit is used for obtaining a new first score according to a later image frame among the consecutive image frames and the evaluation rule of the first posture;
and the score overwriting subunit is used for overwriting the initial first score with the new first score if the new first score is greater than the initial first score.
Another action evaluation device, shown in fig. 8, includes, in addition to the above modules, an entry module 705. The entry module 705 is configured to: enter the name of the action, the number of postures contained in the action and the names of the postures contained in the action; and enter the judgment basis and the evaluation rule of each posture contained in the action.
Optionally, the evaluation rule of the gesture includes one or more of the following combinations:
the evaluation rule of the maintenance time of the posture, the distance evaluation rule of the key point position of the posture, or the evaluation rule of the angle formed by the key point position of the posture.
Optionally, the evaluation rule of the maintenance time of the posture comprises one or more of the following combination:
a time evaluation rule for the key point locations constituting the posture continuously satisfying a target distance condition, or a time evaluation rule for the key point locations constituting the posture continuously satisfying a target angle condition.
Optionally, the evaluation rule of the maintenance time of the posture includes:
an interval of the maintaining time of the posture and a time score coefficient corresponding to the interval.
Optionally, the distance information is expressed as a multiple of the wrist-to-elbow distance, i.e., the wrist-to-elbow distance is used as the unit of measurement.
It should be noted that, in this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may refer to one another. In particular, since the apparatus embodiment is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to the descriptions of the method embodiment for the relevant points. The apparatus embodiments described above are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement the solution without inventive effort.
The above description is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that can readily be conceived by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. An action evaluation method, comprising:
starting an action evaluation mode; before the action evaluation mode is started, action data is entered in advance, and the action data comprises: a mapping relation between an action and the postures contained in the action, a judgment basis of each posture, and an evaluation rule of each posture;
performing posture recognition on a target object in a target video based on the judgment basis in the action data;
determining the target action matched by a plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data;
obtaining an evaluation result of the target object executing the target action according to the evaluation rule of the postures contained in the target action in the action data;
the plurality of postures include: a first posture recognized in a first image frame of the target video and a second posture recognized in a second image frame of the target video; wherein the recording time of the first image frame is earlier than the recording time of the second image frame; the first image frame is the image frame in which the first posture is recognized in the target video, and the second image frame is the image frame in which the second posture is recognized in the target video; the target action in the action data comprises the first posture followed by the second posture; the action data further comprises a time condition on the first posture and the second posture in the target action;
the determining the target action matched by the plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data comprises:
acquiring an interval between the recording time of the first image frame and the recording time of the second image frame;
when the interval satisfies the time condition, determining that the first posture in the first image frame and the second posture in the second image frame together match the target action;
when the interval does not satisfy the time condition, determining that the second posture in the second image frame and the first posture in the first image frame fail to together match a complete action;
the evaluation rule of the posture includes one or a combination of the following:
an evaluation rule on the maintenance time of the posture, a distance evaluation rule on the key point positions of the posture, or an evaluation rule on the angle formed by the key point positions of the posture;
the evaluation rule of the maintenance time of the posture includes one or a combination of the following:
a time evaluation rule on how long the key point positions constituting the posture continuously satisfy a target distance condition, or a time evaluation rule on how long the key point positions constituting the posture continuously satisfy a target angle condition;
the evaluation rule of the maintenance time of the posture includes:
intervals of the maintenance time of the posture and a time score coefficient corresponding to each interval.
2. The method of claim 1, wherein the judgment basis comprises one or a combination of the following conditions: a distance condition or an angle condition on key point positions; and the performing posture recognition on the target object in the target video based on the judgment basis in the action data comprises:
recognizing a plurality of key point positions of the target object from an image frame of the target video through a human body posture estimation technique based on a neural network architecture;
obtaining distance information and angle information according to the plurality of key point positions of the target object;
and when the distance information and the angle information satisfy all conditions in the judgment basis for determining a target posture, determining that the posture of the target object in the image frame is the target posture.
3. The method of claim 2, wherein the target action in the action data comprises a first posture and a second posture;
the obtaining of the evaluation result of the target action executed by the target object according to the evaluation rule of the posture contained in the target action in the action data includes:
obtaining a first score of the first posture of the target object for executing the target action according to the image frame of the target object in the first posture and an evaluation rule of the first posture;
obtaining a second score of the second posture of the target object for executing the target action according to the image frame of the target object in the second posture and the evaluation rule of the second posture;
and obtaining an evaluation result of the target object executing the target action according to the first score and the second score.
4. The method of claim 3, wherein if the target object is in the first posture in a plurality of consecutive image frames in the target video, the obtaining a first score of the first posture of the target object executing the target action according to the image frame of the target object in the first posture and the evaluation rule of the first posture comprises:
obtaining an initial first score according to an earlier image frame among the consecutive image frames and the evaluation rule of the first posture;
obtaining a new first score according to a later image frame among the consecutive image frames and the evaluation rule of the first posture;
and if the new first score is larger than the initial first score, overwriting the initial first score with the new first score.
5. The method of claim 1, wherein the entering of the action data comprises:
entering the name of the action, the number of postures contained in the action, and the names of the postures contained in the action;
and entering the judgment basis and the evaluation rule of each posture contained in the action.
6. The method of claim 2, wherein the distance information is expressed as a multiple of the distance from the wrist to the elbow.
7. An action evaluation device, comprising:
the mode starting module is used for starting an action evaluation mode; before the action evaluation mode is started, action data is entered in advance, and the action data comprises: a mapping relation between an action and the postures contained in the action, a judgment basis of each posture, and an evaluation rule of each posture;
the posture recognition module is used for performing posture recognition on a target object in a target video based on the judgment basis in the action data;
the action determining module is used for determining the target action matched by a plurality of postures according to the plurality of postures continuously recognized from the target video and the mapping relation in the action data;
the action evaluation module is used for obtaining an evaluation result of the target object executing the target action according to the evaluation rule of the postures contained in the target action in the action data;
the plurality of postures include: a first posture recognized in a first image frame of the target video and a second posture recognized in a second image frame of the target video; wherein the recording time of the first image frame is earlier than the recording time of the second image frame; the first image frame is the image frame in which the first posture is recognized in the target video, and the second image frame is the image frame in which the second posture is recognized in the target video; the target action in the action data comprises the first posture followed by the second posture; the action data further comprises a time condition on the first posture and the second posture in the target action;
the action determination module includes:
an interval acquisition unit configured to acquire an interval between a recording time of the first image frame and a recording time of the second image frame;
an action determining unit, configured to determine, when the interval satisfies the time condition, that the first posture in the first image frame and the second posture in the second image frame together match the target action; and to determine, when the interval does not satisfy the time condition, that the second posture in the second image frame and the first posture in the first image frame fail to together match a complete action;
the evaluation rule of the posture includes one or a combination of the following:
an evaluation rule on the maintenance time of the posture, a distance evaluation rule on the key point positions of the posture, or an evaluation rule on the angle formed by the key point positions of the posture;
the evaluation rule of the maintenance time of the posture includes one or a combination of the following:
a time evaluation rule on how long the key point positions constituting the posture continuously satisfy a target distance condition, or a time evaluation rule on how long the key point positions constituting the posture continuously satisfy a target angle condition;
the evaluation rule of the maintenance time of the posture includes:
intervals of the maintenance time of the posture and a time score coefficient corresponding to each interval.
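To tie the claims together, here is a minimal, non-authoritative Python sketch of the matching step recited in claim 1: postures are recognised frame by frame, and a first and a second posture only match the target action when the interval between their recording times satisfies the action's time condition. All function and field names are assumptions made for illustration.

```python
from typing import List, Optional, Tuple

# (posture name, recording time in seconds) for each posture recognised in the video
Recognition = Tuple[str, float]


def match_target_action(recognitions: List[Recognition],
                        first_posture: str,
                        second_posture: str,
                        max_interval_s: float) -> Optional[Tuple[float, float]]:
    """Return the recording times of a first/second posture pair that together
    match the target action, or None when the time condition is not satisfied."""
    first_time: Optional[float] = None
    for name, t in recognitions:
        if name == first_posture:
            first_time = t                      # remember the first image frame
        elif name == second_posture and first_time is not None:
            if t - first_time <= max_interval_s:
                return first_time, t            # interval satisfies the time condition
            first_time = None                   # interval too long: not a complete action
    return None


recs = [("standing", 1.0), ("bottom", 2.2), ("standing", 5.0)]
print(match_target_action(recs, "standing", "bottom", max_interval_s=2.0))  # (1.0, 2.2)
```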
CN202211055338.8A 2022-08-31 2022-08-31 Action evaluation method and device Active CN115131879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211055338.8A CN115131879B (en) 2022-08-31 2022-08-31 Action evaluation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211055338.8A CN115131879B (en) 2022-08-31 2022-08-31 Action evaluation method and device

Publications (2)

Publication Number Publication Date
CN115131879A CN115131879A (en) 2022-09-30
CN115131879B (en) 2023-01-06

Family

ID=83387521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211055338.8A Active CN115131879B (en) 2022-08-31 2022-08-31 Action evaluation method and device

Country Status (1)

Country Link
CN (1) CN115131879B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024085095A1 (en) * 2022-10-21 2024-04-25 ソニーグループ株式会社 Information processing method, information processing device, and computer-readable non-transitory storage medium
CN117078976B (en) * 2023-10-16 2024-01-30 华南师范大学 Action scoring method, action scoring device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
CN111754571A (en) * 2019-03-28 2020-10-09 北京沃东天骏信息技术有限公司 Gesture recognition method and device and storage medium thereof
CN112784786A (en) * 2021-01-29 2021-05-11 联想(北京)有限公司 Human body posture recognition method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111368B (en) * 2019-05-07 2023-04-07 山东广域科技有限责任公司 Human body posture recognition-based similar moving target detection and tracking method
CN112597933B (en) * 2020-12-29 2023-10-20 咪咕互动娱乐有限公司 Action scoring method, device and readable storage medium
CN113392746A (en) * 2021-06-04 2021-09-14 北京格灵深瞳信息技术股份有限公司 Action standard mining method and device, electronic equipment and computer storage medium
CN114639168B (en) * 2022-03-25 2023-06-13 中国人民解放军国防科技大学 Method and system for recognizing running gesture

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
CN111754571A (en) * 2019-03-28 2020-10-09 北京沃东天骏信息技术有限公司 Gesture recognition method and device and storage medium thereof
CN112784786A (en) * 2021-01-29 2021-05-11 联想(北京)有限公司 Human body posture recognition method and device

Also Published As

Publication number Publication date
CN115131879A (en) 2022-09-30

Similar Documents

Publication Publication Date Title
CN115131879B (en) Action evaluation method and device
CN106650687B (en) Posture correction method based on depth information and skeleton information
WO2021051579A1 (en) Body pose recognition method, system, and apparatus, and storage medium
CN112184705B (en) Human body acupuncture point identification, positioning and application system based on computer vision technology
CN107909060A (en) Gymnasium body-building action identification method and device based on deep learning
CN110428486B (en) Virtual interaction fitness method, electronic equipment and storage medium
US20100208038A1 (en) Method and system for gesture recognition
CN110688929B (en) Human skeleton joint point positioning method and device
CN110298220B (en) Action video live broadcast method, system, electronic equipment and storage medium
KR102594938B1 (en) Apparatus and method for comparing and correcting sports posture using neural network
CN107018330A (en) A kind of guidance method and device of taking pictures in real time
CN109274883B (en) Posture correction method, device, terminal and storage medium
CN110298218B (en) Interactive fitness device and interactive fitness system
CN110633004B (en) Interaction method, device and system based on human body posture estimation
CN109308437B (en) Motion recognition error correction method, electronic device, and storage medium
CN113255522B (en) Personalized motion attitude estimation and analysis method and system based on time consistency
KR102320960B1 (en) Personalized home training behavior guidance and correction system
CN113705540A (en) Method and system for recognizing and counting non-instrument training actions
Limcharoen et al. View-independent gait recognition using joint replacement coordinates (jrcs) and convolutional neural network
CN112422946A (en) Intelligent yoga action guidance system based on 3D reconstruction
CN111383735A (en) Unmanned body-building analysis method based on artificial intelligence
KR102356685B1 (en) Home training providing system based on online group and method thereof
CN111967407B (en) Action evaluation method, electronic device, and computer-readable storage medium
CN113409651B (en) Live broadcast body building method, system, electronic equipment and storage medium
CN115331314A (en) Exercise effect evaluation method and system based on APP screening function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant