CN111967407A - Action evaluation method, electronic device, and computer-readable storage medium - Google Patents

Action evaluation method, electronic device, and computer-readable storage medium Download PDF

Info

Publication number
CN111967407A
CN111967407A
Authority
CN
China
Prior art keywords
action
skeleton
user
teaching
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010843303.5A
Other languages
Chinese (zh)
Other versions
CN111967407B (en)
Inventor
盛志胤
潘伟
沐俊星
袁峰
魏金文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Interactive Entertainment Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202010843303.5A priority Critical patent/CN111967407B/en
Publication of CN111967407A publication Critical patent/CN111967407A/en
Application granted granted Critical
Publication of CN111967407B publication Critical patent/CN111967407B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/02Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Abstract

The embodiments of the present invention relate to the technical field of the Internet and disclose an action evaluation method, an electronic device, and a computer-readable storage medium. The action evaluation method includes: acquiring first skeleton feature information of a user in a current video frame, where the current video frame contains an action made by the user under the guidance of a preset teaching video; acquiring second skeleton feature information corresponding to a teaching action in the teaching video; and evaluating the action made by the user according to the first skeleton feature information and the second skeleton feature information. This improves the real-time performance of the evaluation and thereby improves the user's learning experience.

Description

Action evaluation method, electronic device, and computer-readable storage medium
Technical Field
The embodiments of the present invention relate to the technical field of the Internet, and in particular to an action evaluation method, an electronic device, and a computer-readable storage medium.
Background
At present, Internet fitness apps competing in the market have introduced short fitness video tutorials, and for this content, AI technology based on video understanding is being introduced to provide real-time intelligent guidance for user workouts, achieving unmanned fitness instruction and promoting fitness for all. A fitness coach can teach users through fitness videos. When a user exercises under the guidance of such a video and the user's fitness actions need to be evaluated, a fixed number of consecutive frames, for example 5 frames, is acquired each time; the 5 consecutive frames of the user are then compared with 5 consecutive frames of the coach to determine whether an action similar to the coach's action appears in the user's frames, and the user's fitness action is scored accordingly.
However, the inventors found that the related art has at least the following problem: because consecutive multi-frame images must be acquired each time and the evaluation is performed on the basis of those multiple frames, evaluation delay is inevitable, which affects the user's learning experience and results in poor fitness guidance.
Disclosure of Invention
An object of embodiments of the present invention is to provide an action evaluation method, an electronic device, and a computer-readable storage medium, which can improve the real-time performance of evaluation, thereby improving the learning experience of a user.
In order to solve the above-described technical problem, an embodiment of the present invention provides an action evaluation method including: acquiring first skeleton characteristic information of a user in a current video frame; the current video frame comprises actions of the user under the guidance of a preset teaching video; acquiring second skeleton characteristic information corresponding to a teaching action in the teaching video; and evaluating the action made by the user according to the first skeleton characteristic information and the second skeleton characteristic information.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described action evaluation method.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program that, when executed by a processor, implements the above-described action evaluation method.
Compared with the prior art, the embodiments of the present invention acquire first skeleton feature information of a user in a current video frame, where the current video frame contains an action made by the user under the guidance of a preset teaching video; acquire second skeleton feature information corresponding to a teaching action in the teaching video; and evaluate the action made by the user according to the first skeleton feature information and the second skeleton feature information. That is, by combining the first skeleton feature information of the user in the current video frame with the second skeleton feature information corresponding to the teaching action in the teaching video, the action made by the user is tracked and identified in real time and evaluated, which avoids the evaluation delay caused in the prior art by acquiring consecutive multi-frame images and evaluating on the basis of them. The embodiments of the present invention therefore help improve the real-time performance of the evaluation and thereby improve the user's learning experience.
In addition, acquiring the second skeleton feature information corresponding to the teaching action in the teaching video includes: determining position information of each skeleton key point of the teacher in the teaching video; determining action angles corresponding to the teaching action according to the position information of each skeleton key point of the teacher; and acquiring the second skeleton feature information corresponding to the teaching action according to the action angles corresponding to the teaching action. It is considered that the position information of the teacher's skeleton key points may change as the teacher performs different teaching actions, and that when people of different heights and body types perform the same action, the position information of their skeleton key points may differ while the angles between corresponding skeleton key points remain similar. Therefore, in this embodiment, the angles between different skeleton key points can accurately measure the action angles corresponding to the teaching action, and the second skeleton feature information obtained from those action angles accurately characterizes the teaching action.
In addition, acquiring the first skeleton feature information of the user in the current video frame includes: inputting the current video frame into a pre-trained TensorFlow Lite neural network model for generating a skeleton point diagram, and outputting the skeleton point diagram of the user in the current video frame; and determining the first skeleton feature information of the user in the current video frame according to the skeleton point diagram of the user. Because the TensorFlow Lite neural network model is a lightweight neural network model, it is suited to the processing capability of terminals such as mobile phones, which helps the evaluation method of the embodiments of the present invention to be executed directly on the terminal side, thereby reducing economic cost.
In addition, determining the first skeleton feature information of the user in the current video frame according to the skeleton point diagram of the user includes: filtering the skeleton point diagram with a preset filtering algorithm to obtain heatmaps corresponding to the skeleton key points of the user; and determining the first skeleton feature information of the user in the current video frame according to the heatmaps corresponding to the skeleton key points. Filtering the skeleton point diagram with the preset filtering algorithm eliminates noise, making the determined first skeleton feature information more stable and accurate.
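The preset filtering algorithm is not specified by the embodiments; as one hedged possibility, a Gaussian smoothing of each key-point heatmap followed by a peak lookup could look like the following sketch (the function names and parameters here are illustrative, not from the patent):

```python
import numpy as np

def smooth_heatmap(heatmap, kernel_size=5, sigma=1.0):
    """Suppress noise in a key-point heatmap with a separable Gaussian blur."""
    half = kernel_size // 2
    x = np.arange(-half, half + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # Separable convolution: filter along rows, then along columns.
    padded = np.pad(heatmap, half, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def heatmap_to_keypoint(heatmap):
    """Take the position of the highest response as the key-point position."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(x), int(y)
```

A noisy isolated spike is flattened by the blur, while a genuine key-point response (supported by neighboring pixels) survives, which is the stabilizing effect the embodiment describes.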
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
FIG. 1 is a flowchart of an action evaluation method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram showing the distribution of skeleton key points of a human body according to the first embodiment of the present invention;
FIG. 3 is a schematic diagram of action angles corresponding to an action according to the first embodiment of the present invention;
FIG. 4 is a flowchart of an action evaluation method according to a second embodiment of the present invention;
FIG. 5 is a flowchart of an action evaluation method according to a third embodiment of the present invention;
FIG. 6 is a schematic diagram of 3 key actions according to the third embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments. The division into embodiments below is for convenience of description and does not limit the specific implementation of the present invention; the embodiments may be combined with and refer to one another where there is no contradiction.
A first embodiment of the present invention relates to an action evaluation method applied to an electronic device; the electronic device may be a terminal, such as a mobile phone or a tablet computer, or a server. Application scenarios of this embodiment may include: a user makes corresponding actions under the guidance of a teaching video, and the electronic device evaluates those actions. For example: a user performs fitness actions under the guidance of a fitness teaching video, and the electronic device evaluates them; an athlete performs training actions under the guidance of a pre-competition training video, and the electronic device evaluates them; or a doctor performs surgical actions under the guidance of a surgical teaching video, and the electronic device evaluates them.
Implementation details of the action evaluation method of this embodiment are described below. The following details are provided only for ease of understanding and are not essential to implementing this embodiment.
As shown in FIG. 1, the action evaluation method of this embodiment may include the following steps.
step 101: and acquiring first skeleton characteristic information of a user in the current video frame.
The current video frame contains the action made by the user under the guidance of a preset teaching video. The current video frame may be a video frame of the user captured during learning by the terminal playing the teaching video. For example, when the user turns on the camera to exercise, the electronic device can capture the user's fitness action in real time to obtain the current video frame of the user exercising.
In one example, the electronic device serving as the execution subject of this embodiment may be a terminal, for example the terminal playing the teaching video; the terminal can then obtain the first skeleton feature information of the user in the current video frame that it captures.
In another example, the execution subject may be a server: the terminal sends the captured current video frame to the server, and the server obtains the first skeleton feature information of the user in that frame.
In one example, the first skeleton feature information of the user in the current video frame may be obtained as follows. First, position information of each skeleton key point of the user in the current video frame is obtained; the position information of a skeleton key point can be understood as its two-dimensional coordinates in the current video frame. In a specific implementation, the distribution of skeleton key points of a human body may be as shown in FIG. 2, where each skeleton key point has its own number, for example from 0 to 15 in FIG. 2. The number and name of each skeleton key point may be as shown in Table 1:
TABLE 1
Number  Name            Number  Name                Number  Name
0       Right ankle     6       Pelvis              12      Right shoulder
1       Right knee      7       Thorax              13      Left shoulder
2       Right hip       8       Cervical vertebrae  14      Left elbow
3       Left hip        9       Top of head         15      Left wrist
4       Left knee       10      Right wrist
5       Left ankle      11      Right elbow
Then, the action angles corresponding to the action made by the user are determined according to the position information of the user's skeleton key points. For example, after the position information of each skeleton key point is obtained, every three skeleton key points are taken as a group to generate a corresponding action angle. It can be understood that connecting every two of three skeleton key points yields three action angles, so grouping the 16 skeleton key points in threes can generate a number of corresponding action angles. For ease of understanding, referring to FIG. 3, the teaching action performed by the teacher is shown in the middle of FIG. 3 and the action made by the user is shown at the upper right, where point A represents the skeleton key point named right knee, point B the left hip, point C the left knee, and point D the left ankle. The angle ∠ABD formed by points A, B, and D and the angle ∠ABC formed by points A, B, and C can be understood as action angles corresponding to the action made by the user. In one example, the position information of each skeleton key point may be a position coordinate, and the corresponding action angle may be generated from the position coordinates. Referring to FIG. 3, assuming the coordinates of point A are (x1, y1), of point B are (x2, y2), and of point C are (x3, y3), ∠ABC can be calculated as follows.
The following vectors are calculated from the coordinates of the three points A, B, and C:
vector AB = (x2-x1, y2-y1), vector BC = (x3-x2, y3-y2), vector AC = (x3-x1, y3-y1)
The size of ∠ABC can then be obtained by solving the simultaneous equations given by the dot product of two vectors:
vector AB · vector BC = |AB| * |BC| * cos∠ABC;
vector AB · vector BC = (x2-x1)(x3-x2) + (y2-y1)(y3-y2);
The angles between other skeleton key points can be calculated in the same way as ∠ABC, and the calculations are not repeated here.
It should be noted that two action angles are shown in FIG. 3 for convenience of description; in a specific implementation, the number of action angles corresponding to the action made by the user is not limited to two. In a specific implementation, the position information of each skeleton key point can also be stored as a file and displayed in the current video frame, retaining the morphological data to form data assets.
Then, the first skeleton feature information of the user in the current video frame is acquired according to the several action angles corresponding to the action made by the user. For example, from these action angles, those formed among the skeleton key points that best represent the characteristics of the action may be selected to form a first action angle sequence, and the first action angle sequence may be used as the first skeleton feature information.
It should be noted that the distribution of human skeleton key points shown in FIG. 2 is only an example in this embodiment, and the present invention is not limited thereto.
In one example, the first skeleton feature information of the user in the current video frame may also be obtained as follows: inputting the current video frame into a pre-trained TensorFlow Lite neural network model for generating a skeleton point diagram, outputting the skeleton point diagram of the user in the current video frame, and determining the first skeleton feature information according to that skeleton point diagram. The skeleton point diagram marks the position information of each skeleton key point of the user, so the first skeleton feature information can be determined from that position information. For example, the position information of the skeleton key points that best represent the characteristics of the action made by the user may be selected from the skeleton point diagram as the first skeleton feature information. In a specific implementation, the original size of the current video frame (for example, 640 x 480) may be adjusted to an input size suitable for the TensorFlow Lite neural network model, and the self-trained model may then be loaded through the TensorFlow Lite engine to output the skeleton point diagram of the user in the current video frame.
In one example, the input size of the TensorFlow Lite neural network model may be 1 x 224 x 224 x 3, where 1 is the number of video frames, 224 x 224 is the height and width of the video frame, and 3 is the number of RGB channels. The output size may be 1 x 112 x 112 x 14. The input and output sizes here are only examples; the present invention is not limited thereto, and their specific values may be set according to actual needs.
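A hedged sketch of how a 1 x 112 x 112 x 14 output tensor might be decoded into 14 key-point positions in the original frame (the decoding and scaling scheme here is an assumption; the patent does not specify one):

```python
import numpy as np

def decode_keypoints(output, frame_w, frame_h):
    """Map a 1 x 112 x 112 x 14 heatmap tensor to 14 (x, y) points in frame coordinates.

    In practice `output` would come from a TensorFlow Lite interpreter, e.g.:
        interpreter = tf.lite.Interpreter(model_path="pose.tflite")
        interpreter.allocate_tensors()
        interpreter.set_tensor(input_index, frame_1x224x224x3)
        interpreter.invoke()
        output = interpreter.get_tensor(output_index)
    """
    heatmaps = output[0]  # shape: 112 x 112 x 14, one heatmap per key point
    points = []
    for k in range(heatmaps.shape[-1]):
        # Peak of each heatmap is taken as that key point's position.
        y, x = np.unravel_index(np.argmax(heatmaps[:, :, k]), heatmaps.shape[:2])
        # Scale from heatmap resolution back to the original frame resolution.
        points.append((x * frame_w / 112, y * frame_h / 112))
    return points
```

The decoded positions can then feed the action-angle calculation described above.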
The TensorFlow Lite neural network model is a lightweight neural network model suited to the processing capability of terminals such as mobile phones, which helps the evaluation method of the embodiments of the present invention be executed directly on the terminal side, reducing economic cost; it also helps extract skeleton feature information of a moving human body more accurately and quickly.
Step 102: and acquiring second skeleton characteristic information corresponding to the teaching action in the teaching video.
In one example, the second skeleton feature information corresponding to the teaching actions in the teaching video may be pre-stored in the electronic device, so that the electronic device can obtain it directly. That is, the electronic device has previously acquired and stored the second skeleton feature information corresponding to the teaching actions. In a specific implementation, the teaching actions and their corresponding second skeleton feature information can be solidified in JSON format and stored in the electronic device for real-time reading. Since a complete teaching video usually includes a series of teaching actions, the electronic device can pre-store the second skeleton feature information corresponding to each teaching action in the video.
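A minimal sketch of solidifying and reloading such data as JSON (the file name, field names, and angle values are illustrative assumptions; the patent only specifies that the JSON format is used):

```python
import json

# Hypothetical layout: each teaching action keeps its action angle sequence
# (the second skeleton feature information) under an action identifier.
teaching_actions = {
    "video_id": "lesson_01",
    "actions": [
        {"action_id": 1, "angle_sequence": [95.0, 172.4, 88.1]},
        {"action_id": 2, "angle_sequence": [101.3, 165.0, 92.7]},
    ],
}

# Solidify to a JSON file for later real-time reading on the terminal.
with open("lesson_01_features.json", "w", encoding="utf-8") as f:
    json.dump(teaching_actions, f, ensure_ascii=False, indent=2)

# At evaluation time, load the pre-stored second skeleton feature information.
with open("lesson_01_features.json", encoding="utf-8") as f:
    loaded = json.load(f)
```

Pre-computing and storing these sequences once means the per-frame evaluation only needs a file read rather than re-running pose estimation on the teaching video.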
In another example, if the electronic device does not have the second skeleton feature information pre-stored therein, the second skeleton feature information corresponding to the teaching action in the teaching video can be obtained in real time.
In one example, the manner of acquiring the second skeleton feature information corresponding to the teaching action in the teaching video may be as follows:
firstly, determining the position information of each skeleton key point of a teacher in a teaching video, wherein the position information of the skeleton key points can be understood as two-dimensional coordinates of the skeleton key points in the teaching video. In one example, a teaching video may be input into a previously trained sensor Flow Lite neural network model for generating a skeleton point diagram frame by frame, the skeleton point diagram of a learner in the teaching video may be output, and then, position information of each skeleton key point of the learner may be determined according to the skeleton point diagram of the learner.
Then, the action angles corresponding to the teaching actions are determined according to the position information of each skeleton key point of the teacher. For example, after the position information is obtained, every three skeleton key points are taken as a group to generate a corresponding action angle; connecting every two of three skeleton key points yields three action angles, so grouping the 14 skeleton key points in threes can generate a number of corresponding action angles. The teacher may be a fitness coach in a fitness teaching video, a trainer in an athlete's pre-competition training video, a supervising physician in a surgical teaching video, and so on; the specific implementation is not limited thereto.
Then, the second skeleton feature information corresponding to the teaching action is acquired according to the several action angles corresponding to the teaching action. For example, the action angles formed among the skeleton key points that best represent the characteristics of the teaching action may be selected to form a second action angle sequence, which may be used as the second skeleton feature information.
In this way, the electronic device can acquire the second skeleton feature information corresponding to a series of teaching actions in the teaching video, and the series of teaching actions and their corresponding second skeleton feature information can then be stored as needed, so that when other users start learning, the electronic device can directly obtain the second skeleton feature information corresponding to the teaching actions in the teaching video.
It is considered that the position information of the teacher's skeleton key points may change as the teacher performs different teaching actions, and that when people of different heights and body types perform the same action, the position information of their skeleton key points may differ while the angles between corresponding skeleton key points remain similar. Therefore, in this embodiment, the angles between different skeleton key points can accurately measure the action angles corresponding to the teaching action, and the second skeleton feature information obtained from those action angles accurately characterizes the teaching action.
Step 103: and evaluating the action made by the user according to the first skeleton characteristic information and the second skeleton characteristic information.
Evaluating the action made by the user can be understood as scoring the action made under the guidance of the teaching video: the closer the user's action is to the teaching action, the higher the score; the further the user's action deviates from the teaching action, the lower the score.
In one example, the similarity between the action made by the user and the teaching action can be determined according to the first skeleton feature information and the second skeleton feature information, and the user's action can then be evaluated according to the similarity: the higher the similarity, the better the evaluation; the lower the similarity, the worse the evaluation. The quality of the evaluation can be expressed as a score, i.e., the better the evaluation, the higher the score. In a specific implementation, the score can be displayed on the picture of the teaching video for the user to view.
In one example, the first skeleton feature information includes a first action angle sequence corresponding to the action made by the user, and the second skeleton feature information includes a second action angle sequence corresponding to the teaching action. The similarity between the action made by the user and the teaching action may be determined as follows: a Euclidean distance is calculated from the first action angle sequence and the second action angle sequence, and the similarity is determined according to the Euclidean distance.
In a specific implementation, the first action angle sequence can be understood as an angle vector formed by the action angles among the user's skeleton key points when the user makes an action, or as an angle vector formed by those action angles, selected from all such angles, that meet a first preset requirement; the first preset requirement may be set according to actual needs and is not specifically limited in this embodiment. Similarly, the second action angle sequence can be understood as an angle vector formed by the action angles among the teacher's skeleton key points when the teacher makes a teaching action, or as an angle vector formed by those action angles that meet a second preset requirement, which may likewise be set according to actual needs. The first and second action angle sequences can contain the same number of action angles, making it convenient to calculate the Euclidean distance between the two sequences. The larger the Euclidean distance, the larger the difference between the two angle sequences, i.e., the larger the difference between the user's action and the coach's action, and the smaller the similarity; the smaller the Euclidean distance, the smaller the difference between the two actions, and the greater the similarity.
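A small sketch of the Euclidean-distance comparison between the two angle sequences (the mapping from distance to a 0-100 score is an illustrative assumption; the patent does not specify a scoring formula):

```python
import math

def euclidean_distance(first_angles, second_angles):
    """Euclidean distance between the user's and the teacher's action angle sequences."""
    assert len(first_angles) == len(second_angles)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first_angles, second_angles)))

def score_action(first_angles, second_angles, scale=50.0):
    """Map distance to a 0-100 score: identical sequences score 100, larger distances lower."""
    d = euclidean_distance(first_angles, second_angles)
    return 100.0 * math.exp(-d / scale)
```

The exponential mapping is just one monotone choice; any decreasing function of the distance preserves the relation "smaller distance, greater similarity, higher score" described above.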
In one example, the picture of the teaching video played by the terminal may be as shown in FIG. 3: it may include the teacher's action picture and may also include the user's action picture, and the sizes of the two may be adjusted according to actual needs, which is not specifically limited in this embodiment. In a specific implementation, the picture of the teaching video can also display data such as the user's score, number of training repetitions, and training duration.
In one example, guidance suggestions can also be fed back to the user according to the evaluation result of the user's action, which helps the user learn better. Meanwhile, the user's action pictures can be stored in real time so that they can conveniently be reviewed at any time later.
The above examples in the present embodiment are only for convenience of understanding, and do not limit the technical aspects of the present invention.
Compared with the prior art, this embodiment evaluates the action made by the user under the guidance of the teaching video by combining the first skeleton feature information of the user in the current video frame with the second skeleton feature information corresponding to the teaching action in the teaching video, thereby avoiding the evaluation delay caused in the prior art by having to acquire multiple consecutive frames and evaluate based on them. The embodiment of the invention is therefore beneficial to improving the real-time performance of the evaluation and thus the learning experience of the user.
In addition, in the prior art the evaluation delay arises not only from acquiring multiple consecutive frames for each analysis, but also from the terminal having to send the captured user fitness video to a cloud server, where the user's fitness action is recognized by AI technology to obtain an evaluation result that is then returned to the terminal; this round trip also introduces delay. Moreover, uploading fitness videos to a cloud server for AI recognition and evaluation introduces the cloud server itself, and therefore a high economic cost. When the evaluation method of this embodiment is applied to the terminal, action evaluation can be completed directly on the terminal side, without uploading to a cloud server or waiting for the cloud server to feed back an evaluation; this further improves the real-time performance of the evaluation, and since no cloud server is introduced, the economic cost is low. It should be noted that, in this embodiment, if the skeleton point diagram of the user is output by a TensorFlow Lite neural network model and the first skeleton feature information is then obtained from the skeleton point diagram, the lightweight nature of the TensorFlow Lite neural network model can also be exploited, so that the method is better adapted to the processing performance of terminals such as mobile phones. This further enables the evaluation method of this embodiment to be executed directly on the terminal side and reduces the economic cost.
The embodiment of the invention combines an adaptive neural network model with a low-cost terminal to realize more accurate teaching action guidance, thereby assisting in promoting nationwide fitness. The neural network model assists in motion guidance, improving fitness efficiency and unifying the standard of fitness guidance; by comparing the user's action with the teaching action in real time, the accuracy and reliability of the guidance suggestions are improved; and by storing the user's action video in real time, the posture data is retained and forms a data asset that can conveniently be reviewed at any time later.
A second embodiment of the present invention relates to an action evaluation method. The implementation details of the action evaluation method according to this embodiment are described below; the following is provided only for ease of understanding and is not essential to implementing this embodiment.
As shown in fig. 4, the flow of the action evaluation method according to this embodiment may include:
step 401: and inputting the current video frame into a pre-trained TensorFlow Lite neural network model for generating a skeleton point diagram, and outputting the skeleton point diagram of the user in the current video frame.
The TensorFlow Lite neural network model can be obtained by training in advance on a number of action pictures marked with human skeleton key points. The skeleton point diagram of the user may be marked with position information of each skeleton key point of the user, and the position information may be two-dimensional coordinates.
In a specific implementation, different TensorFlow Lite neural network models can be trained for different application scenarios, so as to output the skeleton point diagram of the user in the current video frame in a targeted manner and improve its accuracy. In one example, the current video frame is a video frame shot in a scenario where a user is exercising, and the action pictures marked with human skeleton key points may be fitness action pictures. In another example, the current video frame is a video frame shot in a scenario where an athlete trains before a race, and the action pictures marked with human skeleton key points may be pictures of the athlete training. In yet another example, the current video frame is a video frame shot in a scenario of providing surgical guidance to a doctor, and the action pictures marked with human skeleton key points may be pictures of the doctor performing surgery.
Step 402: and filtering the skeleton point diagram by adopting a preset filtering algorithm to obtain a thermodynamic diagram corresponding to each skeleton key point of the user.
The preset filtering algorithm may be set according to actual needs; for example, it may be any one of the following: mean filtering, median filtering, or Gaussian filtering. In one example, considering the processing performance of the terminal side, a mean filtering algorithm can be selected: mean filtering is a lightweight algorithm well adapted to the processing performance of terminals such as mobile phones, is simple and fast to execute, and offers high real-time performance.
In a specific implementation, the step of filtering the skeleton point diagram by a preset filtering algorithm can be understood as follows: traverse each skeleton key point in the skeleton point diagram, eliminate noise information in the skeleton point diagram by the preset filtering algorithm, determine the probability (also called the confidence) of the traversed skeleton key point being at each position in the skeleton point diagram, and obtain the thermodynamic diagram corresponding to the traversed skeleton key point according to those probabilities. For example, if 14 skeleton key points are marked in the skeleton point diagram, 14 thermodynamic diagrams respectively corresponding to the 14 skeleton key points can be obtained. Each thermodynamic diagram may be marked with the corrected position of its skeleton key point; for example, the thermodynamic diagram corresponding to skeleton key point 1 is marked with the corrected position of skeleton key point 1. The corrected position can be understood as the position of a skeleton key point determined after the skeleton point diagram has been filtered by the preset filtering algorithm.
Step 403: and determining first skeleton characteristic information of the user in the current video frame according to the thermodynamic diagrams corresponding to all skeleton key points of the user.
Specifically, a target thermodynamic diagram can be obtained from the thermodynamic diagrams corresponding to the skeleton key points of the user; the corrected positions of all skeleton key points are gathered on the target thermodynamic diagram. For example, if the thermodynamic diagrams corresponding to the user's skeleton key points include 14 thermodynamic diagrams corresponding to 14 skeleton key points, the 14 skeleton key points in the 14 thermodynamic diagrams can be drawn in the same diagram, which is the above-mentioned target thermodynamic diagram. Then, the first skeleton feature information of the user in the current video frame is determined according to the target thermodynamic diagram.
In one example, the first skeleton feature information of the user in the current video frame may be determined from the target thermodynamic diagram as follows: determine the position information of each skeleton key point of the user according to the target thermodynamic diagram, determine the action angles corresponding to the action made by the user according to that position information, and determine the first skeleton feature information of the user in the current video frame according to those action angles. For the manner of determining the action angles from the position information of the skeleton key points, and of determining the first skeleton feature information from the action angles, reference may be made to the relevant description in the first embodiment.
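The computation of an action angle from the position information of three skeleton key points can be sketched as follows. The embodiment does not prescribe a formula; the dot-product/arccosine approach, the function name, and the sample coordinates are assumptions for illustration:

```python
import math

def action_angle(a, b, c):
    """Action angle (in degrees) at vertex b, formed by skeleton key points
    a, b, c, each given as an (x, y) position from the target thermodynamic diagram."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_angle = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp against floating-point drift before taking the arccosine.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

# Hypothetical key points: shoulder (0, 0), elbow (1, 0), wrist (1, 1).
print(action_angle((0, 0), (1, 0), (1, 1)))  # → 90.0
```

Computing one such angle per relevant joint triple, and collecting them into a vector, yields an action angle sequence of the kind compared in this embodiment.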
Step 404: and acquiring second skeleton characteristic information corresponding to the teaching action in the teaching video.
Step 405: and evaluating the action made by the user according to the first skeleton characteristic information and the second skeleton characteristic information.
Steps 404 to 405 are substantially the same as steps 102 to 103 in the first embodiment, and are not repeated herein to avoid repetition.
In addition, in step 404, the second skeleton feature information of the teacher in the teaching video may be determined in a manner analogous to the determination of the first skeleton feature information of the user in the current video frame in this embodiment. For example, the teaching video is input frame by frame into a pre-trained TensorFlow Lite neural network model for generating skeleton point diagrams, and the skeleton point diagram of the teacher in the teaching video is output frame by frame. Then, the teacher's skeleton point diagram is filtered by the preset filtering algorithm to obtain the thermodynamic diagram corresponding to each skeleton key point of the teacher. Finally, the second skeleton feature information of the teacher in the teaching video, i.e., the second skeleton feature information corresponding to the teaching action in the teaching video, is determined frame by frame according to the thermodynamic diagrams corresponding to the teacher's skeleton key points.
The above examples in the present embodiment are only for convenience of understanding, and do not limit the technical aspects of the present invention.
Compared with the prior art, this embodiment filters the skeleton point diagram with a preset filtering algorithm to eliminate noise information, so that the determined first skeleton feature information and second skeleton feature information are more stable and accurate, further improving the stability and accuracy of the evaluation of the user's action.
A third embodiment of the present invention relates to an action evaluation method. The implementation details of the action evaluation method according to this embodiment are described below; the following is provided only for ease of understanding and is not essential to implementing this embodiment.
As shown in fig. 5, the flow of the action evaluation method according to this embodiment may include:
step 501: and acquiring first skeleton characteristic information of a user in the current video frame.
Step 502: and determining the position information of each skeleton key point of the teacher in the teaching video.
The implementation manners of steps 501 to 502 may refer to the related descriptions in the first embodiment or the second embodiment, and are not repeated herein.
Step 503: and determining action angles corresponding to a plurality of key actions in the teaching action according to the position information of each skeleton key point of the teacher.
The teaching action in this embodiment includes a plurality of preset key actions; for example, a plurality of key actions may be selected according to the characteristics of each teaching action. As shown in fig. 6, a teaching action may include 3 key actions and may be described as: standing, jumping, standing. The action angles corresponding to the plurality of key actions in the teaching action can be determined according to the position information of each skeleton key point when the teacher performs the different key actions. In a specific implementation, each key action may correspond to a plurality of action angles.
Step 504: and determining a plurality of angle difference values between any two adjacent key actions according to action angles corresponding to the key actions respectively.
For example, suppose key action 1 corresponds to the action angles ∠ABD = x1 and ∠ABC = y1, and key action 2 corresponds to ∠ABD = x2 and ∠ABC = y2. The plurality of angle difference values between key action 1 and key action 2 then include the difference x1 − x2 of ∠ABD and the difference y1 − y2 of ∠ABC, where the differences x1 − x2 and y1 − y2 may be taken as absolute values. Following this example, the plurality of angle difference values between any two adjacent key actions in a teaching action can be determined.
Step 505: and determining key skeleton characteristic information for representing any two adjacent key actions according to the angle difference values.
In one example, the plurality of angle difference values may be sorted from large to small, and the first n angle difference values are selected, where n is a natural number greater than 1. The action angles forming the first n angle difference values are then selected from the action angles respectively corresponding to the plurality of key actions. Finally, the key skeleton feature information used for representing any two adjacent key actions is determined according to the selected action angles.
Assume n is 15 and, continuing the above example, that key action 1 corresponds to the action angles ∠ABD = x1 and ∠ABC = y1, and key action 2 to ∠ABD = x2 and ∠ABC = y2. If the difference x1 − x2 falls within the top 15, key action 1 and key action 2 can be distinguished by the size of ∠ABD; ∠ABD = x1 can then be used as part of the key skeleton feature information corresponding to key action 1, and ∠ABD = x2 as part of the key skeleton feature information corresponding to key action 2. Subsequently, in the process of recognizing the user's action, suppose the key skeleton feature information corresponding to key action 1 includes only ∠ABD = x1: if the action angle ∠ABD corresponding to the action made by the user is close to x1, the similarity between the user's action and key action 1 can be considered high. Similarly, supposing the key skeleton feature information corresponding to key action 2 includes only ∠ABD = x2, if the action angle ∠ABD corresponding to the user's action is close to x2, the similarity between the user's action and key action 2 can be considered high.
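The selection of distinguishing action angles by sorting angle differences, as described above, might be sketched like this; the angle names and values are hypothetical, and n = 2 is used instead of 15 for brevity:

```python
def select_key_angles(angles_a, angles_b, n=2):
    """Pick the n action angles that best distinguish two adjacent key actions.

    angles_a / angles_b map angle names to their values for key action A / B.
    Returns the names of the angles with the n largest absolute differences."""
    diffs = {name: abs(angles_a[name] - angles_b[name]) for name in angles_a}
    ranked = sorted(diffs, key=diffs.get, reverse=True)
    return ranked[:n]

# Hypothetical angles (degrees) for two adjacent key actions.
standing = {"ABD": 170.0, "ABC": 95.0, "BCD": 88.0}
jumping = {"ABD": 120.0, "ABC": 93.0, "BCD": 40.0}
print(select_key_angles(standing, jumping))  # → ['ABD', 'BCD']
```

Here ∠ABC changes by only 2 degrees between the two key actions and is discarded, while ∠ABD and ∠BCD change substantially and become part of the key skeleton feature information for both actions.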
Step 506: and determining second skeleton characteristic information corresponding to the teaching action in the teaching video according to the key skeleton characteristic information.
For example, the union of the key skeleton feature information used for representing any two adjacent key actions may be taken as the second skeleton feature information corresponding to the teaching action. For the teaching action shown in fig. 6, the corresponding second skeleton feature information may include: the key skeleton feature information corresponding to key action 1, the key skeleton feature information corresponding to key action 2, and the key skeleton feature information corresponding to key action 3.
Step 507: and evaluating the action made by the user according to the first skeleton characteristic information and the second skeleton characteristic information.
The first skeleton feature information may include a first action angle sequence corresponding to the action made by the user, and the second skeleton feature information may include a second action angle sequence corresponding to the teaching action made by the teacher; the second action angle sequence may include key action angle sequences respectively corresponding to the plurality of key actions. A key action angle sequence may be understood as an angle vector composed of a plurality of key action angles.
In one example, the Euclidean distance between the first action angle sequence and each key action angle sequence may be calculated, and whether the action made by the user is one of the plurality of key actions in the teaching action may be determined according to these Euclidean distances. The larger the Euclidean distance, the smaller the similarity between the user's action and the key action; the smaller the Euclidean distance, the greater the similarity. The greater the similarity, the better the evaluation and the higher the corresponding score; conversely, the smaller the similarity, the worse the evaluation and the lower the corresponding score. If the Euclidean distances between the first action angle sequence and all the key action angle sequences are finally determined to be large, indicating that the action made by the user differs from every key action made by the teacher, feedback such as "no action detected" or "action not standard" can be returned to the user.
In one example, for a teaching action that includes a plurality of key actions, a key action used for scoring may be designated. If the action made by the user is recognized as the key action used for scoring, the action is scored. Whether the user's action is the key action used for scoring may be recognized as follows: determine the similarity between the user's action and the key action used for scoring; if the similarity is greater than a preset threshold, the user's action is recognized as the key action used for scoring, and the action is then scored according to that similarity, with a greater similarity yielding a higher score and a smaller similarity a lower score. The preset threshold may be set according to actual needs, and this embodiment does not specifically limit it. Designating the key action used for scoring helps filter out noise interference from other actions, which improves the stability of scoring.
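The threshold-gated scoring in this example might be sketched as follows. The mapping from Euclidean distance to a similarity in (0, 1] is an assumption, since the embodiment does not specify one; the function name, threshold, and sample angles are likewise hypothetical:

```python
import math

def score_action(user_angles, key_angles, threshold=0.8):
    """Score the user's action against the designated scoring key action.

    Returns None when the action is not recognized as the scoring key action,
    i.e., when the similarity does not exceed the preset threshold."""
    dist = math.sqrt(sum((u - k) ** 2 for u, k in zip(user_angles, key_angles)))
    # Assumed mapping: distance 0 -> similarity 1; growing distance -> toward 0.
    similarity = 1.0 / (1.0 + dist / len(key_angles))
    if similarity <= threshold:
        return None
    return round(similarity * 100)

print(score_action([90.0, 150.0], [90.0, 150.0]))  # → 100
print(score_action([40.0, 80.0], [90.0, 150.0]))   # → None
```

A `None` result corresponds to the case where the user's action is not recognized as the scoring key action at all, so that dissimilar actions contribute no noise to the score.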
In one example, for a teaching action including a plurality of key actions, the similarity between each key action and the action made by the user can be determined in turn, each key action scored according to its similarity, and the score of each key action fed back to the user. Alternatively, only the highest of the scores of the key actions is fed back to the user, to ensure the robustness of the score.
The above examples in the present embodiment are only for convenience of understanding, and do not limit the technical aspects of the present invention.
Compared with the prior art, the teaching action in this embodiment includes a plurality of preset key actions. A plurality of angle difference values between any two adjacent key actions are determined according to the action angles respectively corresponding to the key actions; key skeleton feature information for representing any two adjacent key actions is determined according to the plurality of angle difference values; and the second skeleton feature information corresponding to the teaching action in the teaching video is determined according to the key skeleton feature information. This facilitates selecting the key actions of each teaching action according to its characteristics, so that the second skeleton feature information is determined from the key skeleton feature information representing any two adjacent key actions. The key actions in the teaching action are thus fully considered, which allows the characteristics of the teaching action to be measured more comprehensively and further improves the accuracy of the subsequent evaluation of the user's action.
The steps of the above methods are divided only for clarity of description; in implementation, they may be combined into one step, or a step may be split into multiple steps, and all such variants fall within the protection scope of this patent as long as the same logical relationship is included. Adding insignificant modifications to an algorithm or flow, or introducing insignificant design changes, without changing the core design of the algorithm or flow, likewise falls within the protection scope of this patent.
A fourth embodiment of the present invention relates to an electronic device, as shown in fig. 7, including at least one processor 701; and, a memory 702 communicatively coupled to the at least one processor 701; the memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701 to enable the at least one processor 701 to execute the action evaluation method according to the first to third embodiments.
The memory 702 and the processor 701 are coupled by a bus, which may comprise any number of interconnecting buses and bridges that couple one or more of the various circuits of the processor 701 and the memory 702. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor 701 is transmitted over a wireless medium through an antenna, which receives the data and transmits the data to the processor 701.
The processor 701 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory 702 may be used for storing data used by the processor 701 in performing operations.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. An action evaluation method, comprising:
acquiring first skeleton characteristic information of a user in a current video frame; the current video frame comprises actions of the user under the guidance of a preset teaching video;
acquiring second skeleton characteristic information corresponding to a teaching action in the teaching video;
and evaluating the action made by the user according to the first skeleton characteristic information and the second skeleton characteristic information.
2. The action evaluation method according to claim 1, wherein the acquiring of the second skeleton feature information corresponding to the teaching action in the teaching video includes:
determining the position information of each skeleton key point of a teacher in the teaching video;
determining action angles corresponding to the teaching actions according to the position information of each skeleton key point of the teacher;
and acquiring second skeleton characteristic information corresponding to the teaching action in the teaching video according to the action angle corresponding to the teaching action.
3. The action evaluation method according to claim 2, wherein the teaching action comprises a plurality of preset key actions, and the action angles corresponding to the teaching action comprise action angles corresponding to the key actions respectively; the number of action angles corresponding to each key action is multiple;
according to the action angle that the teaching action corresponds, acquire the second skeleton characteristic information that the teaching action in the teaching video corresponds, include:
determining a plurality of angle difference values between any two adjacent key actions according to action angles corresponding to the key actions respectively;
determining key skeleton characteristic information for representing any two adjacent key actions according to the angle difference values;
and determining second skeleton characteristic information corresponding to the teaching action in the teaching video according to the key skeleton characteristic information.
4. The action evaluation method according to claim 3, wherein the determining key skeleton feature information for characterizing any two adjacent key actions according to the plurality of action angle difference values comprises:
sorting the action angle difference values from large to small, and selecting the first n action angle difference values; wherein n is a natural number greater than 1;
selecting an action angle forming the difference values of the first n action angles from action angles respectively corresponding to the plurality of key actions;
and determining key skeleton characteristic information for representing any two adjacent key actions according to the selected action angle.
5. The action evaluation method according to claim 1, wherein the acquiring of the first skeleton feature information of the user in the current video frame comprises:
inputting the current video frame into a pre-trained TensorFlow Lite neural network model for generating a skeleton point diagram, and outputting the skeleton point diagram of the user in the current video frame;
and determining first skeleton characteristic information of the user in the current video frame according to the skeleton point diagram of the user.
6. The method according to claim 5, wherein the determining first skeleton feature information of the user in the current video frame according to the skeleton point diagram of the user comprises:
filtering the skeleton point diagram by adopting a preset filtering algorithm to obtain thermodynamic diagrams corresponding to all skeleton key points of the user;
and determining first skeleton characteristic information of the user in the current video frame according to the thermodynamic diagrams corresponding to all skeleton key points of the user.
7. The action evaluation method according to claim 1, wherein the evaluating the action made by the user based on the first skeletal feature information and the second skeletal feature information includes:
determining similarity between the action made by the user and the teaching action according to the first skeleton characteristic information and the second skeleton characteristic information;
and evaluating the action made by the user according to the similarity.
8. The action evaluation method according to claim 7, wherein the first skeleton feature information includes a first action angle sequence corresponding to the action made by the user, and the second skeleton feature information includes a second action angle sequence corresponding to the teaching action;
the determining the similarity between the action made by the user and the teaching action according to the first skeleton feature information and the second skeleton feature information includes:
calculating Euclidean distance according to the first action angle sequence and the second action angle sequence;
and determining the similarity between the action made by the user and the teaching action according to the Euclidean distance.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of action assessment as claimed in any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program is executed by a processor to implement the action evaluation method according to any one of claims 1 to 8.
CN202010843303.5A 2020-08-20 2020-08-20 Action evaluation method, electronic device, and computer-readable storage medium Active CN111967407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010843303.5A CN111967407B (en) 2020-08-20 2020-08-20 Action evaluation method, electronic device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010843303.5A CN111967407B (en) 2020-08-20 2020-08-20 Action evaluation method, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN111967407A true CN111967407A (en) 2020-11-20
CN111967407B CN111967407B (en) 2023-10-20

Family

ID=73389229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010843303.5A Active CN111967407B (en) 2020-08-20 2020-08-20 Action evaluation method, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111967407B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342439A (en) * 2021-06-11 2021-09-03 北京字节跳动网络技术有限公司 Display method, display device, electronic equipment and storage medium
CN114241595A (en) * 2021-11-03 2022-03-25 橙狮体育(北京)有限公司 Data processing method and device, electronic equipment and computer storage medium
CN114268849A (en) * 2022-01-29 2022-04-01 北京卡路里信息技术有限公司 Video processing method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016140591A (en) * 2015-02-03 2016-08-08 国立大学法人 鹿児島大学 Motion analysis and evaluation device, motion analysis and evaluation method, and program
CN108615055A (en) * 2018-04-19 2018-10-02 咪咕动漫有限公司 Similarity calculation method and apparatus, and computer-readable storage medium
CN108764120A (en) * 2018-05-24 2018-11-06 杭州师范大学 Human body standard action evaluation method
CN109635644A (en) * 2018-11-01 2019-04-16 北京健康有益科技有限公司 User action evaluation method, apparatus, and readable medium
CN110728220A (en) * 2019-09-30 2020-01-24 上海大学 Gymnastics auxiliary training method based on human body action skeleton information
CN110782482A (en) * 2019-10-21 2020-02-11 深圳市网心科技有限公司 Motion evaluation method and device, computer equipment and storage medium
CN110796077A (en) * 2019-10-29 2020-02-14 湖北民族大学 Real-time posture motion detection and correction method
CN111144217A (en) * 2019-11-28 2020-05-12 重庆邮电大学 Motion evaluation method based on human body three-dimensional joint point detection
CN111652078A (en) * 2020-05-11 2020-09-11 浙江大学 Yoga action guidance system and method based on computer vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAHUI YU et al.: "Skeleton-Based Human Activity Analysis Using Deep Neural Networks with Adaptive Representation Transformation", 2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), pages 278-282 *
YING Xiang: "Research on Pose Matching Based on OpenPose and Its Application in Physical Education Teaching", CNKI Outstanding Master's Theses Full-text Database, Social Sciences II, no. 03, pages 130-1470 *

Similar Documents

Publication Publication Date Title
CN109191588B (en) Motion teaching method, motion teaching device, storage medium and electronic equipment
CN108734104B (en) Body-building action error correction method and system based on deep learning image recognition
CN111967407B (en) Action evaluation method, electronic device, and computer-readable storage medium
US20210406525A1 (en) Facial expression recognition method and apparatus, electronic device and storage medium
US11069144B2 (en) Systems and methods for augmented reality body movement guidance and measurement
CN110675474B (en) Learning method for virtual character model, electronic device, and readable storage medium
CN109815776B (en) Action prompting method and device, storage medium and electronic device
CN110427900B (en) Method, device and equipment for intelligently guiding fitness
CN110838353B (en) Action matching method and related product
CN103100193A (en) Image processing device, image processing method, and program
WO2017161734A1 (en) Correction of human body movements via television and motion-sensing accessory and system
CN107930048B (en) Space somatosensory recognition motion analysis system and motion analysis method
CN110782482A (en) Motion evaluation method and device, computer equipment and storage medium
KR20220028654A Apparatus and method for providing taekwondo movement coaching service using mirror display
CN114022512A (en) Exercise assisting method, apparatus and medium
CN113409651B (en) Live broadcast body building method, system, electronic equipment and storage medium
CN110298279A Limb rehabilitation training assistance method and system, medium, and device
CN113516064A (en) Method, device, equipment and storage medium for judging sports motion
CN114333046A (en) Dance action scoring method, device, equipment and storage medium
CN113505662A (en) Fitness guidance method, device and storage medium
Xie et al. Visual feedback for core training with 3d human shape and pose
CN114495169A (en) Training data processing method, device and equipment for human body posture recognition
KR20220058790A (en) Exercise posture analysis method using dual thermal imaging camera, guide method for posture correction and computer program implementing the same method
US20230377224A1 (en) Method, device, and non-transitory computer-readable recording medium for displaying graphic object on image
CN115131879A (en) Action evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant