CN111967407B - Action evaluation method, electronic device, and computer-readable storage medium - Google Patents

Action evaluation method, electronic device, and computer-readable storage medium

Info

Publication number
CN111967407B
Authority
CN
China
Prior art keywords
action
skeleton
user
teaching
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010843303.5A
Other languages
Chinese (zh)
Other versions
CN111967407A (en)
Inventor
盛志胤
潘伟
沐俊星
袁峰
魏金文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Interactive Entertainment Co Ltd, MIGU Culture Technology Co Ltd
Priority to CN202010843303.5A
Publication of CN111967407A
Application granted
Publication of CN111967407B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/30: Noise filtering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/02: Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Abstract

The embodiment of the invention relates to the technical field of the Internet and discloses an action evaluation method, an electronic device, and a computer-readable storage medium. The action evaluation method comprises the following steps: acquiring first skeleton feature information of a user in a current video frame, where the current video frame contains an action made by the user under the guidance of a preset teaching video; acquiring second skeleton feature information corresponding to a teaching action in the teaching video; and evaluating the action made by the user according to the first skeleton feature information and the second skeleton feature information. This improves the real-time performance of the evaluation and thereby the learning experience of the user.

Description

Action evaluation method, electronic device, and computer-readable storage medium
Technical Field
The embodiment of the invention relates to the technical field of the Internet, and in particular to an action evaluation method, an electronic device, and a computer-readable storage medium.
Background
Currently, internet fitness apps are introducing short fitness video courses as part of market competition, and AI technology based on video understanding is being considered for such content, so as to realize real-time intelligent guidance of user exercise, achieve unmanned fitness guidance, and promote nationwide fitness. At present, a fitness coach can teach users through fitness videos. When a user exercises under the guidance of a fitness video, the user's fitness action is usually evaluated by acquiring a fixed number of consecutive frames each time, for example 5 frames: the user's 5 consecutive frames are compared with the coach's 5 consecutive frames to determine whether an action similar to the coach's action appears in the user's frames, and the user's fitness action is scored accordingly.
However, the inventors found at least the following problem in the related art: because multiple consecutive frames must be acquired each time before an evaluation can be performed on them, evaluation delay occurs, which affects the learning experience of the user and leads to a poor fitness-guidance effect.
Disclosure of Invention
An object of the embodiments of the present invention is to provide an action evaluation method, an electronic device, and a computer-readable storage medium that improve the real-time performance of evaluation and thereby the learning experience of the user.
In order to solve the above technical problem, an embodiment of the present invention provides an action evaluation method, comprising: acquiring first skeleton feature information of a user in a current video frame, where the current video frame contains an action made by the user under the guidance of a preset teaching video; acquiring second skeleton feature information corresponding to a teaching action in the teaching video; and evaluating the action made by the user according to the first skeleton feature information and the second skeleton feature information.
The embodiment of the invention also provides electronic equipment, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the action evaluation method described above.
The embodiment of the invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the action evaluation method described above.
Compared with the prior art, the embodiment of the present invention acquires first skeleton feature information of a user in a current video frame, where the current video frame contains an action made by the user under the guidance of a preset teaching video; acquires second skeleton feature information corresponding to a teaching action in the teaching video; and evaluates the action made by the user according to the first skeleton feature information and the second skeleton feature information. That is, by combining the first skeleton feature information of the user in the current video frame with the second skeleton feature information corresponding to the teaching action in the teaching video, the action made by the user is tracked and identified in real time and evaluated, avoiding the evaluation delay of the prior art, in which multiple consecutive frames must be acquired before an evaluation can be made. The embodiment of the invention thus improves the real-time performance of evaluation and thereby the learning experience of the user.
In addition, obtaining the second skeleton feature information corresponding to the teaching action in the teaching video includes: determining position information of each skeleton key point of the learner in the teaching video; determining an action angle corresponding to the teaching action according to the position information of each skeleton key point of the learner; and acquiring the second skeleton feature information corresponding to the teaching action in the teaching video according to the action angle corresponding to the teaching action. Although the position information of the skeleton key points changes as the learner performs different teaching actions, and differs among people of different heights and body types, the angles between skeleton key points are similar when different people perform the same action. Therefore, this embodiment uses the angles between skeleton key points to accurately measure the teaching action, so the second skeleton feature information obtained from the action angles corresponding to the teaching action accurately reflects the characteristics of the teaching action.
In addition, obtaining the first skeleton feature information of the user in the current video frame includes: inputting the current video frame into a pre-trained TensorFlow Lite neural network model for generating a skeleton point diagram, and outputting the skeleton point diagram of the user in the current video frame; and determining the first skeleton feature information of the user in the current video frame according to the skeleton point diagram of the user. Because the TensorFlow Lite neural network model is a lightweight neural network model, it suits the processing performance of terminals such as mobile phones, which helps the evaluation method of the embodiment of the invention to be executed directly on the terminal side and reduces the economic cost.
In addition, determining the first skeleton feature information of the user in the current video frame according to the skeleton point diagram of the user includes: filtering the skeleton point diagram with a preset filtering algorithm to obtain a thermodynamic diagram corresponding to each skeleton key point of the user; and determining the first skeleton feature information of the user in the current video frame according to the thermodynamic diagrams corresponding to the skeleton key points of the user. Filtering the skeleton point diagram with a preset filtering algorithm eliminates noise information, making the determined first skeleton feature information more stable and accurate.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
Fig. 1 is a flowchart of an action evaluation method according to a first embodiment of the present invention;
fig. 2 is a schematic distribution diagram of skeleton key points of a human body according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of action angles corresponding to an action according to a first embodiment of the present invention;
fig. 4 is a flowchart of an action evaluation method according to a second embodiment of the present invention;
fig. 5 is a flowchart of an action evaluation method according to a third embodiment of the present invention;
FIG. 6 is a schematic diagram of 3 key actions according to a third embodiment of the present application;
fig. 7 is a schematic structural view of an electronic device according to a fourth embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand that numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present application; however, the claimed application may be practiced without these specific details and with various changes and modifications based on the following embodiments. The following embodiments are divided for convenience of description and should not be construed as limiting the specific implementation of the present application; the embodiments can be combined with and refer to each other where there is no contradiction.
The first embodiment of the application relates to an action evaluation method applied to an electronic device; the electronic device may be a terminal such as a mobile phone or a tablet computer, or a server. Application scenarios of this embodiment include: a user makes corresponding actions under the guidance of a teaching video and the electronic device evaluates them. For example, the user makes fitness actions under the guidance of a fitness teaching video and the electronic device evaluates the fitness actions; an athlete makes training actions under the guidance of a pre-competition training video and the electronic device evaluates the training actions; or a doctor makes operation actions under the guidance of an operation teaching video and the electronic device evaluates the operation actions.
The implementation details of the action evaluation method according to the present embodiment are specifically described below, and the following description is merely provided for convenience of understanding, and is not essential for implementing the present embodiment.
As shown in fig. 1, the flowchart of the action evaluation method in this embodiment may include:
step 101: and acquiring first skeleton characteristic information of the user in the current video frame.
The current video frame comprises actions made by a user under the guidance of a preset teaching video. The current video frame may be a video frame of a user in a learning process, which is shot by a terminal playing the teaching video. For example, when a user starts a camera to perform exercise, the electronic device can shoot exercise actions of the user in real time, and acquire a current video frame of the user during exercise.
In an example, the electronic device serving as the execution subject of this embodiment may be a terminal that plays the teaching video; the terminal obtains the first skeleton feature information of the user in the current video frame from the current video frame it captures.
In another example, the electronic device serving as the execution subject of this embodiment may be a server; the terminal sends the captured current video frame to the server, and the server obtains the first skeleton feature information of the user in the current video frame.
In one example, the first skeleton feature information of the user in the current video frame may be obtained as follows. First, position information of each skeleton key point of the user in the current video frame is acquired; the position information of a skeleton key point can be understood as its two-dimensional coordinates in the current video frame. In a specific implementation, reference may be made to fig. 2 for a schematic distribution diagram of the skeleton key points of a human body, where each skeleton key point has its own number, from 0 to 15 in fig. 2; the number and name of each skeleton key point are shown in Table 1:
TABLE 1
Number  Name          Number  Name          Number  Name
0       Right ankle   6       Pelvis        12      Right shoulder
1       Right knee    7       Chest         13      Left shoulder
2       Right hip     8       Neck          14      Left elbow
3       Left hip      9       Top of head   15      Left wrist
4       Left knee     10      Right wrist
5       Left ankle    11      Right elbow
Then, the action angle corresponding to the action made by the user is determined according to the position information of each skeleton key point of the user. For example, after the position information of each skeleton key point is obtained, every group of three skeleton key points generates corresponding action angles. It can be understood that connecting three skeleton key points pairwise yields three action angles, and with the 16 skeleton key points taken three at a time, a plurality of corresponding action angles can be generated. For ease of understanding, refer to fig. 3: the teaching action made by the learner is shown in the middle of fig. 3, and the action made by the user under the learner's guidance is shown at the upper right, where point A represents the skeleton key point named right knee, point B the one named left hip, point C the one named left knee, and point D the one named left ankle. The angle ∠ABD formed by points A, B and D and the angle ∠ABC formed by points A, B and C can be understood as action angles corresponding to the action made by the user. In one example, the position information of each skeleton key point is a position coordinate, from which the corresponding action angle can be generated. Referring to fig. 3, assume point A has coordinates (x1, y1), point B has coordinates (x2, y2), and point C has coordinates (x3, y3); then ∠ABC can be calculated as follows:
The following vectors are calculated from the coordinates of the three points A, B and C:
vector AB = (x2 - x1, y2 - y1), vector BC = (x3 - x2, y3 - y2), vector AC = (x3 - x1, y3 - y1)
The magnitude of ∠ABC is then obtained from the dot-product formula for two vectors:
AB · BC = |AB| |BC| cos ∠ABC;
AB · BC = (x2 - x1)(x3 - x2) + (y2 - y1)(y3 - y2);
The angles between other skeleton key points can be calculated in the same way as ∠ABC; to avoid repetition, the details are not repeated here.
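As an illustration of this calculation, the following sketch recovers ∠ABC in degrees from three key-point coordinates. The patent specifies no implementation language; Python, the function name action_angle, and the sample coordinates are choices made here for illustration only.

```python
import math

def action_angle(a, b, c):
    """Angle at skeleton key point B (degrees), computed from the dot product
    of vectors AB and BC as in the formulas above."""
    ab = (b[0] - a[0], b[1] - a[1])            # vector AB = (x2 - x1, y2 - y1)
    bc = (c[0] - b[0], c[1] - b[1])            # vector BC = (x3 - x2, y3 - y2)
    dot = ab[0] * bc[0] + ab[1] * bc[1]        # AB . BC
    norms = math.hypot(ab[0], ab[1]) * math.hypot(bc[0], bc[1])
    if norms == 0:
        return 0.0                             # degenerate case: coincident key points
    cosine = max(-1.0, min(1.0, dot / norms))  # clamp against rounding error
    return math.degrees(math.acos(cosine))

# Hypothetical pixel coordinates for key points A (right knee), B (left hip), C (left knee)
print(action_angle((120, 300), (200, 260), (280, 310)))
```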
It should be noted that two action angles are listed in fig. 3 for convenience of description; in a specific implementation, the number of action angles corresponding to the action made by the user is not limited to two. In a specific implementation, the position information of each skeleton key point can be stored in file form and displayed together in the current video frame, and retaining this posture data forms a data asset.
Then, the first skeleton feature information of the user in the current video frame is acquired according to the plurality of action angles corresponding to the action made by the user. For example, from the plurality of action angles, the action angles formed between the skeleton key points that best represent the characteristics of the user's action can be selected to form a first action angle sequence, and this first action angle sequence can be used as the first skeleton feature information.
In the present embodiment, fig. 2 is only a schematic distribution diagram of the skeleton key points of a human body and is not a limitation on the specific implementation.
In one example, the first skeleton feature information of the user in the current video frame may be obtained as follows: inputting the current video frame into a pre-trained TensorFlow Lite neural network model for generating a skeleton point diagram, and outputting the skeleton point diagram of the user in the current video frame; and determining the first skeleton feature information of the user in the current video frame according to the skeleton point diagram of the user. The skeleton point diagram is marked with the position information of each skeleton key point of the user, from which the first skeleton feature information can be determined; for example, the position information of the skeleton key points that best represent the characteristics of the user's action can be selected as the first skeleton feature information. In a specific implementation, the current video frame may first be resized from its original size (such as 640 × 480) to an input size suitable for the model, and then fed through the TensorFlow Lite engine into the self-trained TensorFlow Lite neural network model, which outputs the skeleton point diagram of the user in the current video frame.
In one example, the input size of the TensorFlow Lite neural network model may be 1 × 224 × 224 × 3, where 1 is the number of video frames, 224 × 224 is the length and width of the video frame, and 3 is the number of RGB channels. The output size of the TensorFlow Lite neural network model may be 1 × 112 × 112 × 14. The input and output sizes listed above are merely examples; their specific values may be set according to actual needs and are not limited in a specific implementation.
The TensorFlow Lite neural network model is a lightweight neural network model, so it suits the processing performance of terminals such as mobile phones; this helps the evaluation method of the embodiment of the invention to be executed directly on the terminal side, reduces the economic cost, and also helps extract the skeleton feature information of a moving human body more accurately and quickly.
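For illustration only, a minimal sketch of this inference step with a TensorFlow Lite interpreter, using the 1 × 224 × 224 × 3 input and 1 × 112 × 112 × 14 output sizes given above; the model file name pose_model.tflite and the preprocessing details are assumptions, not part of the patent.

```python
import numpy as np
import tensorflow as tf  # on-device, the lighter tflite-runtime package can be used instead

# "pose_model.tflite" is a placeholder path for the self-trained model described above
interpreter = tf.lite.Interpreter(model_path="pose_model.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]    # expected shape: [1, 224, 224, 3]
output_detail = interpreter.get_output_details()[0]  # expected shape: [1, 112, 112, 14]

def skeleton_point_maps(frame_rgb):
    """Resize a captured RGB frame (e.g. 640x480) to the model input size and
    return one heat map per skeleton key point."""
    resized = tf.image.resize(frame_rgb[np.newaxis, ...], (224, 224))
    interpreter.set_tensor(input_detail["index"], resized.numpy().astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(output_detail["index"])[0]  # (112, 112, 14)

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
maps = skeleton_point_maps(frame)
```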
Step 102: acquiring the second skeleton feature information corresponding to the teaching action in the teaching video.
In one example, the second skeleton feature information corresponding to the teaching action in the teaching video may be pre-stored in the electronic device, so that the electronic device can obtain it directly. That is, the electronic device has previously acquired and stored the second skeleton feature information corresponding to the teaching action in the teaching video. In a specific implementation, the teaching action in the teaching video and its corresponding second skeleton feature information can be solidified in JSON file format and stored in the electronic device for real-time reading. Since a whole teaching video generally comprises a series of teaching actions, the second skeleton feature information corresponding to each teaching action in the teaching video can be pre-stored in the electronic device.
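The patent does not disclose the JSON schema; purely as an assumed illustration, the pre-computed second skeleton feature information could be solidified to and read back from a JSON file as follows, with all field names and values hypothetical.

```python
import json

# Hypothetical structure: one entry per teaching action, each holding its
# second action angle sequence (angles in degrees)
teaching_features = [
    {"action_id": 1, "name": "stand", "angle_sequence": [172.4, 95.1, 88.7]},
    {"action_id": 2, "name": "jump",  "angle_sequence": [141.2, 63.8, 102.5]},
]

with open("teaching_actions.json", "w", encoding="utf-8") as f:
    json.dump(teaching_features, f, ensure_ascii=False, indent=2)

with open("teaching_actions.json", encoding="utf-8") as f:
    loaded = json.load(f)  # read in real time when the user starts learning
```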
In another example, the electronic device does not store the second skeleton feature information in advance; in this case, the second skeleton feature information corresponding to the teaching action in the teaching video is obtained in real time.
In an example, the manner of obtaining the second skeleton feature information corresponding to the teaching action in the teaching video may be as follows:
first, position information of each skeleton key point of a learner in a teaching video is determined, wherein the position information of the skeleton key point can be understood as two-dimensional coordinates of the skeleton key point in the teaching video. In one example, the teaching video may be input into a pre-trained Tensor Flow Lite neural network model for generating skeleton point diagrams frame by frame, the skeleton point diagrams of the trainee in the teaching video are output, and then the position information of each skeleton key point of the trainee is determined according to the skeleton point diagrams of the trainee.
Then, the action angle corresponding to the teaching action is determined according to the position information of each skeleton key point of the learner. For example, after the position information of each skeleton key point of the learner is obtained, every group of three skeleton key points generates corresponding action angles. It can be understood that connecting three skeleton key points pairwise yields three action angles, and with the 14 skeleton key points taken three at a time, a plurality of corresponding action angles can be generated. The learner may be a fitness coach in a fitness teaching video, a trainer in a pre-competition training video for athletes, a guiding doctor in an operation teaching video, and so on, without being limited to these in a specific implementation.
Then, the second skeleton feature information corresponding to the teaching action in the teaching video is acquired according to the plurality of action angles corresponding to the teaching action. For example, from the plurality of action angles, the action angles formed between the skeleton key points that best represent the characteristics of the teaching action can be selected to form a second action angle sequence, and this second action angle sequence can be used as the second skeleton feature information.
In the above manner, the electronic device can acquire the second skeleton feature information corresponding to a series of teaching actions in the teaching video and store the series of teaching actions together with their second skeleton feature information as needed; when the teaching video is later opened by other users for learning, the electronic device can directly obtain the stored second skeleton feature information corresponding to the teaching actions.
Although the position information of the skeleton key points changes as the learner performs different teaching actions, and differs among people of different heights and body types, the angles between skeleton key points are similar when different people perform the same action. Therefore, this embodiment uses the angles between skeleton key points to accurately measure the teaching action, so the second skeleton feature information obtained from the action angles corresponding to the teaching action accurately reflects the characteristics of the teaching action.
Step 103: evaluating the action made by the user according to the first skeleton feature information and the second skeleton feature information.
Evaluating the action made by the user can be understood as scoring the action the user makes under the guidance of the teaching video: the closer the user's action is to the teaching action in the teaching video, the higher the score; the more it differs from the teaching action, the lower the score.
In one example, the similarity between the action made by the user and the teaching action can be determined according to the first skeleton feature information and the second skeleton feature information, and the action made by the user is then evaluated according to the similarity. In a specific implementation, the higher the similarity, the better the evaluation of the user's action; the lower the similarity, the worse the evaluation. The quality of the evaluation can be expressed as the level of the score: the better the evaluation, the higher the score; the worse the evaluation, the lower the score. In a specific implementation, the score can be displayed on the picture of the teaching video so that the user can conveniently check it.
In one example, the first skeleton feature information includes a first action angle sequence corresponding to the action made by the user, and the second skeleton feature information includes a second action angle sequence corresponding to the teaching action. The similarity between the user's action and the teaching action may be determined as follows: the Euclidean distance is calculated from the first action angle sequence and the second action angle sequence, and the similarity is determined according to the Euclidean distance.
In a specific implementation, the first action angle sequence can be understood as an angle vector formed by the action angles between the skeleton key points of the user when the user makes an action, or as an angle vector formed by the action angles, selected from those angles, that meet a first preset requirement; the first preset requirement may be set according to actual needs and is not specifically limited in this embodiment. Similarly, the second action angle sequence can be understood as an angle vector formed by the action angles between the skeleton key points of the learner when the learner in the teaching video performs the teaching action, or as an angle vector formed by the action angles, selected from those angles, that meet a second preset requirement, which may likewise be set according to actual needs. The first action angle sequence and the second action angle sequence may contain the same number of action angles, which makes it convenient to calculate the Euclidean distance between the two sequences. The larger the Euclidean distance, the larger the difference between the two angle sequences, that is, the larger the difference between the user's action and the coach's action, and the smaller the similarity; the smaller the Euclidean distance, the smaller the difference between the two sequences, that is, the smaller the difference between the user's action and the coach's action, and the greater the similarity.
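A minimal sketch of this comparison, assuming equal-length angle sequences and an assumed mapping from Euclidean distance to a 0-100 score; the patent fixes neither the score scale nor the mapping, and all names here are illustrative.

```python
import numpy as np

def evaluate(user_angles, teaching_angles, scale=90.0):
    """Score the user's action against the teaching action from two
    equal-length action angle sequences (angles in degrees)."""
    u = np.asarray(user_angles, dtype=float)
    t = np.asarray(teaching_angles, dtype=float)
    distance = np.linalg.norm(u - t)             # Euclidean distance between the sequences
    similarity = 1.0 / (1.0 + distance / scale)  # assumed: smaller distance -> higher similarity
    return distance, round(100 * similarity)     # assumed 0-100 score

first_sequence = [168.0, 92.0, 85.0]   # user's action angle sequence
second_sequence = [172.4, 95.1, 88.7]  # teaching action angle sequence
print(evaluate(first_sequence, second_sequence))
```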
In an example, the picture of the teaching video played by the terminal may be as shown in fig. 3 and may include both the learner's action picture and the user's action picture, whose sizes may be adjusted according to actual needs; this embodiment places no specific limitation on this. In a specific implementation, data such as the user's score, number of training repetitions, and training duration can also be displayed in the picture of the teaching video.
In one example, guidance opinions can be fed back to the user according to the evaluation result so that the user can learn better. Meanwhile, the user's action pictures can be stored in real time so that the user can review them later as needed.
The above examples in this embodiment are provided for ease of understanding and do not limit the technical solution of the present invention.
Compared with the prior art, this embodiment evaluates the action made by the user under the guidance of the teaching video by combining the first skeleton feature information of the user in the current video frame with the second skeleton feature information corresponding to the teaching action in the teaching video, avoiding the evaluation delay of the prior art, in which multiple consecutive frames must be acquired before an evaluation can be made. The embodiment of the invention thus improves the real-time performance of evaluation and thereby the learning experience of the user.
In addition, the evaluation delay in the prior art arises not only from acquiring multiple consecutive frames for each analysis, but also from the terminal having to send the captured fitness video of the user to a cloud server, which uses AI technology to recognize the user's fitness action, produces an evaluation result, and returns it to the terminal; this round trip introduces evaluation delay. Moreover, uploading the fitness video to a cloud server for AI-based recognition and evaluation introduces the cloud server itself, which is economically costly. When the evaluation method of this embodiment is applied to a terminal, the action evaluation can be completed directly on the terminal side, with no upload to and no waiting for feedback from a cloud server, which improves the real-time performance of evaluation; and because no cloud server is introduced, the economic cost is low. It should be noted that if this embodiment outputs the user's skeleton point diagram with the TensorFlow Lite neural network model and then obtains the first skeleton feature information from that diagram, the lightweight nature of the TensorFlow Lite model can be further exploited to better suit the processing performance of terminals such as mobile phones, making it convenient to execute the evaluation method of this embodiment directly on the terminal side and reducing the economic cost.
The embodiment of the invention combines an adaptive neural network model with a low-cost terminal to realize more accurate teaching-action guidance and help achieve nationwide fitness. Using a neural network model to assist action guidance improves fitness efficiency and unifies the fitness guidance standard; comparing the user's action with the teaching action in real time improves the accuracy and credibility of the guidance opinions; and storing the user's action video in real time and retaining the posture data forms a data asset that is convenient to review later.
A second embodiment of the present invention relates to an action evaluation method. The implementation details of the action evaluation method of this embodiment are described below; the following description is provided for ease of understanding and is not essential to implementing this embodiment.
As shown in fig. 4, the flowchart of the action evaluation method in this embodiment may include:
step 401: inputting the current video frame into a pre-trained Tensor Flow Lite neural network model for generating a skeleton point diagram, and outputting the skeleton point diagram of a user in the current video frame.
The Tensor Flow Lite neural network model can be obtained by training in advance according to a plurality of action pictures marked with key points of human skeleton. The skeleton point diagram of the user can be marked with position information of each skeleton key point of the user, and the position information can be two-dimensional coordinates.
In specific implementation, different Tensor Flow Lite neural network models can be trained in a targeted manner according to different application scenes, so that the skeleton point diagram of the user in the current video frame is output in a targeted manner, and the accuracy of the skeleton point diagram of the user in the output current video frame is improved. In one example, the current video frame is a video frame photographed in a scene where the user exercises, and the motion picture marked with the key points of the human skeleton may be an exercise motion picture. In another example, the current video frame is a video frame taken in a scene where the athlete trains before the game, and the action picture marked with the key points of the human skeleton may be an action picture for training the athlete. In one example, the current video frame is a video frame taken in a scenario of performing operation guidance on a doctor, and the action picture marked with the key points of the human skeleton may be an action picture of performing operation on the doctor.
Step 402: filtering the skeleton point diagram with a preset filtering algorithm to obtain the thermodynamic diagram corresponding to each skeleton key point of the user.
The preset filtering algorithm may be set according to actual needs; for example, it may be any one of mean filtering, median filtering, or Gaussian filtering. In one example, considering the processing performance of the terminal side, a mean filtering algorithm can be selected: mean filtering is a lightweight algorithm that adapts well to the processing performance of terminals such as mobile phones, is simple to execute, fast, and highly real-time.
In a specific implementation, filtering the skeleton point diagram with the preset filtering algorithm can be understood as follows: each skeleton key point in the skeleton point diagram is traversed; the preset filtering algorithm eliminates noise information and determines the probability (also called confidence) of the traversed skeleton key point at each position in the skeleton point diagram; a thermodynamic diagram corresponding to the traversed key point is then obtained from these probabilities. Each skeleton key point may correspond to one thermodynamic diagram; for example, if 14 skeleton key points are marked in the skeleton point diagram, 14 corresponding thermodynamic diagrams can be obtained. Each thermodynamic diagram may be labeled with the corrected position of its skeleton key point; for example, the thermodynamic diagram corresponding to skeleton key point 1 is labeled with the corrected position of skeleton key point 1. The corrected position can be understood as the position of the skeleton key point determined after filtering the skeleton point diagram with the preset filtering algorithm.
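A sketch of this filtering step under stated assumptions: the heat maps have the 112 × 112 × 14 shape given earlier, a 3 × 3 mean filter is applied, and the corrected position is read out as the maximum of the smoothed map. The kernel size and the read-out rule are assumptions, not specified by the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # mean filter; a hand-rolled convolution would also do

def corrected_keypoints(heatmaps):
    """heatmaps: (112, 112, 14) array, one channel per skeleton key point.
    Mean-filter each channel to suppress noise, then take the most probable
    position as the corrected key point location."""
    points = []
    for k in range(heatmaps.shape[-1]):
        smoothed = uniform_filter(heatmaps[..., k], size=3)  # 3x3 mean filter (assumed size)
        y, x = np.unravel_index(np.argmax(smoothed), smoothed.shape)
        confidence = float(smoothed[y, x])
        points.append((x, y, confidence))
    return points  # positions on the 112x112 grid; scale back to the frame size as needed
```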
Step 403: determining the first skeleton feature information of the user in the current video frame according to the thermodynamic diagram corresponding to each skeleton key point of the user.
Specifically, a target thermodynamic diagram can be obtained from the thermodynamic diagrams corresponding to the skeleton key points of the user; the corrected positions of all skeleton key points are gathered on the target thermodynamic diagram. For example, given 14 thermodynamic diagrams corresponding to 14 skeleton key points, the 14 key points can be drawn into the same diagram, which is the target thermodynamic diagram. Then, the first skeleton feature information of the user in the current video frame is determined from the target thermodynamic diagram.
In one example, the first skeleton feature information of the user in the current video frame may be determined from the target thermodynamic diagram as follows: the position information of each skeleton key point of the user is determined from the target thermodynamic diagram; the action angle corresponding to the action made by the user is determined from that position information; and the first skeleton feature information is determined from the action angle. For the implementation of these two determinations, reference may be made to the related description in the first embodiment; to avoid repetition, details are not repeated here.
Step 404: acquiring the second skeleton feature information corresponding to the teaching action in the teaching video.
Step 405: evaluating the action made by the user according to the first skeleton feature information and the second skeleton feature information.
The steps 404 to 405 are substantially the same as the steps 102 to 103 in the first embodiment, and are not repeated here.
In addition, in step 404, when the second skeleton feature information corresponding to the teaching action in the teaching video is determined, reference may be made to the manner of determining the first skeleton feature information of the user in this embodiment. For example, the teaching video is first input frame by frame into a pre-trained TensorFlow Lite neural network model for generating skeleton point diagrams, which outputs the skeleton point diagrams of the learner in the teaching video frame by frame. The learner's skeleton point diagrams are then filtered with the preset filtering algorithm to obtain the thermodynamic diagrams corresponding to the learner's skeleton key points. From these thermodynamic diagrams, the second skeleton feature information of the learner in the teaching video, that is, the second skeleton feature information corresponding to the teaching action, is determined frame by frame.
The above examples in this embodiment are provided for ease of understanding and do not limit the technical solution of the present invention.
Compared with the prior art, this embodiment filters the skeleton point diagram with a preset filtering algorithm to eliminate noise information, making the determined first and second skeleton feature information more stable and accurate, and thereby improving the stability and accuracy of the evaluation of the user's action.
A third embodiment of the present invention relates to an action evaluation method. The implementation details of the action evaluation method of this embodiment are described below; the following description is provided for ease of understanding and is not essential to implementing this embodiment.
As shown in fig. 5, the flowchart of the action evaluation method in this embodiment may include:
step 501: and acquiring first skeleton characteristic information of the user in the current video frame.
Step 502: determining the position information of each skeleton key point of the learner in the teaching video.
The implementation manners of step 501 to step 502 may refer to the related descriptions in the first embodiment or the second embodiment, and are not repeated here.
Step 503: determining the action angles corresponding to the plurality of key actions in the teaching action according to the position information of each skeleton key point of the learner.
The teaching action in this embodiment includes a plurality of preset key actions; the key actions may be selected according to the characteristics of each teaching action. For example, one teaching action may include the 3 key actions shown in fig. 6 and can be described as: stand, jump, stand. According to the position information of each skeleton key point when the learner performs the different key actions, the action angles corresponding to the plurality of key actions in the teaching action can be determined. In a specific implementation, each key action may correspond to a plurality of action angles.
Step 504: determining a plurality of angle difference values between any two adjacent key actions according to the action angles respectively corresponding to the plurality of key actions.
For example, if key action 1 has the action angles ∠ABD = x1 and ∠ABC = y1, and key action 2 has the action angles ∠ABD = x2 and ∠ABC = y2, then the plurality of angle difference values between key action 1 and key action 2 include: the difference x1 - x2 of ∠ABD and the difference y1 - y2 of ∠ABC, where the differences x1 - x2 and y1 - y2 may be taken as absolute values. Following this example, a plurality of angle difference values between any two adjacent key actions in a teaching action can be determined.
Step 505: determining key skeleton feature information for representing any two adjacent key actions according to the plurality of angle difference values.
In one example, the plurality of angle difference values may first be sorted from large to small and the first n selected, where n is a natural number greater than 1. Then, from the action angles corresponding to the key actions, the action angles forming these first n angle difference values are selected. Key skeleton feature information for representing the two adjacent key actions is then determined according to the selected action angles.
Let n be 15, and refer to the action angles ∠ABD = x1 and ∠ABC = y1 corresponding to key action 1 and ∠ABD = x2 and ∠ABC = y2 corresponding to key action 2. Assuming x1 - x2 is within the Top 15, key action 1 and key action 2 can be distinguished by the magnitude of ∠ABD; then ∠ABD = x1 can be used as part of the key skeleton feature information corresponding to key action 1, and ∠ABD = x2 as part of the key skeleton feature information corresponding to key action 2. In the subsequent process of identifying the user's action, assuming the key skeleton feature information corresponding to key action 1 includes only ∠ABD = x1, if the action angle ∠ABD corresponding to the action made by the user is close to x1, the similarity between the user's action and key action 1 can be considered very high. Similarly, assuming the key skeleton feature information corresponding to key action 2 includes only ∠ABD = x2, if the action angle ∠ABD corresponding to the user's action is close to x2, the similarity between the user's action and key action 2 can be considered very high.
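The Top-n selection described above can be sketched as follows; the angle values, the toy n, and the function name are illustrative only, not part of the patent.

```python
import numpy as np

def key_angle_indices(angles_a, angles_b, n=15):
    """Given the action angle lists of two adjacent key actions (same order,
    same length), return the indices of the n angles whose absolute
    differences are largest, i.e. the most discriminative angles."""
    diffs = np.abs(np.asarray(angles_a, float) - np.asarray(angles_b, float))
    order = np.argsort(diffs)[::-1]  # sort angle differences from large to small
    return order[:n].tolist()

# Toy example with 4 angles and n = 2; index 0 plays the role of angle ABD
key_action_1 = [150.0, 90.0, 60.0, 120.0]
key_action_2 = [80.0, 88.0, 61.0, 100.0]
print(key_angle_indices(key_action_1, key_action_2, n=2))  # -> [0, 3]
```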
Step 506: determining the second skeleton feature information corresponding to the teaching action in the teaching video according to the key skeleton feature information.
For example, the key skeleton feature information used to represent each pair of adjacent key actions can be combined and used as the second skeleton feature information corresponding to the teaching action. For the teaching action shown in fig. 6, the corresponding second skeleton feature information may include the key skeleton feature information corresponding to key action 1, key action 2, and key action 3.
Step 507: evaluating the action made by the user according to the first skeleton feature information and the second skeleton feature information.
The first skeleton feature information may include a first action angle sequence corresponding to the action made by the user; the second skeleton feature information may include a second action angle sequence corresponding to the action made by the learner, and the second action angle sequence may include key action angle sequences corresponding to the plurality of key actions. A key action angle sequence can be understood as an angle vector consisting of a plurality of key action angles.
In one example, the Euclidean distance between the first action angle sequence and each key action angle sequence may be calculated, and whether the action made by the user is one of the plurality of key actions in the teaching action is determined from these distances. The larger the Euclidean distance, the smaller the similarity between the user's action and the key action; the smaller the Euclidean distance, the greater the similarity. The greater the similarity, the better the evaluation and the higher the corresponding score; conversely, the smaller the similarity, the worse the evaluation and the lower the score. If the Euclidean distance between the first action angle sequence and every key action angle sequence is determined to be large, the action made by the user differs from every key action made by the learner, and feedback such as "no action detected" or "action not standard" can be returned to the user.
In one example, for a teaching action that includes multiple key actions, the key actions used for scoring may be designated. If the action made by the user is identified as a designated key action, it is scored. Whether the user's action is a designated key action may be identified as follows: the similarity between the user's action and the designated key action is determined; if the similarity is greater than a preset threshold, the user's action is identified as that key action and scored according to the similarity, with a higher similarity yielding a higher score and a lower similarity a lower score. The preset threshold may be set according to actual needs and is not specifically limited in this embodiment. Designating the key actions used for scoring helps resist noise interference from other actions and improves the stability of scoring.
In one example, for a teaching action including a plurality of key actions, the similarity of each key action to the action made by the user can be determined in turn, each key action scored according to its similarity, and the scores fed back to the user; alternatively, the highest of the key-action scores is selected and fed back to the user to ensure the robustness of the scoring, as in the sketch below.
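Putting the above together, a hedged sketch of matching the user's action angle sequence against each key action angle sequence and feeding back the highest score; the threshold value, the similarity mapping, and all names are assumptions made here, not values fixed by the patent.

```python
import numpy as np

def score_against_key_actions(user_angles, key_sequences, threshold=0.6, scale=90.0):
    """key_sequences: {key action name: angle sequence}. Returns the best-matching
    key action and its score; (None, 0) means no key action passed the threshold."""
    best_name, best_score = None, 0
    for name, seq in key_sequences.items():
        d = np.linalg.norm(np.asarray(user_angles, float) - np.asarray(seq, float))
        similarity = 1.0 / (1.0 + d / scale)  # same assumed mapping as before
        if similarity > threshold and 100 * similarity > best_score:
            best_name, best_score = name, round(100 * similarity)
    return best_name, best_score  # e.g. ("jump", 96); (None, 0) -> "action not standard"

key_sequences = {"stand": [172.0, 95.0], "jump": [120.0, 40.0]}
print(score_against_key_actions([118.0, 43.0], key_sequences))  # -> ("jump", 96)
```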
The above examples in this embodiment are provided for ease of understanding and do not limit the technical solution of the present invention.
Compared with the prior art, the teaching action in this embodiment includes a plurality of preset key actions: a plurality of angle difference values between any two adjacent key actions are determined from the action angles corresponding to the key actions; key skeleton feature information representing those two adjacent key actions is determined from the angle difference values; and the second skeleton feature information corresponding to the teaching action is determined from the key skeleton feature information. In this way, the key actions are selected according to the characteristics of each teaching action, and the second skeleton feature information is determined by combining the key skeleton feature information representing adjacent key actions, so the characteristics of the key actions are fully considered, the teaching action is comprehensively measured, and the accuracy of the subsequent evaluation of the user's action is improved.
The above division of the method steps is for clarity of description only; in implementation, steps may be combined into one step or split into multiple steps, and all such variants are within the protection scope of this patent as long as they include the same logical relationship. Adding insignificant modifications to the algorithm or flow, or introducing insignificant designs, without altering the core design of the algorithm and flow is also within the protection scope of this patent.
A fourth embodiment of the invention relates to an electronic device, as shown in fig. 7, comprising at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701; the memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701 to enable the at least one processor 701 to perform the action evaluation methods of the first to third embodiments.
Where memory 702 and processor 701 are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors 701 and memory 702 together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor 701 is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor 701.
The processor 701 is responsible for managing the bus and general processing and may provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 702 may be used to store data used by processor 701 in performing operations.
A fifth embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program implements the above-described method embodiments when executed by a processor.
That is, it will be understood by those skilled in the art that all or part of the steps in implementing the methods of the embodiments described above may be implemented by a program stored in a storage medium, where the program includes several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps in the methods of the embodiments of the application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the application and that various changes in form and details may be made therein without departing from the spirit and scope of the application.

Claims (6)

1. An action evaluation method, comprising:
acquiring first skeleton feature information of a user in a current video frame; wherein the current video frame comprises an action made by the user under the guidance of a preset teaching video, and the first skeleton feature information comprises a first action angle sequence corresponding to the action made by the user;
acquiring second skeleton feature information corresponding to a teaching action in the teaching video, wherein the second skeleton feature information comprises a second action angle sequence corresponding to the teaching action;
evaluating the action made by the user according to the first skeleton feature information and the second skeleton feature information, specifically including: calculating a Euclidean distance according to the first action angle sequence and the second action angle sequence; determining a similarity between the action made by the user and the teaching action according to the Euclidean distance; and evaluating the action made by the user according to the similarity (minimal sketches of the angle computation and of this similarity mapping follow the claims);
wherein the obtaining of the second skeleton feature information corresponding to the teaching action in the teaching video includes:
determining position information of each skeleton key point of a learner in the teaching video;
determining an action angle corresponding to the teaching action according to the position information of each skeleton key point of the learner;
acquiring second skeleton feature information corresponding to the teaching action in the teaching video according to the action angle corresponding to the teaching action;
the teaching action comprises a plurality of preset key actions, and the action angles corresponding to the teaching action comprise action angles respectively corresponding to the plurality of key actions; and each key action corresponds to a plurality of action angles;
the step of obtaining second skeleton feature information corresponding to the teaching action in the teaching video according to the action angle corresponding to the teaching action comprises the following steps:
determining a plurality of angle difference values between any two adjacent key actions according to action angles respectively corresponding to the plurality of key actions;
determining key skeleton feature information for representing the arbitrary two adjacent key actions according to the angle difference values;
and determining the second skeleton feature information corresponding to the teaching action in the teaching video according to the key skeleton feature information.
2. The action evaluation method of claim 1, wherein the determining of key skeleton feature information for representing the arbitrary two adjacent key actions according to the plurality of angle difference values comprises:
sorting the plurality of angle difference values in descending order, and selecting the first n angle difference values, wherein n is a natural number greater than 1;
selecting the action angles forming the first n angle difference values from the action angles respectively corresponding to the plurality of key actions;
and determining the key skeleton feature information for representing the arbitrary two adjacent key actions according to the selected action angles.
3. The action evaluation method according to claim 1, wherein the obtaining of the first skeleton feature information of the user in the current video frame includes:
inputting the current video frame into a pre-trained TensorFlow Lite neural network model for generating a skeleton point diagram, and outputting the skeleton point diagram of the user in the current video frame (see the inference sketch following the claims);
and determining the first skeleton feature information of the user in the current video frame according to the skeleton point diagram of the user.
4. The action evaluation method according to claim 3, wherein the determining the first skeleton feature information of the user in the current video frame according to the skeleton point diagram of the user includes:
filtering the skeleton point diagram by a preset filtering algorithm to obtain heat maps corresponding to the skeleton key points of the user;
and determining the first skeleton feature information of the user in the current video frame according to the heat maps corresponding to the skeleton key points of the user.
5. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the action evaluation method of any one of claims 1 to 4.
6. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the action evaluation method according to any one of claims 1 to 4.
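Claim 1 derives action angles from the positions of skeleton key points. Below is a minimal sketch of one such angle, assuming 2-D keypoint coordinates and NumPy; the joint triples (e.g. shoulder-elbow-wrist) are an assumption, since the claims do not fix them:

```python
import numpy as np

def joint_angle(p_a, p_joint, p_b):
    """Angle in degrees at p_joint, formed by the segments to p_a and p_b,
    e.g. the elbow angle from the shoulder, elbow and wrist key points."""
    v1 = np.asarray(p_a, dtype=float) - np.asarray(p_joint, dtype=float)
    v2 = np.asarray(p_b, dtype=float) - np.asarray(p_joint, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# e.g. a right angle at the joint
assert round(joint_angle((0, 1), (0, 0), (1, 0))) == 90
```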
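Claim 1 then computes a Euclidean distance between the first and second action angle sequences and derives a similarity from it. The particular distance-to-similarity mapping below (and the scale parameter) is an assumed monotone choice; the claim requires only that the similarity follow from the distance:

```python
import numpy as np

def action_similarity(first_angle_sequence, second_angle_sequence, scale=100.0):
    """Similarity in (0, 1] that decreases as the Euclidean distance between
    the user's and the teaching action's angle sequences grows."""
    user = np.asarray(first_angle_sequence, dtype=float)
    teach = np.asarray(second_angle_sequence, dtype=float)
    dist = np.linalg.norm(user - teach)   # Euclidean distance of the sequences
    return 1.0 / (1.0 + dist / scale)

# identical sequences score 1.0; larger deviations score lower
assert action_similarity([90, 45, 120], [90, 45, 120]) == 1.0
```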
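Claims 3 and 4 run a pre-trained TensorFlow Lite model to produce a skeleton point diagram and filter it into per-keypoint heat maps. The sketch below uses the standard tf.lite.Interpreter API, but the model file name, the (H, W, K) heat-map output layout, and the Gaussian smoothing (standing in for the unspecified "preset filtering algorithm") are all assumptions:

```python
import numpy as np
import tensorflow as tf
from scipy.ndimage import gaussian_filter

# hypothetical model file; the patent names no concrete network
interpreter = tf.lite.Interpreter(model_path="pose_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def skeleton_keypoints(frame):
    """One video frame in, one (x, y) position per skeleton key point out."""
    interpreter.set_tensor(inp["index"], frame[np.newaxis].astype(np.float32))
    interpreter.invoke()
    heatmaps = interpreter.get_tensor(out["index"])[0]   # assumed shape (H, W, K)
    points = []
    for k in range(heatmaps.shape[-1]):
        hm = gaussian_filter(heatmaps[..., k], sigma=2)  # suppress spurious peaks
        y, x = np.unravel_index(np.argmax(hm), hm.shape) # most confident location
        points.append((int(x), int(y)))
    return points
```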
CN202010843303.5A 2020-08-20 2020-08-20 Action evaluation method, electronic device, and computer-readable storage medium Active CN111967407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010843303.5A CN111967407B (en) 2020-08-20 2020-08-20 Action evaluation method, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN111967407A CN111967407A (en) 2020-11-20
CN111967407B true CN111967407B (en) 2023-10-20

Family

ID=73389229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010843303.5A Active CN111967407B (en) 2020-08-20 2020-08-20 Action evaluation method, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111967407B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342439A (en) * 2021-06-11 2021-09-03 北京字节跳动网络技术有限公司 Display method, display device, electronic equipment and storage medium
CN114241595A (en) * 2021-11-03 2022-03-25 橙狮体育(北京)有限公司 Data processing method and device, electronic equipment and computer storage medium
CN114268849A (en) * 2022-01-29 2022-04-01 北京卡路里信息技术有限公司 Video processing method and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016140591A (en) * 2015-02-03 2016-08-08 国立大学法人 鹿児島大学 Motion analysis and evaluation device, motion analysis and evaluation method, and program
CN108615055A (en) * 2018-04-19 2018-10-02 咪咕动漫有限公司 A kind of similarity calculating method, device and computer readable storage medium
CN108764120A (en) * 2018-05-24 2018-11-06 杭州师范大学 A kind of human body specification action evaluation method
CN109635644A (en) * 2018-11-01 2019-04-16 北京健康有益科技有限公司 A kind of evaluation method of user action, device and readable medium
CN110728220A (en) * 2019-09-30 2020-01-24 上海大学 Gymnastics auxiliary training method based on human body action skeleton information
CN110782482A (en) * 2019-10-21 2020-02-11 深圳市网心科技有限公司 Motion evaluation method and device, computer equipment and storage medium
CN110796077A (en) * 2019-10-29 2020-02-14 湖北民族大学 Attitude motion real-time detection and correction method
CN111144217A (en) * 2019-11-28 2020-05-12 重庆邮电大学 Motion evaluation method based on human body three-dimensional joint point detection
CN111652078A (en) * 2020-05-11 2020-09-11 浙江大学 Yoga action guidance system and method based on computer vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jiahui Yu et al.; "Skeleton-based human activity analysis using deep neural networks with adaptive representation transformation"; 2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM); pp. 278-282 *
Ying Xiang; "Research on posture matching based on OpenPose and its application in physical education teaching"; CNKI Outstanding Master's Theses Full-text Database, Social Sciences II (No. 03); pp. H130-1470 *

Also Published As

Publication number Publication date
CN111967407A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN109191588B (en) Motion teaching method, motion teaching device, storage medium and electronic equipment
CN108734104B (en) Body-building action error correction method and system based on deep learning image recognition
CN111967407B (en) Action evaluation method, electronic device, and computer-readable storage medium
CN106650630B (en) A kind of method for tracking target and electronic equipment
US20210406525A1 (en) Facial expression recognition method and apparatus, electronic device and storage medium
CN110675474B (en) Learning method for virtual character model, electronic device, and readable storage medium
CN110448870B (en) Human body posture training method
CN110427900B (en) Method, device and equipment for intelligently guiding fitness
CN110782482A (en) Motion evaluation method and device, computer equipment and storage medium
WO2017161734A1 (en) Correction of human body movements via television and motion-sensing accessory and system
KR20220028654A (en) Apparatus and method for providing taekwondo movement coaching service using mirror dispaly
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN113516064A (en) Method, device, equipment and storage medium for judging sports motion
CN113409651B (en) Live broadcast body building method, system, electronic equipment and storage medium
Zou et al. Intelligent fitness trainer system based on human pose estimation
CN110110647A (en) The method, apparatus and storage medium that information is shown are carried out based on AR equipment
CN111383735A (en) Unmanned body-building analysis method based on artificial intelligence
CN113505662A (en) Fitness guidance method, device and storage medium
CN114495169A (en) Training data processing method, device and equipment for human body posture recognition
EP4145400A1 (en) Evaluating movements of a person
CN116704603A (en) Action evaluation correction method and system based on limb key point analysis
CN115131879A (en) Action evaluation method and device
Hou et al. Mobile augmented reality system for preschool education
CN114241595A (en) Data processing method and device, electronic equipment and computer storage medium
CN113920578A (en) Intelligent home yoga coach information processing system, method, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant