CN112686138A - Calorie estimation method, device, equipment and storage medium


Info

Publication number
CN112686138A
Authority
CN
China
Prior art keywords
training
action
image
video frame
user
Prior art date
Legal status
Pending
Application number
CN202011589806.0A
Other languages
Chinese (zh)
Inventor
陈博
刘煦
景玉
郝铭尧
廖金花
张吉
Current Assignee
Beijing Calorie Information Technology Co ltd
Original Assignee
Beijing Calorie Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Calorie Information Technology Co ltd filed Critical Beijing Calorie Information Technology Co ltd
Priority to CN202011589806.0A
Publication of CN112686138A
Legal status: Pending

Abstract

The invention discloses a calorie estimation method, apparatus, electronic device, and storage medium. The method comprises the following steps: obtaining the action completion rate of each frame image in a training course video frame sequence according to the user physical sign information, the user's training course video frame sequence, and a standard training action library; inputting the user physical sign information, two adjacent frame images of the training course video frame sequence, and the action completion rates corresponding to those two frame images into a calorie estimation model, to obtain the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image; and obtaining the total calories consumed over the training course video frame sequence by accumulating, for every frame image except the first, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image. The method and device thereby accurately estimate the calorie value of a user's training course from the user physical sign information and the training course video frame sequence.

Description

Calorie estimation method, device, equipment and storage medium
Technical Field
The embodiments of the invention relate to the field of computer technology, and in particular to a calorie estimation method, apparatus, device, and storage medium.
Background
With growing attention to physical health, a healthy diet and exercise courses have become basic ways of staying fit: diet supplies nutrients and controls caloric intake, while training courses burn calories and improve physical function. A scientific workout plan requires an accurate estimate of the calories consumed by each training action in a course. Conventionally, the calories of a training course are estimated from the trainee's heart rate, training duration, and various physical indexes.
However, a user does not necessarily wear a heart rate acquisition device during course training, and without one the calories of the training actions cannot be estimated this way. In the prior art, calories are instead computed for the course as a whole, but such whole-course figures cannot be broken down to the individual training actions of a course, so they cannot support calorie estimation for intelligent training plans and self-compiled courses.
Disclosure of Invention
The invention provides a calorie estimation method, apparatus, device, and storage medium, used to accurately estimate the calorie value of a user's training course from the user physical sign information and a training course video frame sequence.
In a first aspect, an embodiment of the present invention provides a calorie estimation method, including:
obtaining the action completion rate of each frame image in the training course video frame sequence according to the user physical sign information, the user's training course video frame sequence, and a standard training action library;
inputting the user physical sign information, two adjacent frame images of the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images into a calorie estimation model, to obtain the calories consumed in changing from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image; the calorie estimation model is obtained by training a deep learning network model with sample user physical sign information, two adjacent frame images of a sample training course video frame sequence, the action completion rates corresponding to those two frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image;
and obtaining the total calories consumed over the training course video frame sequence by accumulating, for every frame image except the first, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image.
Further, the obtaining of the action completion rate of each frame of image in the training course video frame sequence according to the user sign information, the training course video frame sequence of the user, and the standard training action library includes:
determining, according to the physical sign information of the trainer in the standard training action library and the user physical sign information, the proportional parameters corresponding to the user completing the training actions in the standard training action library;
determining completion parameters of standard training actions matched by the user according to the action characteristics of each frame of image in the training course video frame sequence, the action characteristics of the standard training actions in the standard training action library and the proportion parameters corresponding to the training actions in the standard training action library completed by the user;
and determining the completion rate of the user completing the action in each frame of image in the training course video frame sequence according to the completion parameter of the standard training action matched by the user and the completion parameter of the action in each frame of image in the training course video frame sequence.
Further, the determining, according to the sign information of the trainer in the standard training action library and the sign information of the user, a proportional parameter corresponding to the user completing the training action in the standard training action library includes:
and determining the proportional parameters corresponding to the user completing the training actions in the standard training action library according to the height, weight, limb ratio, and movable joint points in the user physical sign information and the height, weight, limb ratio, and movable joint points in the physical sign information of the trainer in the standard training action library.
Further, the determining, according to the motion characteristics of each frame of image in the training course video frame sequence, the motion characteristics of the standard training motions in the standard training motion library, and the proportion parameters corresponding to the training motions in the user completed standard training motion library, the completion parameters of the standard training motions matched with the user completed by the user includes:
determining motion characteristics of each frame of image in the training course video frame sequence and completion parameters of user motion according to the training course video frame sequence;
and if a corresponding standard training action is matched from the action characteristics of the standard training actions in the standard training action library according to the action characteristics of a frame image in the training course video frame sequence, determining the completion parameters of the standard training action matched by the user according to the completion parameters of that standard training action, the proportional parameters corresponding to the user completing the training actions in the standard training action library, and the user physical sign information.
Further, obtaining the calorie estimation model by training a deep learning network model with sample user physical sign information, two adjacent frame images of a sample training course video frame sequence, the action completion rates corresponding to the two adjacent frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image includes:
acquiring a sample training course video frame sequence, training data corresponding to the sample training course video frame sequence and sample user physical sign information in the sample training course video frame sequence;
determining the action completion rate of each frame image in the sample training course video frame sequence according to the action characteristics of each frame image in the sample training course video frame sequence, the action characteristics of the training actions in the standard training action library, the sample user physical sign information in the sample training course video frame sequence, and the physical sign information of the trainer in the standard training action library;
determining calories consumed by the training action change from the previous frame image to the next frame image in the two adjacent frames of images in the sample training course video frame sequence according to the training data corresponding to the sample training course video frame sequence and the sample training course video frame sequence;
and inputting the two adjacent frame images of the sample training course video frame sequence, the sample user physical sign information, the action completion rates corresponding to the two adjacent frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image into a deep learning network model to obtain the calorie estimation model.
Further, before obtaining the total calories consumed over the training course video frame sequence from the calories consumed in changing, for every frame image except the first, from the training action on the previous frame image to the training action on the next frame image, the method further includes:
and if the action characteristics of the first frame image in the training course video frame sequence are matched with the training starting action in the standard training action library, counting calories corresponding to the training starting action in the standard training action library into the total consumed calories of the training course video frame sequence.
Further, before obtaining the total calories consumed over the training course video frame sequence from the calories consumed in changing, for every frame image except the first, from the training action on the previous frame image to the training action on the next frame image, the method further includes:
and if the action characteristics of the first frame image in the training course video frame sequence do not match the training starting action in the standard training action library, counting the calories corresponding to the action in the standard training action library matched according to the action characteristics of the first frame image into the total calories consumed over the training course video frame sequence.
In a second aspect, an embodiment of the present invention further provides a calorie estimation apparatus, including:
the completion rate determining module is used for obtaining the action completion rate of each frame of image in the training course video frame sequence according to the user sign information, the training course video frame sequence of the user and a standard training action library;
the calorie determining module is used for inputting the user physical sign information, two adjacent frame images of the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images into a calorie estimation model, to obtain the calories consumed in changing from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image; the calorie estimation model is obtained by training a deep learning network model with sample user physical sign information, two adjacent frame images of a sample training course video frame sequence, the action completion rates corresponding to those two frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image;
and the total consumption confirming module is used for obtaining the total calories consumed over the training course video frame sequence by accumulating, for every frame image except the first, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the calorie estimation method of any one of the above.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are operable to perform any of the calorie estimation methods described herein.
According to the method, the action completion rate of each frame image in a training course video frame sequence is obtained according to the user physical sign information, the user's training course video frame sequence, and a standard training action library; the user physical sign information, two adjacent frame images of the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images are input into a calorie estimation model to obtain the calories consumed in changing from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image, the calorie estimation model having been obtained by training a deep learning network model with sample user physical sign information, two adjacent frame images of a sample training course video frame sequence, the action completion rates corresponding to those two frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image; and the total calories consumed over the training course video frame sequence are obtained by accumulating, for every frame image except the first, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image. This solves the problem that, without heart rate acquisition equipment, the calories of an intelligent training plan or a self-compiled course cannot be estimated in advance, and achieves accurate estimation of the calorie value of a user's training course from the user physical sign information and the training course video frame sequence.
Drawings
FIG. 1 is a flow chart of a calorie estimation method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a calorie estimation method according to a second embodiment of the present invention;
FIG. 3 is a schematic view showing the construction of a calorie estimation device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device in a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a calorie estimation method according to the first embodiment of the present invention. The method may be applied to various scenarios requiring calorie estimation based on a training course video frame sequence. It is performed by a calorie estimation apparatus, which may be implemented by software and/or hardware and may in particular be integrated in an electronic device having the storage and computing capabilities needed for calorie estimation.
As shown in fig. 1, there is provided a calorie estimation method including:
step S110, obtaining the action completion rate of each frame of image in the training course video frame sequence according to the user sign information, the training course video frame sequence of the user and a standard training action library;
in the embodiment of the present invention, the user physical sign information can be understood as information about the training user in the training course video frame sequence, such as height, age, weight, gender, limb length, limb ratio, and movable joint points. The user's training course video frame sequence can be understood as the video shot during the user's course training, split into individual image frames arranged in shooting order. The standard training action library can be understood as a database built from the standard action course video frame sequences in a large database, and comprises: each frame image of those standard training course video frame sequences, the action characteristics of each frame image, and the physical sign information of the trainer. The action completion rate of each frame image in the training course video frame sequence can be understood as the percentage to which the action in that frame completes the matched action in the standard training action library, calculated from the user physical sign information and the trainer's physical sign information.
In the embodiment of the invention, the action of each frame image in the user's training course video frame sequence is determined according to the user physical sign information and the user's training course video frame sequence. The completion parameters corresponding to the user completing the training actions in the standard training action library are determined according to the user physical sign information and the trainer's information in the standard training action library; and the action completion rate of each frame image is determined from the completion parameters of the action in each frame image and the completion parameters corresponding to the user completing the training actions in the standard training action library.
Step S120, inputting the user physical sign information, two adjacent frame images of the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images into a calorie estimation model, to obtain the calories consumed in changing from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image; the calorie estimation model is obtained by training a deep learning network model with sample user physical sign information, two adjacent frame images of a sample training course video frame sequence, the action completion rates corresponding to those two frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image;
in the embodiment of the present invention, two adjacent frame images of the training course video frame sequence can be understood as two neighbouring images among the frames arranged in shooting order, and the action completion rates corresponding to two adjacent frame images are the completion rates of those two frames. The calorie estimation model can be understood as an estimation model that calculates, from the user physical sign information, two adjacent frame images, and their corresponding action completion rates, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image. The previous frame image is the one of the two adjacent frames shot earlier; the next frame image is the one shot later. The calories consumed in changing from the training action on the previous frame image to the training action on the next frame image can be understood as the calorie value the user consumes in making that action change.
In the embodiment of the invention, the user physical sign information, two adjacent frame images of the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images are input into the calorie estimation model, which outputs the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image. If the training course video frame sequence contains N frame images in total, it yields N-1 pairs of adjacent frames, and each of the N-1 pairs must be input to compute the calories consumed by the corresponding action change.
Step S130, obtaining the total calories consumed over the training course video frame sequence by accumulating, for every frame image except the first, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image.
In the embodiment of the present invention, the first frame image can be understood as the first of the image frames in the training course video frame sequence. The total calories consumed over the training course video frame sequence can be understood as the total calories the user consumes from the training action on the first frame image through to the training action on the last frame image.
In the embodiment of the invention, the N frame images of the training course video frame sequence yield N-1 pairs of adjacent frames; the pairs are input in sequence, the calories consumed by each of the N-1 action changes are computed, and these per-pair calories are accumulated to obtain the total calories consumed over the training course video frame sequence.
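As a minimal illustrative sketch of this per-pair inference and accumulation (not the patent's implementation), the following Python assumes a hypothetical `calorie_model.predict` interface and a dictionary layout for the pair inputs:

```python
def estimate_total_calories(calorie_model, user_info, frames, completion_rates):
    """Accumulate calories over the N-1 adjacent frame pairs of a
    training course video frame sequence (hypothetical interface)."""
    assert len(frames) == len(completion_rates)
    total = 0.0
    # Every frame except the first closes one adjacent pair.
    for i in range(1, len(frames)):
        pair_input = {
            "user_info": user_info,           # physical sign information
            "prev_frame": frames[i - 1],
            "next_frame": frames[i],
            "prev_rate": completion_rates[i - 1],
            "next_rate": completion_rates[i],
        }
        # Calories consumed changing from the action on frame i-1
        # to the action on frame i.
        total += calorie_model.predict(pair_input)
    return total
```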
Further, the obtaining of the action completion rate of each frame of image in the training course video frame sequence according to the user sign information, the training course video frame sequence of the user, and the standard training action library includes:
determining, according to the physical sign information of the trainer in the standard training action library and the user physical sign information, the proportional parameters corresponding to the user completing the training actions in the standard training action library;
determining completion parameters of standard training actions matched by the user according to the action characteristics of each frame of image in the training course video frame sequence, the action characteristics of the standard training actions in the standard training action library and the proportion parameters corresponding to the training actions in the standard training action library completed by the user;
and determining the completion rate of the user completing the action in each frame of image in the training course video frame sequence according to the completion parameter of the standard training action matched by the user and the completion parameter of the action in each frame of image in the training course video frame sequence.
In the embodiment of the invention, the proportional parameters corresponding to the user completing the training actions in the standard training action library can be understood as scaling parameters relating the user's limbs, trunk, and movable joint points to the trainer's when performing a training action: for example, the spatial angles formed at each movable joint point, the distances of body parts from the horizontal ground, and the position of the user's center of gravity. The action characteristics of the standard training actions in the standard training action library can be understood as action attribute information of those actions, for example: the displacement of the user's center of gravity during the training action, the spatial angles formed by the limbs connected through the movable joint points, and the distances of the limbs from the horizontal plane. The action characteristics of each frame image in the training course video frame sequence can be understood as the action attribute information of the user's training action on that frame. The completion parameters of the action in each frame image can be understood as the data parameters the user's limbs need to reach to complete the action in that frame. The completion parameters of the standard training action matched by the user can be understood as the data parameters the user's limbs need to reach to complete that standard training action, scaled by the proportional parameters corresponding to the user completing the training actions in the standard training action library. Positions can be determined using a 6-axis sensor with a spatial positioning algorithm, together with a depth camera or an RGB camera.
In the embodiment of the invention, the proportional parameters corresponding to the user completing the training actions in the standard training action library are determined by comparing the trainer's physical sign information in the standard training action library with the user physical sign information: height, weight, limb proportions, movable joint point positions, age, and gender. The standard training action corresponding to each frame image of the training course video frame sequence is determined by matching the action characteristics of that frame against the action characteristics of the standard training actions in the standard training action library, and the completion parameters of the matched standard training action are determined from that standard training action and the proportional parameters. The completion rate of the user completing the action in each frame image is then determined from the completion parameters of the matched standard training action and the completion parameters of the action in that frame image.
Further, the determining, according to the sign information of the trainer in the standard training action library and the sign information of the user, a proportional parameter corresponding to the user completing the training action in the standard training action library includes:
and determining the proportional parameters corresponding to the user completing the training actions in the standard training action library according to the height, weight, limb ratio, and movable joint points in the user physical sign information and the height, weight, limb ratio, and movable joint points in the physical sign information of the trainer in the standard training action library.
In the embodiment of the invention, the proportional parameters corresponding to the user completing the training actions in the standard training action library are determined by comparing the trainer's physical sign information in the standard training action library with the user physical sign information, in terms of height, weight, limb proportions, movable joint point positions, age, and gender. For example: the ratio of the motion range of a movable joint point determined from the user's limb ratio to the motion range of that joint point determined from the trainer's limb ratio gives the positional proportional parameter for the user's limbs and movable joint points.
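A toy illustration of deriving such proportional parameters, assuming each item of physical sign information reduces to a single number and taking plain user-to-trainer ratios (the patent does not fix a formula):

```python
def scale_parameters(user, trainer):
    """Hypothetical proportional parameters: the ratio of the user's
    measurement to the trainer's for each shared field."""
    fields = ("height", "weight", "limb_ratio")
    return {f: user[f] / trainer[f] for f in fields}

# A 1.62 m user measured against a 1.80 m reference trainer scales
# joint motion ranges by 0.9 in this toy model.
print(scale_parameters({"height": 1.62, "weight": 55.0, "limb_ratio": 1.02},
                       {"height": 1.80, "weight": 70.0, "limb_ratio": 1.00}))
```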
Further, the determining, according to the motion characteristics of each frame of image in the training course video frame sequence, the motion characteristics of the standard training motions in the standard training motion library, and the proportion parameters corresponding to the training motions in the user completed standard training motion library, the completion parameters of the standard training motions matched with the user completed by the user includes:
determining motion characteristics of each frame of image in the training course video frame sequence and completion parameters of user motion according to the training course video frame sequence;
and if a corresponding standard training action is matched from the action characteristics of the standard training actions in the standard training action library according to the action characteristics of a frame image in the training course video frame sequence, determining the completion parameters of the standard training action matched by the user according to the completion parameters of that standard training action, the proportional parameters corresponding to the user completing the training actions in the standard training action library, and the user physical sign information.
In the embodiment of the present invention, the completion parameters of the user action can be understood as the parameters with which the user completes the action on each frame image of the training course video frame sequence, for example: the distance between the user's foot point and the horizontal ground in each frame, the spatial positions of the movable joint points, and the spatial angles formed by the limbs connected through each movable joint point.
In the embodiment of the invention, the action characteristics of each frame image in the training course video frame sequence are identified from the sequence, and a standard training action corresponding to those characteristics is matched in the standard training action library. If a standard training action is matched, the completion parameters of the training action matched by the user are calculated from the completion parameters of the matched standard training action, the proportional parameters corresponding to the user completing the training actions in the standard training action library, and the user physical sign information. If no standard training action can be matched, matching continues in the broader training action database according to the frame's action characteristics until a matching training action is found, and the completion parameters of the matched action completed by the user are then calculated by repeating the above operation.
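One possible realization of the matching and completion-rate steps is sketched below; the cosine-similarity matching rule, the flat feature vectors, and the averaging of per-parameter ratios are all assumptions rather than the patent's method:

```python
import numpy as np

def match_standard_action(frame_features, action_library, threshold=0.9):
    """Return the library action whose feature vector is most similar
    to the frame's features (cosine similarity), or None if nothing
    clears the threshold."""
    best, best_sim = None, threshold
    query = np.asarray(frame_features, dtype=float)
    for action in action_library:
        ref = np.asarray(action["features"], dtype=float)
        sim = query @ ref / (np.linalg.norm(query) * np.linalg.norm(ref))
        if sim >= best_sim:
            best, best_sim = action, sim
    return best

def completion_rate(user_params, matched_params):
    """Per-frame completion rate: how far the user's completion
    parameters get toward the scaled standard targets, averaged."""
    ratios = [min(u / m, 1.0) for u, m in zip(user_params, matched_params)
              if m != 0]
    return sum(ratios) / len(ratios) if ratios else 0.0
```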
According to the method, the action completion rate of each frame image in a training course video frame sequence is obtained according to the user physical sign information, the user's training course video frame sequence, and a standard training action library; the user physical sign information, two adjacent frame images of the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images are input into a calorie estimation model to obtain the calories consumed in changing from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image, the calorie estimation model having been obtained by training a deep learning network model with sample user physical sign information, two adjacent frame images of a sample training course video frame sequence, the action completion rates corresponding to those two frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image; and the total calories consumed over the training course video frame sequence are obtained by accumulating, for every frame image except the first, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image. This solves the problem that, without heart rate acquisition equipment, the calories of an intelligent training plan or a self-compiled course cannot be estimated in advance, and achieves accurate estimation of the calorie value of a user's training course from the user physical sign information and the training course video frame sequence.
Example two
Fig. 2 is a flowchart of a calorie estimation method in the second embodiment of the present invention, and the technical solution of the second embodiment of the present invention is further detailed on the basis of the above technical solution, and specifically includes the following steps:
step 210, obtaining a sample training course video frame sequence, training data corresponding to the sample training course video frame sequence, and sample user sign information in the sample training course video frame sequence;
in the embodiment of the present invention, the sample training course video frame sequence can be understood as the video frame sequence split from a sample training course input when training the calorie estimation model, or as a video frame sequence split from a historical training course in a training video library. The training data corresponding to the sample training course video frame sequence can be understood as the accumulated calorie value at the time point of each frame image in the sequence, together with the time points and durations of the rest periods in the sequence. The sample user physical sign information in the sample training course video frame sequence can be understood as the physical sign information of the training user in that sequence, for example: the user's height, age, weight, gender, limb length, limb ratio, movable joint points, and other related information.
In the embodiment of the invention, a sample training course video is acquired from the historical training video library and split into the sample training course video frame sequence; the training data corresponding to the sequence and the sample user physical sign information are acquired at the same time, so that the action completion rate of each frame image in the sequence can be calculated.
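For illustration, splitting an acquired course video into a frame sequence ordered by shooting time can be done with OpenCV; the choice of library is an assumption of this sketch:

```python
import cv2  # OpenCV

def split_into_frames(video_path):
    """Split a training course video into a list of frame images,
    ordered by shooting time."""
    frames, capture = [], cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()  # returns (success flag, image)
        if not ok:
            break
        frames.append(frame)
    capture.release()
    return frames
```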
In the embodiment of the present invention, the rest periods that divide the sample training course video frame sequence can be understood as sub-sequences in which motion is negligible or in which no user appears in the picture. An intermittent rest period can also be treated as a division point: the whole video frame sequence is split into two parts whose calories are calculated separately and finally accumulated into the total calories consumed over the whole sequence.
Step 220, determining the action completion rate of each frame image in the sample training course video frame sequence according to the action characteristics of each frame image in the sequence, the action characteristics of the training actions in the standard training action library, the sample user physical sign information in the sequence, and the trainer's physical sign information in the standard training action library;
in the embodiment of the present invention, the motion characteristics of each frame of image in the sample training course video frame sequence may be understood as the motion attribute information of the training motion of the user on each frame of image in the sample training course video frame sequence. The action completion rate of each frame of image in the sample training course video frame sequence can be understood as the completion percentage of the matched action in the action completion standard training action library of each frame of image in the sample training course video frame sequence calculated according to the sample user sign information and the trainer sign information.
In the embodiment of the invention, the action of each frame of image in the sample user training course video frame sequence is determined according to the sample user sign information and the sample user training course video frame sequence. According to the sign information of the sample user and the information of the trainee in the standard training action library, determining a completion parameter corresponding to the completion of the training action in the standard training action library by the sample user; and determining the action completion rate of each frame of image in the sample user training course video frame sequence according to the action completion parameters of each frame of image in the sample user training course video frame sequence and the completion parameters corresponding to the training action in the sample user completion standard training action library.
Step 230, determining calories consumed by the training action change from the previous frame image to the next frame image in the two adjacent frames of images in the sample training course video frame sequence according to the training data corresponding to the sample training course video frame sequence and the sample training course video frame sequence;
in the embodiment of the invention, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image of two adjacent frames in the sample training course video frame sequence can be understood as the calorie value the sample user consumed in making that action change.
In the embodiment of the invention, the training data corresponding to the sample training course video frame sequence record the accumulated calorie value at the time point of each frame image; subtracting the accumulated calorie value at the previous frame's time point from the accumulated calorie value at the next frame's time point gives the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image.
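A small sketch of this label derivation, assuming the training data supply one accumulated calorie reading per frame time point:

```python
def pair_calorie_labels(cumulative_calories):
    """Per-pair training labels: the calories consumed changing from
    frame i-1 to frame i equal the difference of the accumulated
    calorie values at the two frames' time points."""
    return [cumulative_calories[i] - cumulative_calories[i - 1]
            for i in range(1, len(cumulative_calories))]

# Accumulated readings [0.0, 0.4, 1.1, 1.5] yield the per-pair
# labels [0.4, 0.7, 0.4].
```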
Step 240, inputting the two adjacent frame images of the sample training course video frame sequence, the sample user physical sign information in the sequence, the action completion rates corresponding to the two adjacent frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image into the deep learning network model to obtain the calorie estimation model.
In the embodiment of the invention, the deep learning network model can be understood as a machine learning algorithm that can learn the intrinsic rules and representation levels of the sample data. The deep learning network model can adopt, but is not limited to, linear regression, polynomial regression, ridge regression, neural networks, and other machine learning models.
In the embodiment of the invention, the two adjacent frame images of the sample training course video frame sequence, the sample user physical sign information in the sequence, the action completion rates corresponding to the two adjacent frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image are input into the deep learning network model; the model analyzes the intrinsic rules and characteristics relating these inputs and is trained on them, yielding the calorie estimation model.
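Since ridge regression is among the model families the text names, a minimal training sketch with scikit-learn's `Ridge` follows; the flattened feature layout and the random placeholder arrays are assumptions standing in for the real sample inputs and labels:

```python
import numpy as np
from sklearn.linear_model import Ridge

# X: one row per adjacent frame pair - flattened features of both
# frames, the sample user's physical sign information, and the two
# completion rates. y: the per-pair calorie labels (see the label
# derivation above). Shapes are illustrative only.
rng = np.random.default_rng(0)
X = rng.random((500, 40))   # placeholder for real pair features
y = rng.random(500)         # placeholder for real calorie labels

calorie_model = Ridge(alpha=1.0).fit(X, y)
pair_estimate = calorie_model.predict(X[:1])  # calories for one pair
```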
Optionally, the user calorie estimation model is obtained by inputting training data corresponding to the sample user body test training course video frame sequence, the sample user body test training course video frame sequence and the sample user sign information into the deep learning network model.
Optionally, the simple calorie estimation model is obtained by inputting training data corresponding to the sample user training course video frame sequence, the sample user training course video frame sequence and the sample user sign information into the deep learning network model.
In the embodiment of the invention, depending on which data can be acquired under the current conditions, an appropriate estimation model can be selected from among the calorie estimation model, the user calorie estimation model, and the simple calorie estimation model to perform the calorie estimation.
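A toy dispatch over the three estimators, with hypothetical availability flags, since the patent does not spell out the selection rule:

```python
def select_estimator(has_full_pair_features, has_body_test_data):
    """Pick whichever of the three estimators the available data
    can feed (hypothetical decision order)."""
    if has_full_pair_features:
        return "calorie_estimation_model"
    if has_body_test_data:
        return "user_calorie_estimation_model"
    return "simple_calorie_estimation_model"
```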
Further, before obtaining the total calories consumed over the training course video frame sequence from the calories consumed in changing, for every frame image except the first, from the training action on the previous frame image to the training action on the next frame image, the method further includes:
and if the action characteristics of the first frame image in the training course video frame sequence are matched with the training starting action in the standard training action library, counting calories corresponding to the training starting action in the standard training action library into the total consumed calories of the training course video frame sequence.
In the embodiment of the invention, the training starting action in the standard training action library can be understood as the starting action of the standard training course. The calories corresponding to the training starting action can be understood as the calories the trainer consumes in moving from a natural standing position into the training starting action.
In the embodiment of the invention, when the total calories consumed over the training course video frame sequence are taken as the total calories of the user's course training, the action characteristics of the training action on the first frame image of the sequence are matched against the action characteristics of the training starting action in the standard training action library; if they match, the calories corresponding to the training starting action are counted into the total calories consumed over the training course video frame sequence.
Further, before obtaining the total calories consumed over the training course video frame sequence from the calories consumed in changing, for every frame image except the first, from the training action on the previous frame image to the training action on the next frame image, the method further includes:
if the action characteristics of the first frame image in the training course video frame sequence do not match the training starting action in the standard training action library, counting the calories corresponding to the action in the standard training action library matched according to the action characteristics of the first frame image into the total calories consumed over the training course video frame sequence.
In the embodiment of the invention, when the total calories consumed over the training course video frame sequence are taken as the total calories of the user's course training, the action characteristics of the training action on the first frame image of the sequence are matched against the action characteristics of the training starting action in the standard training action library; if they do not match, the first frame's action characteristics are matched against the other actions in the standard training action library, and the calories corresponding to the matched action are counted into the total calories consumed over the sequence. The calories corresponding to the matched action can be understood as the calories the trainer consumes in moving from a natural standing position into that action.
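A sketch combining the two first-frame cases above; the `match_fn` callback (one candidate is the matching sketch in Example one) and the `calories_from_standing` field are hypothetical names:

```python
def initial_calories(first_frame_features, start_action, action_library,
                     match_fn):
    """Calories counted before the pairwise accumulation begins,
    covering the move from natural standing into the first pose.
    match_fn(features, actions) returns a matched action or None."""
    if match_fn(first_frame_features, [start_action]) is not None:
        # First frame shows the library's training starting action.
        return start_action["calories_from_standing"]
    # Otherwise count whichever library action the first frame matches.
    matched = match_fn(first_frame_features, action_library)
    return matched["calories_from_standing"] if matched else 0.0
```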
In this embodiment, a sample training course video frame sequence, the training data corresponding to it, and the sample user physical sign information in it are acquired; the action completion rate of each frame image in the sequence is determined according to the action characteristics of each frame image, the action characteristics of the training actions in the standard training action library, the sample user physical sign information, and the trainer's physical sign information in the standard training action library; the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image of two adjacent frames are determined from the sequence and its training data; and the two adjacent frame images, the sample user physical sign information, the action completion rates corresponding to the two adjacent frame images, and the calories consumed by the action change are input into the deep learning network model to obtain the calorie estimation model. This solves the problem that, without heart rate acquisition equipment, the calories of an intelligent training plan or a self-compiled course cannot be estimated in advance, and achieves accurate estimation of the calorie value of a user's training course from the user physical sign information and the training course video frame sequence.
Example three
Fig. 3 shows a calorie estimation apparatus according to the third embodiment of the present invention, the apparatus including: a completion rate determining module 310, a calorie determining module 320, and a total consumption confirming module 330;
the completion rate determining module 310 is configured to obtain, according to the user sign information, the training course video frame sequence of the user, and the standard training action library, an action completion rate of each frame of image in the training course video frame sequence;
the calorie determining module 320 is configured to input the user physical sign information, two adjacent frame images of the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images into a calorie estimation model, to obtain the calories consumed in changing from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image; the calorie estimation model is obtained by training a deep learning network model with sample user physical sign information, two adjacent frame images of a sample training course video frame sequence, the action completion rates corresponding to those two frame images, and the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image;
the total consumption confirming module 330 is configured to obtain the total calories consumed over the training course video frame sequence by accumulating, for every frame image except the first, the calories consumed in changing from the training action on the previous frame image to the training action on the next frame image.
Further, the completion rate determining module 310 is specifically configured to:
determining, according to the physical sign information of the trainer in the standard training action library and the user physical sign information, the proportional parameters corresponding to the user completing the training actions in the standard training action library;
determining completion parameters of standard training actions matched by the user according to the action characteristics of each frame of image in the training course video frame sequence, the action characteristics of the standard training actions in the standard training action library and the proportion parameters corresponding to the training actions in the standard training action library completed by the user;
and determining the completion rate of the user completing the action in each frame of image in the training course video frame sequence according to the completion parameter of the standard training action matched by the user and the completion parameter of the action in each frame of image in the training course video frame sequence.
Further, the completion rate determining module 310 is specifically configured to:
and determining the proportional parameters corresponding to the user completing the training actions in the standard training action library according to the height, weight, limb ratio, and movable joint points in the user physical sign information and the height, weight, limb ratio, and movable joint points in the physical sign information of the trainer in the standard training action library.
Further, the completion rate determining module 310 is specifically configured to:
determining, from the training course video frame sequence, the action characteristics of each frame image and the completion parameter of the user's action;
and if the action characteristics of a frame image in the training course video frame sequence match the action characteristics of a standard training action in the standard training action library, determining the completion parameter of the standard training action matched by the user according to the completion parameter of that standard training action, the proportional parameter with which the user completes the training actions in the standard training action library, and the user's physical sign information.
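Under the assumption that action characteristics are fixed-length feature vectors, the matching step might be a thresholded nearest-neighbour search over the library; a sketch, with all names and the threshold value hypothetical:

```python
import numpy as np

def match_standard_action(frame_features: np.ndarray,
                          action_library: dict,
                          threshold: float = 0.2):
    """Hypothetical matching of one frame's action features against the
    standard training action library (name -> feature vector assumed)."""
    best_name, best_dist = None, float("inf")
    for name, std_features in action_library.items():
        dist = float(np.linalg.norm(frame_features - std_features))
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Accept only sufficiently close matches; otherwise report no match.
    return best_name if best_dist <= threshold else None
```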
Further, the calorie determination module 320 is specifically further configured to:
acquiring a sample training course video frame sequence, training data corresponding to the sample training course video frame sequence and sample user physical sign information in the sample training course video frame sequence;
determining the action completion rate of each frame image in the sample training course video frame sequence according to the action characteristics of each frame image in the sample training course video frame sequence, the action characteristics of the training actions in the standard training action library, the sample user physical sign information, and the physical sign information of the trainer in the standard training action library;
determining, according to the sample training course video frame sequence and its corresponding training data, the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image for each pair of adjacent frame images in the sample training course video frame sequence;
and inputting the two adjacent frame images in the sample training course video frame sequence, the sample user physical sign information, the action completion rates corresponding to the two adjacent frame images, and the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image into a deep learning network model for training, so as to obtain the calorie estimation model.
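The embodiment does not say how the per-change calorie labels are derived from the training data. One plausible construction, assuming the sample training data includes a time-stamped cumulative calorie measurement (for example from a heart rate device worn during sample collection), is sketched below:

```python
import numpy as np

def per_change_calorie_labels(frame_times: np.ndarray,
                              measured_times: np.ndarray,
                              measured_calories: np.ndarray) -> np.ndarray:
    """Hypothetical label construction: interpolate a cumulative calorie
    curve at the frame timestamps, then difference it per adjacent pair."""
    cumulative = np.interp(frame_times, measured_times, measured_calories)
    return np.diff(cumulative)  # one label per adjacent-frame action change
```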
Further, the total consumption confirming module 330 is further specifically configured to:
and if the action characteristics of the first frame image in the training course video frame sequence match the training starting action in the standard training action library, counting the calories corresponding to the training starting action in the standard training action library into the total calories consumed by the training course video frame sequence.
Further, the total consumption confirming module 330 is further specifically configured to:
if the action characteristics of the first frame image in the training course video frame sequence match the training starting action in the standard training action library, calculating, according to the action characteristics of the first frame image, the calories corresponding to the matched action in the standard training action library, and counting them into the total calories consumed by the training course video frame sequence.
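A hedged sketch of the first-frame handling described in the two optional branches above; the threshold-based match and the completion-rate scaling are assumptions, not requirements of the embodiment:

```python
import numpy as np

def first_frame_credit(first_frame_features: np.ndarray,
                       start_features: np.ndarray,
                       start_calories: float,
                       completion_rate: float = 1.0,
                       threshold: float = 0.2) -> float:
    """Hypothetical first-frame handling: if the first frame matches the
    training starting action, credit that action's calories, optionally
    scaled by the first frame's action characteristics (one reading of
    the second branch above)."""
    if float(np.linalg.norm(first_frame_features - start_features)) <= threshold:
        return start_calories * completion_rate
    return 0.0  # first frame does not match the starting action
```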
The calorie estimation apparatus provided by the embodiment of the present invention can execute the calorie estimation method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects of the executed method.
Example IV
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention, and illustrates a block diagram of an exemplary electronic device 12 suitable for implementing embodiments of the present invention. The electronic device 12 shown in Fig. 4 is only an example and does not limit the functions or scope of use of the embodiments of the present invention.
As shown in FIG. 4, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in Fig. 4, and commonly referred to as a "hard drive"). Although not shown in Fig. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, implementing a calorie estimation method provided by an embodiment of the present invention, the method including:
obtaining the action completion rate of each frame image in a training course video frame sequence according to the user's physical sign information, the user's training course video frame sequence, and a standard training action library;
inputting the user's physical sign information, two adjacent frame images in the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images into a calorie estimation model to obtain the calories consumed by the change from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image; the calorie estimation model is obtained by training a deep learning network model on sample user physical sign information, two adjacent frame images in a sample training course video frame sequence, the action completion rates corresponding to the two adjacent frame images, and the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image;
and obtaining the total calories consumed by the training course video frame sequence by accumulating, for each frame image other than the first frame image, the calories consumed by the change from the training action on the previous frame image to the training action on that frame image.
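Taken together, these steps might be driven by a loop like the following sketch, where `model` is any callable returning the calories for one action change (for example, the trained calorie estimation model above) and `first_frame_credit` carries the optional starting-action calories; all names are hypothetical:

```python
def estimate_total_calories(model, user_signs, frames, rates,
                            first_frame_credit: float = 0.0) -> float:
    """Hypothetical accumulation step: query the model once per adjacent
    frame pair and sum, adding any starting-action credit for frame 0."""
    total = first_frame_credit
    for i in range(1, len(frames)):
        total += float(model(user_signs, frames[i - 1], frames[i],
                             rates[i - 1], rates[i]))
    return total
```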
Example V
An embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a calorie estimation method, the method comprising:
obtaining the action completion rate of each frame image in a training course video frame sequence according to the user's physical sign information, the user's training course video frame sequence, and a standard training action library;
inputting the user's physical sign information, two adjacent frame images in the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images into a calorie estimation model to obtain the calories consumed by the change from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image; the calorie estimation model is obtained by training a deep learning network model on sample user physical sign information, two adjacent frame images in a sample training course video frame sequence, the action completion rates corresponding to the two adjacent frame images, and the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image;
and obtaining the total calories consumed by the training course video frame sequence by accumulating, for each frame image other than the first frame image, the calories consumed by the change from the training action on the previous frame image to the training action on that frame image.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A calorie estimation method, comprising:
obtaining the action completion rate of each frame image in a training course video frame sequence according to the user's physical sign information, the user's training course video frame sequence, and a standard training action library;
inputting the user's physical sign information, two adjacent frame images in the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images into a calorie estimation model to obtain the calories consumed by the change from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image; the calorie estimation model is obtained by training a deep learning network model on sample user physical sign information, two adjacent frame images in a sample training course video frame sequence, the action completion rates corresponding to the two adjacent frame images, and the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image;
and obtaining the total calories consumed by the training course video frame sequence by accumulating, for each frame image other than the first frame image, the calories consumed by the change from the training action on the previous frame image to the training action on that frame image.
2. The method of claim 1, wherein obtaining the action completion rate of each frame image in the training course video frame sequence according to the user's physical sign information, the user's training course video frame sequence, and a standard training action library comprises:
determining, according to the physical sign information of the trainer in the standard training action library and the physical sign information of the user, the proportional parameter with which the user completes the training actions in the standard training action library;
determining the completion parameter of the standard training action matched by the user according to the action characteristics of each frame image in the training course video frame sequence, the action characteristics of the standard training actions in the standard training action library, and the proportional parameter with which the user completes the training actions in the standard training action library;
and determining the completion rate of the action the user completes in each frame image of the training course video frame sequence according to the completion parameter of the matched standard training action and the completion parameter of the action in that frame image.
3. The method of claim 2, wherein determining, according to the physical sign information of the trainer in the standard training action library and the physical sign information of the user, the proportional parameter with which the user completes the training actions in the standard training action library comprises:
determining the proportional parameter with which the user completes the training actions in the standard training action library according to the height, weight, limb proportions, and movable joint points in the user's physical sign information and the corresponding height, weight, limb proportions, and movable joint points in the physical sign information of the trainer in the standard training action library.
4. The method of claim 2, wherein determining the completion parameter of the standard training action matched by the user according to the action characteristics of each frame image in the training course video frame sequence, the action characteristics of the standard training actions in the standard training action library, and the proportional parameter with which the user completes the training actions in the standard training action library comprises:
determining, from the training course video frame sequence, the action characteristics of each frame image and the completion parameter of the user's action;
and if the action characteristics of a frame image in the training course video frame sequence match the action characteristics of a standard training action in the standard training action library, determining the completion parameter of the standard training action matched by the user according to the completion parameter of that standard training action, the proportional parameter with which the user completes the training actions in the standard training action library, and the user's physical sign information.
5. The method of claim 1, wherein obtaining the calorie estimation model by training a deep learning network model on the sample user physical sign information, the two adjacent frame images, the action completion rates corresponding to the two adjacent frame images, and the calories consumed by the change from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image comprises:
acquiring a sample training course video frame sequence, training data corresponding to the sample training course video frame sequence and sample user physical sign information in the sample training course video frame sequence;
determining the action completion rate of each frame image in the sample training course video frame sequence according to the action characteristics of each frame image in the sample training course video frame sequence, the action characteristics of the training actions in the standard training action library, the sample user physical sign information, and the physical sign information of the trainer in the standard training action library;
determining, according to the sample training course video frame sequence and its corresponding training data, the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image for each pair of adjacent frame images in the sample training course video frame sequence;
and inputting the two adjacent frame images in the sample training course video frame sequence, the sample user physical sign information, the action completion rates corresponding to the two adjacent frame images, and the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image into a deep learning network model for training, so as to obtain the calorie estimation model.
6. The method of claim 1, wherein before obtaining the total calories consumed by the training course video frame sequence according to the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image, the method further comprises:
if the action characteristics of the first frame image in the training course video frame sequence match the training starting action in the standard training action library, counting the calories corresponding to the training starting action in the standard training action library into the total calories consumed by the training course video frame sequence.
7. The method of claim 1, wherein before obtaining the total calories consumed by the training course video frame sequence according to the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image, the method further comprises:
if the action characteristics of the first frame image in the training course video frame sequence match the training starting action in the standard training action library, calculating, according to the action characteristics of the first frame image, the calories corresponding to the matched action in the standard training action library, and counting them into the total calories consumed by the training course video frame sequence.
8. A calorie estimation device, comprising:
a completion rate determining module, configured to obtain the action completion rate of each frame image in the training course video frame sequence according to the user's physical sign information, the user's training course video frame sequence, and a standard training action library;
a calorie determining module, configured to input the user's physical sign information, two adjacent frame images in the training course video frame sequence, and the action completion rates corresponding to the two adjacent frame images into a calorie estimation model to obtain the calories consumed by the change from the training action on the previous frame image of the two adjacent frame images to the training action on the next frame image; the calorie estimation model is obtained by training a deep learning network model on sample user physical sign information, two adjacent frame images in a sample training course video frame sequence, the action completion rates corresponding to the two adjacent frame images, and the calories consumed by the change from the training action on the previous frame image to the training action on the next frame image;
and a total consumption confirming module, configured to obtain the total calories consumed by the training course video frame sequence by accumulating, for each frame image other than the first frame image, the calories consumed by the change from the training action on the previous frame image to the training action on that frame image.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the calorie estimation method of any one of claims 1-7.
10. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the calorie estimation method of any one of claims 1-7.
CN202011589806.0A, filed 2020-12-29: Calorie estimation method, device, equipment and storage medium (Pending, published as CN112686138A)

Priority Applications (1)
CN202011589806.0A, priority date 2020-12-29, filing date 2020-12-29: Calorie estimation method, device, equipment and storage medium

Publications (1)
CN112686138A, published 2021-04-20

Family ID: 75454811

Family Applications (1)
CN202011589806.0A (Pending)

Country Status (1)
CN: CN112686138A

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination