CN110418205A - Body-building teaching method, device, equipment, system and storage medium - Google Patents
Body-building teaching method, device, equipment, system and storage medium
- Publication number
- CN110418205A (application number CN201910599390.1A / CN201910599390A)
- Authority
- CN
- China
- Prior art keywords
- user
- coach
- video frame
- action
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/254—Management at additional data server, e.g. shopping server, rights management server
- H04N21/2543—Billing, e.g. for subscription services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Business, Economics & Management (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Educational Administration (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Marketing (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Educational Technology (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Electrically Operated Instructional Devices (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The disclosure provides a fitness teaching method, apparatus, device, and computer-readable storage medium. The method comprises: acquiring an action video frame of a user, the action video frame comprising a video frame generated when the user moves; acquiring a coach video frame played for the user and a learning position corresponding to the user, the learning position being the display position of the user action in the coach video frame; synthesizing the user action in the action video frame of the user onto the corresponding learning position in the coach video frame played for the user, to generate an enhanced coach video frame; and playing the enhanced coach video frame for the user, thereby providing a high-quality way of exercising and keeping fit.
Description
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a fitness teaching method, apparatus, device, system, and computer-readable storage medium.
Background
With the improvement of living standards and people's growing awareness of sports, more and more people are willing to invest time and energy in fitness or in different sports such as basketball and yoga; with the development of science and technology, learning fitness or other sports through virtual teaching, such as video teaching, has become more and more common. However, compared with real, in-person teaching, the effect of current virtual teaching still needs to be improved: the interactivity is weak, and it is difficult for the learner to obtain the real feeling of being at the fitness site.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a fitness teaching method, apparatus, device, system, and computer-readable storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a fitness teaching method, the method comprising:
acquiring a motion video frame of a user; the motion video frame comprises a video frame generated when the user moves;
acquiring a coach video frame played for the user and a learning position corresponding to the user; the learning position is a display position of the user action in the coach video frame;
synthesizing the user action in the action video frame of the user to a corresponding learning position in a coach video frame played for the user to generate an enhanced coach video frame;
playing the enhanced coach video frame for the user.
Optionally, the coach video frame is a video frame comprising a coach demonstration action;
the method further comprises:
acquiring a coach video frame corresponding to the action video frame of the user; wherein the time at which the action video frame of the user is acquired is the same as the time at which the coach video frame is played, or is within a preset time difference of it;
and comparing the user action in the action video frame of the user with the coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action.
Optionally, the comparing the user action in the user action video frame with the coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action includes:
acquiring coordinate data of the skeleton characteristic points of the user from the action video frame of the user as user action data, and acquiring coordinate data of the skeleton characteristic points of the coach from the coach video frame as coach demonstration action data;
generating an evaluation score that measures the user action based on a difference between the user action data and the coach demonstration action data.
Optionally, the generating an assessment score that measures the user action based on the difference between the user action data and the coach demonstration action data comprises:
acquiring all user vector included angles based on the user action data, and acquiring all standard vector included angles based on the coach demonstration action data; the user vector included angle is an included angle of two vectors formed by coordinate data of any three adjacent skeleton feature points corresponding to the user; the standard vector included angle is an included angle of two vectors formed by coordinate data of any three adjacent skeleton feature points corresponding to the coach;
determining similarity parameters according to all user vector included angles and all standard vector included angles, and accordingly determining evaluation scores corresponding to the user actions; the similarity parameter at least comprises a standard deviation result or a variance result of the included angle of the user vector and the included angle of the standard vector.
Optionally, the determining the similarity parameter according to all user vector included angles and all standard vector included angles includes:
calculating a difference vector of each corresponding pair of standard vector included angle and user vector included angle; wherein the number of standard vector included angles and the number of user vector included angles are both N (N is an integer greater than 0), the i-th (1 ≤ i ≤ N) standard vector included angle is α_i, the i-th user vector included angle is β_i, and the difference vector is Δα_i, so that Δα_i = α_i − β_i;
calculating an average difference vector from all the difference vectors; wherein, denoting the average difference vector by Δr, Δr = (1/N) · Σ_{i=1}^{N} Δα_i;
calculating the similarity parameter using all the difference vectors and the average difference vector; wherein, denoting the similarity parameter by S, S = √((1/N) · Σ_{i=1}^{N} (Δα_i − Δr)²) (a standard deviation result), or S = (1/N) · Σ_{i=1}^{N} (Δα_i − Δr)² (a variance result).
Optionally, the method further comprises:
synthesizing the evaluation score into the enhanced coach video frame.
Optionally, the synthesizing the evaluation score into the enhanced coach video frame comprises:
if the same coach video frame is played for a plurality of users at the same time, and corresponding evaluation scores are respectively generated based on the action video frames of the users, a plurality of evaluation scores are synthesized into the enhanced coach video frame based on the ranking result of the evaluation scores.
Optionally, the method further comprises:
receiving a selection request sent by a user; the selection request includes at least one of: an identification of a user-selected coach video or an identification of a user-selected learning location.
Optionally, the coach video is a video comprising coach demonstration actions; the coach video comprises a template coach video and an enhanced coach video; wherein the template coach video does not synthesize any user's action video frames, and the enhanced coach video is pre-synthesized with one or more user's action video frames.
Optionally, the obtaining a coach video frame played for the user includes:
obtaining a coach video frame played for the user based on a start time and a current time of playing a coach video for the user.
Optionally, each coaching video includes one or more learning locations;
before said generating the enhanced coach video frame, further comprising:
if the same coach video is played for a plurality of users at the same time and the coach video still has unselected learning positions, selecting, from the users who did not select a learning position, users to fill the unselected learning positions, receiving the action video frames of the selected users, and synthesizing the user actions corresponding to the selected users onto the unselected learning positions in the coach video.
Optionally, the user corresponds to a user account; the learning position corresponds to asset information;
the body-building teaching method further comprises the following steps:
and acquiring asset information corresponding to the learning position selected by the user, and initiating asset processing operation on the user account according to the asset information.
Optionally, the acquiring the motion video frame of the user includes:
detecting a human body in a video frame shot by a camera;
and based on the detected human body, performing background segmentation on the video frame to extract the human body, and generating a motion video frame only comprising human body motion.
According to a second aspect of embodiments of the present disclosure, there is provided a fitness teaching device, the device comprising:
the action video frame acquisition module is used for acquiring an action video frame of a user; the motion video frame comprises a video frame generated when the user moves;
the acquisition module is used for acquiring a coach video frame played for the user and a learning position corresponding to the user; the learning position is a display position of the user action in the coach video frame;
the enhanced coach video frame generation module is used for synthesizing the user action in the action video frame of the user to a corresponding learning position in a coach video frame played for the user to generate an enhanced coach video frame;
the enhanced coach video frame playing module is used for playing the enhanced coach video frame for the user.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein,
the processor is configured to perform the operations of the method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program, which, when executed by one or more processors, causes the processors to perform the operations in the method as described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
In the method, according to the learning position corresponding to the user, the user action in the action video frame of the user can be synthesized onto that learning position in the coach video frame played for the user, thereby generating the enhanced coach video frame.
According to the method and the device, the user can freely select the fitness video and the learning position based on his or her own needs, so that a selection request is generated; the selection request comprises the identification of the coach video selected by the user or the identification of the learning position selected by the user, which improves the user's experience.
In the disclosure, if the same coach video is played for multiple users at the same time and the coach video has unselected learning positions, the server may further select users corresponding to the unselected learning positions from the users who do not select the learning positions, so as to receive the motion video frames sent by the selected users, and synthesize the user motions corresponding to the selected users to the unselected learning positions in the coach video, so that the users who do not select the learning positions can see their own motions, and the use experience of the users is improved.
In the present disclosure, the user action in the action video frame may be compared with the coach demonstration action in the coach video frame, an evaluation score for measuring the user action may be generated and sent to the user side, and the evaluation score may be synthesized into the enhanced coach video frame, so that the user can clearly judge how standard his or her actions are, which improves the user's experience.
In the present disclosure, if the same coach video frame is played for a plurality of users at the same time, and corresponding evaluation scores are generated based on the motion video frames of the plurality of users, the plurality of evaluation scores are synthesized into the enhanced coach video frame based on the ranking result of the evaluation scores, so that the user can see not only his own score but also scores of other users, thereby enhancing the interactivity of fitness.
In the disclosure, the user corresponds to a user account, the learning position corresponds to asset information, and when the asset information corresponding to the learning position selected by the user is obtained, asset processing operation can be initiated on the user account according to the asset information, so that corresponding economic value is provided for a merchant.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a block diagram of a fitness teaching system according to an exemplary embodiment of the present disclosure.
FIG. 2 is a schematic diagram of one of the frames of a template coach video shown in accordance with an exemplary embodiment of the present disclosure.
FIG. 3 is a schematic diagram illustrating one frame of an enhanced coaching video according to an exemplary embodiment of the present disclosure.
FIG. 4 is a schematic diagram illustrating one frame of another enhanced coach video according to an exemplary embodiment of the present disclosure.
FIG. 5 is a block diagram illustrating a second type of fitness teaching system according to an exemplary embodiment of the present disclosure.
FIG. 6 illustrates skeletal feature points and vector angles of a human body according to an exemplary embodiment of the present disclosure.
FIG. 7 is a block diagram illustrating a third type of workout teaching system according to an exemplary embodiment of the present disclosure.
FIG. 8 is a flow chart diagram illustrating a method of fitness teaching according to an exemplary embodiment of the present disclosure.
FIG. 9 is a schematic diagram illustrating the structure of a fitness teaching device according to an exemplary embodiment of the present disclosure.
FIG. 10 is an architectural diagram illustrating an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Fitness teaching in the related art generally takes one of the following forms: first, a learner obtains a coach video, i.e. a video including coach demonstration actions, and then follows along while watching the coach's demonstration in the video; second, one or more learners and a coach access the same video channel, where the learners follow the coach's real-time demonstration, and communication among the learners and between the learners and the coach is possible.
In the process of implementing the embodiments of the present disclosure, the inventor found that: the first fitness teaching mode lacks interactivity; a learner may not be able to keep up with the coach's rhythm and cannot tell whether his or her actions are standard, so there is a certain learning difficulty. Although the second fitness teaching mode provides a degree of online guidance from the coach, the learner still cannot obtain the real feeling of being at the fitness site, and cannot train at any time because the training must match the coach's lesson schedule, so the second mode also has certain limitations.
Therefore, to solve the problems in the related art, the embodiments of the present disclosure provide a fitness teaching method. The fitness teaching method may be applied to a local terminal, which may be an electronic device such as a computer, a tablet, a smart television, or a mobile phone; the fitness teaching method may also be applied to a server side, which may be an electronic device capable of providing computing services, such as a server or a computer.
The following description takes as an example a fitness teaching system in which the fitness teaching method is executed by a server side. The fitness teaching system includes the server side and a user side; the user side may be an electronic device with a camera function and an audio/video display function, such as a smart television, a smart phone, a computer, a Personal Digital Assistant (PDA), or a tablet:
referring to fig. 1, fig. 1 is a block diagram of a fitness teaching system according to an exemplary embodiment of the present disclosure.
In the embodiment shown in fig. 1, the system includes a server and a client.
The user side is used for acquiring the action video frame of the user and sending the action video frame to the server side when the coach video selected by the user is played.
The server is used for receiving the action video frames sent by the user side and acquiring the coach video frames played for the user side and the learning position corresponding to the user side; the learning position is a display position of the user action in the coach video frame.
The server is further configured to synthesize the user actions in the action video frames to corresponding learning positions in a coach video frame played for the user side, generate an enhanced coach video frame, and send the enhanced coach video frame to the user side.
The user side is further used for receiving and playing the enhanced coach video frame sent by the server side.
It should be noted that a storage module (a database, a ROM, etc.) is arranged on the server side and is used to store coach videos. A coach video is a video including coach demonstration actions; each coach video may correspond to one or more learning positions, and the specific number may be set according to actual conditions. A learning position is a display position of a user action in the coach video. Referring to fig. 2 and fig. 3, fig. 2 shows a scene in which a coach video corresponds to 4 learning positions, and fig. 3 is a schematic view in which a user action is displayed at one of the learning positions in a coach video frame. In addition, the coach videos include template coach videos and enhanced coach videos: referring to fig. 2, a template coach video is a video into which no user's action video frames have been synthesized; referring to fig. 3, an enhanced coach video is a video into which the action video frames of one or more users have been synthesized in advance, and the user action in fig. 3 was synthesized in advance.
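Purely for illustration, the coach-video records and learning positions described above could be organized in the storage module along the following lines; every field name in this sketch is an assumption and not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LearningPosition:
    # Display position of a user action inside the coach video frame.
    position_id: str
    x: int                              # top-left corner of the region, in pixels
    y: int
    width: int
    height: int
    asset_info: float = 0.0             # value of this position (e.g. points or currency)
    selected_by: Optional[str] = None   # user account currently occupying it, if any

@dataclass
class CoachVideo:
    video_id: str
    path: str
    is_template: bool                   # True: no user actions synthesized yet
    learning_positions: List[LearningPosition] = field(default_factory=list)
```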
In an embodiment, before a user at the user side exercises based on the fitness teaching system disclosed herein, if a coach video needs to be selected, the server side obtains coach videos from a database and pushes the corresponding coach videos to the user side for display. Based on the user's actual needs, the user can select, at the user side, the coach video to be played and the learning position to be displayed, or only one of the two. The user side then detects the user's selection of the coach video and/or the learning position and initiates a selection request to the server side, and the server side receives the selection request sent by the user side and records it accordingly.
Based on the user's selection, the selection request may include the identification of the coach video selected by the user and the identification of the learning position selected by the user, or only the identification of the coach video selected by the user, or only the identification of the learning position selected by the user. The user may select one or more learning positions according to his or her own needs, which is not limited in the embodiments of the present disclosure. In addition, the embodiments of the present disclosure do not limit the type of coach video selected by the user, which may be a template coach video or an enhanced coach video. It can be understood that, provided that user privacy is protected, the embodiments of the present disclosure also do not limit the specific form of the enhanced coach video selected by the user; for example, it may be an enhanced coach video generated the last time the user exercised, or an enhanced coach video shared by other users.
Optionally, the user may not select a coach video and/or a learning position to be played, and the service end determines the coach video and/or the learning position; for example, the user clicks a random play key (which may be a virtual key or a physical key) on the user side, the user side sends a random play request, and then the service side randomly determines a coach video and a learning position for the user; or the server can also automatically determine a coach video and a learning position according to historical playing data of the user or user preference and the like.
In a possible implementation manner, the user side corresponds to a user account, and the server side can associate the coach videos (template coach videos and enhanced coach videos) selected and played by the user side, the generated enhanced coach videos, and the selected learning positions with the user account. This makes it convenient for the user to obtain the associated coach videos later, helps the user review the learning process, and helps improve the accuracy of actions in subsequent exercise. Moreover, if the user selects the same coach video for exercise several times, the learning position associated with the user account can be obtained directly without requiring the user to select it again, which reduces the user's operation steps; of course, the user can still re-select a learning position based on his or her own needs.
In the embodiment of the disclosure, when the user exercises, the user side plays the coach video selected by the user, and the user performs corresponding actions according to the coach's demonstration actions in the played coach video; the user side synchronously shoots the user's actions through the camera to generate action video frames including the user actions, and then sends the action video frames to the server side.
The user side can generate an action video frame including a user action through the following two possible implementation manners:
in one implementation, the camera may be a 2D RGB camera, and the user side acquires an RGB video frame captured by the 2D RGB camera, detects a human body in the RGB video frame, and then performs background segmentation on the video frame based on the detected human body to extract the human body, so as to generate an action video frame including only human body actions.
In another implementation manner, the camera may be a 2D RGB camera and a 3D depth camera, the user side obtains an RGB video frame photographed by the 2D RGB camera and a depth visual field frame photographed by the 3D depth camera, calculates a visual depth field based on the depth visual field frame, detects a human body in the RGB video frame, then segments the detected human body in the RGB video frame from a background according to the visual depth field, and retains a human body foreground, thereby generating an action video frame including only human body actions.
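As a rough illustration of the 2D RGB variant, the sketch below keeps only the moving foreground (assumed to be the user) using OpenCV's MOG2 background subtractor; it is a simplified stand-in for the human-detection and background-segmentation step described above, not the exact processing of the disclosure.

```python
import cv2

def extract_action_frame(frame, subtractor):
    """Keep only the (assumed) human foreground of one camera frame."""
    mask = subtractor.apply(frame)                               # rough foreground mask
    mask = cv2.medianBlur(mask, 5)                               # suppress speckle noise
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels (marked 127)
    return cv2.bitwise_and(frame, frame, mask=mask)              # the "action video frame"

cap = cv2.VideoCapture(0)                                        # 2D RGB camera
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
while True:                                                      # the subtractor needs a few
    ok, frame = cap.read()                                       # frames to learn the background
    if not ok:
        break
    action_frame = extract_action_frame(frame, subtractor)
```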
It can be seen that, in the embodiment of the present disclosure, the user side sends only the action video frames that contain the human body action (i.e., the user action) to the server side, which helps reduce the amount of video data transmitted and thereby increases the transmission speed.
In the embodiment of the present disclosure, after receiving an action video frame sent by the user side, the server side obtains the coach video frame currently being played for the user side. In one example, the user side plays the content of the first coach video frame and shoots, through the camera, the user performing the action corresponding to the coach demonstration action in that first frame. Because the user performs the action based on the first coach video frame, and the user side needs to process the captured video frame and transmit it to the server side, a certain amount of time passes; by the time the user side has uploaded the action video frame corresponding to the first coach video frame to the server side, the first coach video frame has already finished playing at the user side and, for example, the third coach video frame may be playing. Therefore, the server side needs to obtain the coach video frame being played for the user side at that moment and synthesize the action video frame into it, so that the user sees his or her own action in line with the playing progress. It should be noted that, in this case, the coach video frame being played for the user side is not the same as the coach video frame that corresponds to the user action in the action video frame.
In one implementation, since the user side sends a corresponding play request to the server side when it starts playing the selected coach video, and the play request includes a timestamp, the server side can obtain the start time at which the selected coach video began playing for the user side; the server side can then determine the coach video frame currently being played for the user side based on that start time and the current time.
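A minimal sketch of that playback-progress calculation, assuming a constant frame rate; the function and parameter names are illustrative only.

```python
def current_coach_frame_index(start_time: float, current_time: float, fps: float = 25.0) -> int:
    """Index of the coach video frame being played, given the play start time
    (taken from the timestamp in the play request) and the current time."""
    elapsed = max(0.0, current_time - start_time)
    return int(elapsed * fps)

# e.g. 2.4 s after playback started at 25 fps -> frame 60
frame_index = current_coach_frame_index(start_time=100.0, current_time=102.4)
```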
It should be noted that, in the fitness teaching scenario, learning may be one-to-one (that is, one user selects one coach video for learning) or many-to-one (that is, multiple users select the same coach video for learning at the same time), so this embodiment does not limit the number of user sides.
In one possible scenario, if the server side detects that only one user side has currently selected the coach video for playing, the server side receives the action video frame sent by that user side, then obtains the coach video frame being played for the user side at that moment and the learning position corresponding to the user side, where the learning position is the display position of the user action in the coach video frame. Finally, the server side synthesizes the user action in the action video frame onto the corresponding learning position in the coach video frame being played for the user side, generates an enhanced coach video frame, and sends it to the user side so that the user side plays it. The enhanced coach video frame contains both the coach's demonstration action and the user's own action; referring to fig. 3, if the coach video selected by the user is a template coach video, only the coach's demonstration action and the user's action appear in the generated enhanced coach video frame. The embodiments of the present disclosure are based on AR (augmented reality) technology, so that the user can see the coach's demonstration action and his or her own action at the same time.
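One simple way to picture the synthesis step is to paste the segmented user foreground onto the coach frame at the learning position; the NumPy sketch below (assuming a rectangular learning position inside the frame and a black background in the action frame) is illustrative and not taken from the disclosure.

```python
import cv2
import numpy as np

def synthesize(coach_frame, action_frame, x, y, width, height):
    """Overlay the non-black pixels of the action frame onto the coach frame
    at the learning position (x, y, width, height), assumed to lie inside the frame."""
    user = cv2.resize(action_frame, (width, height))
    region = coach_frame[y:y + height, x:x + width]
    mask = user.sum(axis=2, keepdims=True) > 0      # pixels that belong to the user
    region[:] = np.where(mask, user, region)        # keep coach pixels everywhere else
    return coach_frame
```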
It can be understood that the embodiments of the present disclosure do not limit the number of users whose action video frames are uploaded by one user side at the same time; that is, the embodiments are not limited to the scenario where one user side uploads the action video frames of only one user. Two or more learners may learn from the played coach video at the same place; in that case, the user side shoots the users' actions through the camera, can obtain at least two human body actions from each captured video frame, and accordingly generates at least two action video frames and sends them to the server side.
If, before the coach video is played, learning positions corresponding to the number of learners at the user side have been selected, the user side can obtain, based on the captured video frames, a plurality of action video frames each containing only the human body action of a single learner and send them to the server side; the server side receives the plurality of action video frames sent by the user side and synthesizes the human body action in each action video frame onto the corresponding learning position in the coach video frame being played for the user side at that moment, generating an enhanced coach video frame. If the user side selected only one learning position before the coach video is played, the server side selects one action video frame from the plurality of action video frames sent by the user side, synthesizes the human body action in that action video frame onto the corresponding learning position in the coach video frame being played for the user side at that moment, and generates an enhanced coach video frame. As for how the server side selects the action video frame, the embodiments of the present disclosure do not limit this; for example, the selection may be random, or the action video frame corresponding to a user face image pre-stored at the user side may be identified from the action video frames.
It should be noted that, if the user selects to play an enhanced coach video and the selected learning position already contains the synthesized actions of another user, the server side may, after receiving the action video frames sent by the user side, first remove the previously synthesized actions of the other user and then synthesize the user actions in the action video frames of this user side onto the corresponding learning position in the coach video frame being played for the user side at that moment.
In another possible scenario, if the server side detects that multiple user sides are playing the same coach video at the same time, the server side receives the action video frames sent by the multiple user sides, then obtains the coach video frame being played for the user sides and the learning position information corresponding to each user side, and finally synthesizes the user actions in the multiple action video frames onto the corresponding learning positions in the coach video frame being played synchronously for the multiple user sides at that moment, generating enhanced coach video frames that contain the coach's action and the actions of multiple users; see fig. 4. If the coach video the users selected to play is a template coach video, the generated enhanced video frame contains the coach's action and the actions corresponding to the multiple users. In the embodiments of the present disclosure, a user can thus also see the actions of other users, which enhances the interactivity of the exercise, provides a high-quality way of working out together and motivating one another, and improves the users' enthusiasm for exercise. It should be noted that the action video frames sent by the multiple user sides correspond to the same coach video frame.
In a possible implementation manner, the server side may receive only the action video frames of user sides that have selected learning positions, in order to improve receiving efficiency. If it is detected that the same coach video is being played for multiple users at the same time and the coach video still has unselected learning positions, the server side may select, from the user sides that have not selected a learning position, user sides to fill the unselected learning positions, receive the action video frames sent by the selected user sides, and synthesize the user actions corresponding to the selected user sides onto the unselected learning positions in the coach video, thereby enhancing interactivity among users.
For example, the server side may randomly select, from the user sides that have not selected a learning position, a user side for each unselected learning position, or it may make the selection based on a preset rule; the preset rule may be, for example, choosing the user side that has played the largest number of videos among the user sides that have not selected a learning position, or a user side that has selected learning positions for other coach videos.
In the embodiment of the disclosure, after generating an enhanced coach video frame, the server side sends it to the user side, so that the user side receives and plays the enhanced coach video frame. The electronic device in which the user side is integrated may include a display screen, so that the user side can play the enhanced coach video frame on that display screen. It should be noted that the user side uploads the action video frames using streaming media technology, and the server side likewise sends the enhanced coach video frames using streaming media technology, so as to ensure fast transmission of the video frames.
It should be noted that, after the server side generates the enhanced coach video frames corresponding to a coach video, the generated enhanced coach video is stored in the storage module; if the user has a corresponding account, the generated enhanced coach video can be associated with the user's account, which helps the user review the learning process and improve the accuracy of actions in subsequent exercise, and at the same time enriches the coach video resources.
In an embodiment, referring to fig. 2, a learning position may further correspond to asset information, where the asset information represents the value of the learning position; for example, the asset information may be virtual currency, points, or actual currency. When the server side receives the learning position selected by the user, it initiates an asset processing operation on the user account corresponding to the user side according to the asset information corresponding to that learning position, and the user side then executes the asset processing operation initiated by the server side on its user account. For example, the asset processing operation may be an operation of deducting virtual coins, points, or balance from the user account, or of deducting the balance of a bank card or third-party payment account bound to the user account; this provides corresponding economic value to the merchant.
Referring to fig. 5, fig. 5 is a block diagram of a second type of fitness teaching system according to an exemplary embodiment of the present disclosure.
In the embodiment shown in fig. 5, the system includes a server and a client.
The user side is used for acquiring the action video frame of the user and sending the action video frame to the server side when the coach video selected by the user is played.
The server is used for receiving the action video frames sent by the user side and acquiring the coach video frames played for the user side and the learning position corresponding to the user side; the learning position is a display position of the user action in the coach video frame.
The server is further used for synthesizing the user actions in the action video frames to the corresponding learning positions in the currently played coach video frames, generating enhanced coach video frames and sending the enhanced coach video frames to the user side.
The server is further used for acquiring a coach video frame corresponding to the action video frame; the time for the user side to acquire the action video frame is the same as or within a preset time difference with the time for playing the coach video frame; comparing the user action in the action video frame with a coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action; and sending the evaluation score to the user side.
The client is further used for receiving the enhanced coach video frame and the evaluation score sent by the server, and synthesizing the evaluation score into the enhanced coach video frame and playing the enhanced coach video frame.
In an embodiment of the present disclosure, the server side may obtain the coach video frame corresponding to the action video frame for comparison. In one example, the user side plays the content of the first coach video frame, shoots, through the camera, the user performing the action corresponding to the coach demonstration action in that first frame, and uploads the generated action video frame to the server side; this action video frame corresponds to the first coach video frame. In one possible implementation manner, when the user side shoots the user's action through the camera, a corresponding shooting time is usually recorded, and in an ideal case the server side can determine the corresponding coach video frame based on the shooting time of the action video frame; that is, the time at which the user side acquired the action video frame is the same as the time at which it played the coach video frame. In another possible implementation manner, considering that the user has a certain response delay when performing the action, the server side may instead determine the corresponding coach video frame based on the shooting time of the action video frame and a preset time difference; that is, the time at which the user side acquired the action video frame and the time at which it played the coach video frame are within the preset time difference.
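A small sketch of the matching rule just described: choose the coach video frame whose play timestamp is closest to the shooting time of the action frame, and accept it only if the gap is within the preset time difference. The names and tolerance value are assumptions.

```python
from typing import List, Optional

def match_coach_frame(action_ts: float,
                      coach_frame_timestamps: List[float],
                      max_diff: float = 0.3) -> Optional[int]:
    """Return the index of the coach frame matching an action frame, or None
    if no frame falls within the preset time difference (in seconds)."""
    best_i = min(range(len(coach_frame_timestamps)),
                 key=lambda i: abs(coach_frame_timestamps[i] - action_ts))
    if abs(coach_frame_timestamps[best_i] - action_ts) <= max_diff:
        return best_i
    return None
```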
In the embodiment of the disclosure, after obtaining the coach video frame corresponding to the action video frame, the server side may compare the user action in the action video frame with the coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action. Specifically, the server side may obtain, from the action video frame, the coordinate data of the user's skeleton feature points as user action data, and obtain, from the coach video frame, the coordinate data of the coach's skeleton feature points as coach demonstration action data, where the coordinate data of a skeleton feature point may be two-dimensional or three-dimensional; it then generates an evaluation score for measuring the user action according to the difference between the user action data and the coach demonstration action data. This calculation process helps improve the accuracy of the evaluation score. The coach demonstration action data can be offline data acquired in advance, which increases the processing speed of the server side and improves response efficiency.
In a specific implementation manner, the server side may obtain all user vector included angles based on the user action data, and obtain all standard vector included angles based on the coach demonstration action data. A user vector included angle is the angle between the two vectors formed by the coordinate data of any three adjacent skeleton feature points of the user, and a standard vector included angle is the angle between the two vectors formed by the coordinate data of any three adjacent skeleton feature points of the coach; the user vector included angles correspond one-to-one to the standard vector included angles. Referring to fig. 6, which shows 14 skeleton feature points (the black marked points in fig. 6), any three adjacent skeleton feature points form two vectors that determine one vector included angle, for a total of 13 vector included angles. The server side then determines a similarity parameter according to all user vector included angles and all corresponding standard vector included angles, and accordingly determines the evaluation score corresponding to the user action; the similarity parameter at least comprises a standard deviation result or a variance result of the user vector included angles and the standard vector included angles, and the smaller the standard deviation or variance result, the greater the similarity and the higher the evaluation score. The standard vector included angles can be offline data acquired in advance, which increases the processing speed of the server side and improves response efficiency.
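For illustration, computing one vector included angle from three adjacent skeleton feature points might look like the following sketch (coordinates may be 2D or 3D); the helper names and the list of angle triples are assumptions.

```python
import numpy as np

def vector_angle(a, b, c) -> float:
    """Angle in degrees at point b, formed by the vectors b->a and b->c,
    where a, b, c are coordinates of three adjacent skeleton feature points."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def all_angles(points, triples):
    """points: {name: coordinates} of the skeleton feature points;
    triples: list of (a, b, c) name triples, one per vector included angle
    (13 angles for the 14 feature points of Fig. 6)."""
    return [vector_angle(points[a], points[b], points[c]) for a, b, c in triples]
```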
In an example, the calculation of the similarity parameter by the server side may include: calculating a difference vector of each corresponding pair of standard vector included angle and user vector included angle, where the number of standard vector included angles and the number of user vector included angles are both N (N is an integer greater than 0), the i-th (1 ≤ i ≤ N) standard vector included angle is α_i, the i-th user vector included angle is β_i, and the difference vector is Δα_i, so that Δα_i = α_i − β_i; calculating an average difference vector from all the difference vectors, where, denoting the average difference vector by Δr, Δr = (1/N) · Σ_{i=1}^{N} Δα_i; and calculating the similarity parameter using all the difference vectors and the average difference vector, where, denoting the similarity parameter by S, S = √((1/N) · Σ_{i=1}^{N} (Δα_i − Δr)²) (a standard deviation result), or S = (1/N) · Σ_{i=1}^{N} (Δα_i − Δr)² (a variance result).
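Continuing the sketch above, the formulas translate directly into code; the final mapping from the similarity parameter S to an evaluation score is an assumption, since the disclosure only states that a smaller S means a higher score.

```python
import numpy as np

def similarity_parameter(standard_angles, user_angles, use_std=False) -> float:
    """Variance (or standard deviation) of the per-angle differences
    delta_alpha_i = alpha_i - beta_i, following the formulas above."""
    diffs = np.asarray(standard_angles, dtype=float) - np.asarray(user_angles, dtype=float)
    mean_diff = diffs.mean()                       # average difference vector (delta_r)
    s = ((diffs - mean_diff) ** 2).mean()          # variance result
    return float(np.sqrt(s)) if use_std else float(s)

def evaluation_score(standard_angles, user_angles, scale=0.5) -> float:
    # Illustrative mapping only: smaller S -> score closer to 100.
    s = similarity_parameter(standard_angles, user_angles, use_std=True)
    return max(0.0, 100.0 - scale * s)
```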
In one possible scenario, if the server side detects that the coach video is being played for a single user, the server side sends the generated evaluation score to the corresponding user side, so that the user side receives the evaluation score, generates a score image from it, synthesizes the score image with the enhanced coach video frame to generate an enhanced coach video frame that includes the evaluation score, and plays it; in this way, the user can clearly judge how standard his or her actions are, which improves the user's experience.
In another possible scenario, if the server side detects that the same coach video is being played synchronously for multiple user sides, and corresponding evaluation scores are generated based on the action video frames sent by the multiple user sides, the server side sends the evaluation scores of the multiple user sides to each user side. Each user side then receives the multiple evaluation scores, ranks them, generates a score image from the ranked evaluation scores, synthesizes the score image with the enhanced coach video frame to generate an enhanced coach video frame including the evaluation scores, and plays it; the user can thus see not only his or her own score but also the scores of other users, which enhances the interactivity of fitness. It can be understood that, based on the user's selection, the user side may also receive only its own evaluation score for synthesis and playing, which is not limited in this disclosure.
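At the user side, drawing the ranked evaluation scores onto the enhanced coach video frame could be as simple as the sketch below; the layout, font, and score format are assumptions.

```python
import cv2

def overlay_ranked_scores(frame, scores):
    """scores: {user_name: evaluation_score}. Draws a ranked score list
    in the top-left corner of the enhanced coach video frame."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (name, score) in enumerate(ranked, start=1):
        text = f"{rank}. {name}: {score:.1f}"
        cv2.putText(frame, text, (20, 30 * rank), cv2.FONT_HERSHEY_SIMPLEX,
                    0.8, (255, 255, 255), 2, cv2.LINE_AA)
    return frame
```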
Referring to FIG. 7, FIG. 7 is a block diagram illustrating a third type of workout instruction system according to an exemplary embodiment of the present disclosure.
In the embodiment shown in fig. 7, the system includes a server and a client.
The user side is used for acquiring the action video frame of the user and sending the action video frame to the server side when the coach video selected by the user is played.
The server is used for receiving the action video frames sent by the user side and acquiring the coach video frames played for the user side and the learning position corresponding to the user side; the learning position is a display position of the user action in the coach video frame.
The server is further used for synthesizing the user actions in the action video frames to the corresponding learning positions in the currently played coach video frames, generating enhanced coach video frames and sending the enhanced coach video frames to the user side.
The server is further used for acquiring a coach video frame corresponding to the action video frame; the time for the user side to acquire the action video frame is the same as or within a preset time difference with the time for playing the coach video frame; comparing the user action in the action video frame with a coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action; and synthesizing the evaluation scores into the enhanced coach video frames, generating the enhanced coach video frames comprising the evaluation scores and sending the enhanced coach video frames to the user side.
The user side is further used for receiving and playing the enhanced coach video frames, comprising the evaluation scores, sent by the server side.
In an embodiment of the present disclosure, after obtaining a coach video frame corresponding to the action video frame, the server may compare a user action in the action video frame with a coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action.
In one possible implementation scenario, if the server side detects that the coach video is being played for a single user, the server side generates a score image from the evaluation score generated for that user side, synthesizes the score image with the enhanced coach video frame to generate an enhanced coach video frame including the evaluation score, and sends it to the user side, so that the user side receives and plays the enhanced coach video frame including the evaluation score; the user can thus clearly judge how standard his or her actions are, which improves the user's experience.
In another possible implementation scenario, if the server side detects that the same coach video is being played synchronously for multiple user sides and corresponding evaluation scores are generated based on the action video frames sent by the multiple user sides, the server side ranks the evaluation scores, generates a score image from the ranked scores, superimposes and synthesizes the score image with the enhanced coach video frame to generate an enhanced coach video frame including the evaluation scores, and sends it to the user sides, so that each user side receives and plays the enhanced coach video frame including the evaluation scores; the users can thus see not only their own scores but also the scores of other users, which enhances the interactivity of fitness. It can be understood that, based on a user's selection, the server side may also generate an enhanced coach video frame including only that user's evaluation score and send it to that user side, which is not limited in the embodiments of the disclosure.
The following description takes the local terminal to which the body-building teaching method is applied as an example:
as shown in fig. 8, fig. 8 is a flow chart of a method of fitness teaching shown in the present disclosure according to an exemplary embodiment.
In the embodiment shown in fig. 8, the method comprises:
In step S101, an action video frame of a user is acquired; the action video frame comprises a video frame generated when the user moves.
In step S102, obtaining a coach video frame played for the user and a learning position corresponding to the user; the learning position is a display position of the user action in the coach video frame.
In step S103, the user actions in the action video frame of the user are synthesized to the corresponding learning positions in the coach video frame played for the user, so as to generate an enhanced coach video frame.
In step S104, the enhanced coach video frame is played for the user.
In one embodiment, a storage module (a database, a ROM, or the like) may be disposed on the terminal and used to store coach videos. A coach video includes coach demonstration actions, and each coach video may correspond to one or more learning positions; the specific number may be set according to actual conditions, and a learning position is a display position of the user action in the coach video. The coach videos include template coach videos and enhanced coach videos, where a template coach video is a video into which no user's action video frames have been synthesized, and an enhanced coach video is a video into which the action video frames of one or more users have been synthesized in advance. The terminal can be connected to a preset cloud to update the coach videos regularly.
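A minimal sketch of how such stored coach videos and their learning positions might be represented is given below. The class and field names (`CoachVideo`, `LearningPosition`, `display_rect`, `price`) are assumptions for illustration only, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class LearningPosition:
    position_id: str
    display_rect: Tuple[int, int, int, int]   # x, y, width, height inside the coach frame
    price: float = 0.0                        # optional asset information (described later)


@dataclass
class CoachVideo:
    video_id: str
    path: str
    is_template: bool = True                  # template video vs. pre-synthesized enhanced video
    learning_positions: List[LearningPosition] = field(default_factory=list)
```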
In one embodiment, the terminal can push a coach video to the user based on the storage module, and receive a selection request sent by the user when the user wants to exercise, wherein the selection request comprises at least one of the following: an identification of a user-selected coach video or an identification of a user-selected learning location.
Optionally, the user may not select a coach video and/or a learning position to be played, and the terminal determines the coach video and/or the learning position; for example, the user clicks a random play button (which may be a virtual button or a physical button) on the terminal, and then the terminal randomly determines a coach video and a learning position for the user; or the terminal can also automatically determine a coach video and a learning position according to historical playing data of the user or user preference and the like.
In the embodiment of the disclosure, when the user exercises or works out, the terminal plays the coach video selected by the user, the user performs the corresponding actions by following the coach demonstration actions in the played coach video, and the user side synchronously captures the user actions through a camera to generate action video frames including the user actions.
In one implementation, the camera may be a 2D RGB camera. The user side acquires an RGB video frame captured by the 2D RGB camera, detects a human body in the RGB video frame, and then performs background segmentation on the video frame based on the detected human body to extract the human body, thereby generating an action video frame that includes only the human body action.
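As one way to approximate this 2D-only pipeline, the sketch below uses OpenCV's background subtractor to keep only the moving person. This is a simplification of the "detect the human body, then segment the background" step described above (a person detector or segmentation model could be used instead), and the function name `extract_user_rgb` is an assumption.

```python
import cv2

# A running background model: pixels that keep changing (the exercising user)
# come back as foreground, while the static room stays background.
_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)


def extract_user_rgb(rgb_frame):
    """Return an action frame that keeps only the moving person, plus the mask."""
    fg_mask = _subtractor.apply(rgb_frame)          # 0 = background, 255 = foreground
    fg_mask = cv2.medianBlur(fg_mask, 5)            # remove small speckles
    action_frame = cv2.bitwise_and(rgb_frame, rgb_frame, mask=fg_mask)
    return action_frame, fg_mask
```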
In another implementation, the cameras may be a 2D RGB camera and a 3D depth camera. The user side obtains an RGB video frame captured by the 2D RGB camera and a depth frame captured by the 3D depth camera, calculates a visual depth field from the depth frame, and detects a human body in the RGB video frame. It then segments the detected human body from the background in the RGB video frame according to the visual depth field and retains the human body foreground, thereby generating an action video frame that includes only the human body action.
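For the RGB-plus-depth variant, a minimal sketch is shown below under the assumption that the user stands within a known depth band; the bounds `near_m`/`far_m` and the function name are illustrative only.

```python
import numpy as np


def extract_user_rgbd(rgb_frame, depth_frame, near_m=0.5, far_m=2.5):
    """Keep only the pixels whose depth lies in the band where the user stands.

    rgb_frame:   H x W x 3 uint8 image from the 2D RGB camera.
    depth_frame: H x W array of distances in metres from the 3D depth camera.
    near_m / far_m: assumed bounds of the user's standing distance.
    """
    person_mask = (depth_frame > near_m) & (depth_frame < far_m)
    # Black out everything outside the mask so only the human foreground remains.
    action_frame = np.where(person_mask[..., None], rgb_frame, np.zeros_like(rgb_frame))
    return action_frame, person_mask
```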
After the action video frame is generated, the terminal acquires the coach video frame being played for the user at that moment and the learning position corresponding to the user, synthesizes the user action in the user's action video frame to the corresponding learning position in that coach video frame to generate an enhanced coach video frame, and plays the enhanced coach video frame for the user. The terminal can determine which coach video frame is being played for the user from the start time of playing the coach video and the current time. Based on AR (augmented reality) technology, the embodiment of the disclosure thus lets the user see the action he or she performs and the coach demonstration action at the same time, which improves interactivity.
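A minimal sketch of the two operations described here, locating the currently played coach frame from the playback start time and pasting the segmented user action at the learning position, is shown below. The frame rate, the `(x, y)` top-left convention for the learning position, and the function names are assumptions.

```python
import numpy as np


def current_coach_frame_index(start_time_s, now_s, fps=30):
    """Index of the coach frame currently shown, derived from elapsed playback time."""
    return int((now_s - start_time_s) * fps)


def composite_user_action(coach_frame, user_frame, user_mask, learning_pos):
    """Paste the segmented user action onto the coach frame at the learning position.

    learning_pos is taken as the (x, y) top-left corner of the display position,
    and the pasted region is assumed to fit inside the coach frame.
    """
    x, y = learning_pos
    h, w = user_frame.shape[:2]
    region = coach_frame[y:y + h, x:x + w]
    # Coach pixels stay where the mask is empty; user pixels replace them where it is set.
    coach_frame[y:y + h, x:x + w] = np.where(user_mask[..., None], user_frame, region)
    return coach_frame
```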
In an embodiment, the coach video frame is a video frame including a coach demonstration action. The terminal may acquire the coach video frame corresponding to the user's action video frame, where the time at which the user's action video frame is acquired is the same as, or within a preset time difference of, the time at which that coach video frame is played. The terminal then compares the user action in the user's action video frame with the coach demonstration action in the coach video frame, generates an evaluation score for measuring the user action, and synthesizes the evaluation score into the enhanced coach video frame.
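One simple way to pair an action frame with a coach frame under the "same time or within a preset time difference" condition is a nearest-timestamp lookup, sketched below; the tolerance value and the function name are illustrative assumptions.

```python
def matching_coach_frame(action_timestamp, coach_timestamps, max_diff_s=0.05):
    """Index of the coach frame whose playback time is closest to the capture
    time of the user's action frame, or None if nothing is within the tolerance."""
    best_idx = min(range(len(coach_timestamps)),
                   key=lambda i: abs(coach_timestamps[i] - action_timestamp))
    if abs(coach_timestamps[best_idx] - action_timestamp) <= max_diff_s:
        return best_idx
    return None
```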
It should be noted that, after the terminal generates the enhanced coach video frames corresponding to a coach video, the generated enhanced coach video is stored in the storage module, and if the user has a corresponding account, the enhanced coach video can be associated with the user's account. This helps the user review the session afterwards, improves the accuracy of actions in subsequent exercise or fitness sessions, and at the same time enriches the coach video resources.
In an embodiment, a learning position may further correspond to asset information, where the asset information represents the value of the learning position; for example, the asset information may be virtual currency, credits, or actual currency. When the terminal receives the learning position selected by a user, the terminal initiates an asset processing operation on the user's account according to the asset information corresponding to that learning position. For example, the asset processing operation may deduct virtual currency, credits, or balance from the user account, or deduct the balance of a bank card or third-party payment account bound to the user account, thereby providing the merchant with corresponding economic value.
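A minimal sketch of such an asset processing operation is given below, assuming hypothetical `balance` and `price` fields; a real deployment would additionally require authorization, transaction records, and integration with the payment channel.

```python
def initiate_asset_operation(account, learning_position):
    """Deduct the asset amount attached to the selected learning position
    from the user account (hypothetical `balance` and `price` fields)."""
    price = learning_position.price
    if account.balance < price:
        raise ValueError("insufficient balance for the selected learning position")
    account.balance -= price
    return account.balance
```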
In one implementation, the terminal may obtain, from the user's action video frame, the coordinate data of the user's skeleton feature points as user action data, and obtain, from the coach video frame, the coordinate data of the coach's skeleton feature points as coach demonstration action data. The coordinate data of the skeleton feature points are two-dimensional (captured by a 2D camera) or three-dimensional (captured by a combination of 2D and 3D cameras). The terminal then generates an evaluation score for measuring the user action according to the difference between the user action data and the coach demonstration action data.
In one implementation, the terminal obtains all user vector included angles based on the user action data, and obtains all standard vector included angles based on the coach demonstration action data. A user vector included angle is the included angle of the two vectors formed by the coordinate data of any three adjacent skeleton feature points corresponding to the user, and a standard vector included angle is the included angle of the two vectors formed by the coordinate data of any three adjacent skeleton feature points corresponding to the coach. The terminal then determines a similarity parameter from all the user vector included angles and all the standard vector included angles, and from it determines the evaluation score corresponding to the user action. The similarity parameter at least comprises a standard deviation result or a variance result of the user vector included angles and the standard vector included angles; the smaller the standard deviation or variance, the greater the similarity and the higher the evaluation score.
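The included angle of the two vectors formed by three adjacent skeleton feature points can be computed as in the sketch below, assuming the feature-point coordinates are already available from a pose estimator; the function name `joint_angle` is chosen for illustration.

```python
import numpy as np


def joint_angle(p_prev, p_joint, p_next):
    """Included angle, in degrees, of the two vectors formed by three adjacent
    skeleton feature points (2D or 3D coordinates both work)."""
    v1 = np.asarray(p_prev, dtype=float) - np.asarray(p_joint, dtype=float)
    v2 = np.asarray(p_next, dtype=float) - np.asarray(p_joint, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```

For example, an elbow angle would use the shoulder, elbow, and wrist feature points, with the elbow as the middle point.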
In one implementation, the terminal determining the similarity parameter may include: calculating a difference vector of each corresponding standard vector included angle and user vector included angle, wherein the number of standard vector included angles and of user vector included angles is both N (N is an integer greater than 0), the i-th (1 ≤ i ≤ N) standard vector included angle is αi, the i-th user vector included angle is βi, and the difference vector is Δαi, so that Δαi = αi - βi; calculating an average difference vector from all the difference vectors, wherein, denoting the average difference vector by Δr, Δr = (Δα1 + Δα2 + … + ΔαN) / N; and calculating the similarity parameter from all the difference vectors and the average difference vector, wherein, denoting the similarity parameter by S, S = √( Σ(Δαi - Δr)² / N ) (standard deviation) or S = Σ(Δαi - Δr)² / N (variance), with the sum taken over i = 1 to N.
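The formulas above translate directly into the short sketch below. The disclosure does not specify how S is mapped to a concrete score, so `evaluation_score` is only an assumed monotone mapping in which a smaller S yields a higher score.

```python
import numpy as np


def similarity_parameter(standard_angles, user_angles, use_variance=False):
    """Standard deviation (or variance) of the per-angle differences
    Δαi = αi - βi; a smaller value means a closer match to the coach."""
    diffs = np.asarray(standard_angles, dtype=float) - np.asarray(user_angles, dtype=float)
    mean_diff = diffs.mean()                        # the average difference Δr
    variance = float(np.mean((diffs - mean_diff) ** 2))
    return variance if use_variance else float(np.sqrt(variance))


def evaluation_score(similarity, scale=1.0):
    """Assumed mapping to a 0-100 score: a smaller S gives a higher score."""
    return 100.0 / (1.0 + scale * similarity)
```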
In an embodiment, if the same coach video frame is played for a plurality of users at the same time, and corresponding evaluation scores are generated based on the motion video frames of the plurality of users, the terminal may combine a plurality of evaluation scores into the enhanced coach video frame based on the ranking result of the evaluation scores, so that the users can see not only their own scores but also scores of other users, thereby enhancing the interactivity of fitness.
As shown in fig. 9, fig. 9 is a schematic structural diagram of a fitness teaching device shown in the present disclosure according to an exemplary embodiment.
In the embodiment shown in fig. 9, the apparatus comprises:
the action video frame acquisition module 21 is used for acquiring an action video frame of a user; the action video frame comprises a video frame generated when the user moves.
An obtaining module 22, configured to obtain a coach video frame played for the user and a learning position corresponding to the user; the learning position is a display position of the user action in the coach video frame.
And an enhanced coach video frame generation module 23, configured to synthesize the user action in the action video frame of the user to a corresponding learning position in a coach video frame played for the user, so as to generate an enhanced coach video frame.
And an enhanced coach video frame playing unit 24, configured to play the enhanced coach video frame for the user.
Optionally, the coaching video frame is a video frame comprising a coaching action.
The apparatus further comprises:
the obtaining module is further used for obtaining a coach video frame corresponding to the action video frame of the user; and the time for acquiring the action video frame of the user is the same as or within a preset time difference with the time for playing the coach video frame.
And the evaluation score generation module is used for comparing the user action in the action video frame of the user with the coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action.
Optionally, the evaluation score generating module includes:
and the action data acquisition submodule is used for acquiring coordinate data of the skeleton characteristic points of the user from the action video frame of the user as user action data, and acquiring coordinate data of the skeleton characteristic points of the coach from the coach video frame as coach demonstration action data.
And the evaluation score generation submodule is used for generating an evaluation score for measuring the user action according to the difference between the user action data and the coach demonstration action data.
Optionally, the evaluation score generation sub-module includes:
the vector included angle acquisition unit is used for acquiring all user vector included angles based on the user action data and acquiring all standard vector included angles based on the coach demonstration action data; the user vector included angle is an included angle of two vectors formed by coordinate data of any three adjacent skeleton feature points corresponding to the user; the standard vector included angle is an included angle of two vectors formed by coordinate data of any three adjacent skeleton feature points corresponding to the coach.
The evaluation score generating unit is used for determining similarity parameters according to all the user vector included angles and all the standard vector included angles so as to determine the evaluation scores corresponding to the user actions; the similarity parameter at least comprises a standard deviation result or a variance result of the included angle of the user vector and the included angle of the standard vector.
Optionally, the evaluation score generating unit includes:
the difference vector calculation subunit is used for calculating a difference vector of each corresponding standard vector included angle and user vector included angle; wherein the number of standard vector included angles and of user vector included angles is both N (N is an integer greater than 0), the i-th (1 ≤ i ≤ N) standard vector included angle is αi, the i-th user vector included angle is βi, and the difference vector is Δαi, so that Δαi = αi - βi.
The average difference vector calculation subunit is used for calculating an average difference vector from all the difference vectors; wherein, denoting the average difference vector by Δr, Δr = (Δα1 + Δα2 + … + ΔαN) / N.
The similarity parameter calculation subunit is used for calculating a similarity parameter by using all the difference vectors and the average difference vector; wherein, denoting the similarity parameter by S, S = √( Σ(Δαi - Δr)² / N ) (standard deviation) or S = Σ(Δαi - Δr)² / N (variance), with the sum taken over i = 1 to N.
And the evaluation score determining subunit is used for determining the evaluation score corresponding to the user action according to the similarity parameter.
Optionally, the method further comprises:
and the evaluation score synthesizing module is used for synthesizing the evaluation score into the enhanced coach video frame.
Optionally, the evaluation score synthesizing module includes:
if the same coach video frame is played for a plurality of users at the same time, and corresponding evaluation scores are respectively generated based on the action video frames of the users, a plurality of evaluation scores are synthesized into the enhanced coach video frame based on the ranking result of the evaluation scores.
Optionally, the method further comprises:
a selection request receiving module for receiving a selection request sent by a user; the selection request includes at least one of: an identification of a user-selected coach video or an identification of a user-selected learning location.
Optionally, the coach video is a video comprising coach demonstration actions; the coach video comprises a template coach video and an enhanced coach video; wherein the template coach video does not synthesize any user's action video frames, and the enhanced coach video is pre-synthesized with one or more user's action video frames.
Optionally, the step of obtaining, in the obtaining module 22, a coaching video frame played for the user includes:
obtaining a coach video frame played for the user based on a start time and a current time of playing a coach video for the user.
Optionally, each coaching video includes one or more learning locations.
Before said generating the enhanced coach video frame, further comprising:
and the learning position distribution module is used for selecting a user corresponding to the unselected learning position from the users who do not select the learning position to receive the action video frame of the selected user and synthesizing the user action corresponding to the selected user to the unselected learning position in the coach video if the same coach video is played for a plurality of users and the unselected learning position exists in the coach video.
Optionally, the user corresponds to a user account; the learning location corresponds to asset information.
The body-building teaching device further comprises:
and the asset operation initiating module is used for acquiring asset information corresponding to the learning position selected by the user and initiating asset processing operation on the user account according to the asset information.
Optionally, the action video frame acquiring module 21 includes:
and the human body detection submodule is used for detecting a human body in the video frame shot by the camera.
And the action video frame generation sub-module is used for carrying out background segmentation on the video frame based on the detected human body so as to extract the human body and generate an action video frame only comprising human body actions.
For the specific details of how the functions and actions of each module in the body-building teaching device are implemented, refer to the implementation of the corresponding steps in the body-building teaching system or the body-building teaching method; they are not described again here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Correspondingly, the present disclosure also provides an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein,
the processor is configured to perform the operations of the method as described above.
Fig. 10 is a schematic diagram of an electronic device for use with a fitness teaching device according to an exemplary embodiment.
As shown in fig. 10, an electronic device 300 is provided according to an exemplary embodiment; the electronic device 300 may be a computer, a server, a mobile phone, a tablet, or another computing device.
Referring to fig. 10, electronic device 300 may include one or more of the following components: processing component 301, memory 302, power component 303, multimedia component 304, audio component 305, input/output (I/O) interface 306, sensor component 307, and communication component 308.
The processing component 301 generally controls the overall operation of the device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 301 may include one or more processors 309 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 301 may include one or more modules that facilitate interaction between the processing component 301 and other components. For example, the processing component 301 may include a multimedia module to facilitate interaction between the multimedia component 304 and the processing component 301.
The memory 302 is configured to store various types of data to support operations at the electronic device 300. Examples of such data include instructions for any application or method operating on the electronic device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 302 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 303 provides power to the various components of the electronic device 300. Power components 303 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 300.
The multimedia component 304 includes a screen providing an output interface between the electronic device 300 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 304 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 300 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 305 is configured to output and/or input audio signals. For example, the audio component 305 includes a microphone (MIC) configured to receive external audio signals when the electronic device 300 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 302 or transmitted via the communication component 308. In some embodiments, the audio component 305 further comprises a speaker for outputting audio signals.
The I/O interface 306 provides an interface between the processing component 301 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 307 includes one or more sensors for providing various aspects of status assessment for the electronic device 300. For example, the sensor component 307 may detect an open/closed state of the electronic device 300, the relative positioning of components, such as a display and keypad of the electronic device 300, the sensor component 307 may also detect a change in the position of the electronic device 300 or a component of the electronic device 300, the presence or absence of user contact with the electronic device 300, orientation or acceleration/deceleration of the electronic device 300, and a change in the temperature of the electronic device 300. The sensor component 307 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 307 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 307 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a heart rate signal sensor, an electrocardiogram sensor, a fingerprint sensor, or a temperature sensor.
The communication component 308 is configured to facilitate communication between the electronic device 300 and other devices in a wired or wireless manner. The electronic device 300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 308 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 308 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 302 comprising instructions, executable by the processor 309 of the electronic device 300 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Wherein the instructions in the storage medium, when executed by the processor 309, enable the apparatus 300 to perform the aforementioned fitness teaching method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
Claims (28)
1. A method of fitness teaching, the method comprising:
acquiring a motion video frame of a user; the motion video frame comprises a video frame generated when the user moves;
acquiring a coach video frame played for the user and a learning position corresponding to the user; the learning position is a display position of the user action in the coach video frame;
synthesizing the user action in the action video frame of the user to a corresponding learning position in a coach video frame played for the user to generate an enhanced coach video frame;
playing the enhanced coach video frame for the user.
2. The fitness teaching method of claim 1, wherein the coaching video frame is a video frame comprising a coaching demonstration action;
the method further comprises:
acquiring a coach video frame corresponding to the action video frame of the user; wherein the time for acquiring the action video frame of the user is the same as or within a preset time difference with the time for playing the coach video frame;
and comparing the user action in the action video frame of the user with the coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action.
3. The fitness teaching method of claim 2, wherein the comparing the user action in the user action video frame to the coach demonstration action in the coach video frame to generate an assessment score that measures the user action comprises:
acquiring coordinate data of the skeleton characteristic points of the user from the action video frame of the user as user action data, and acquiring coordinate data of the skeleton characteristic points of the coach from the coach video frame as coach demonstration action data;
generating an evaluation score that measures the user action based on a difference between the user action data and the coach demonstration action data.
4. The fitness teaching method of claim 3, wherein generating an assessment score that measures the user action based on the difference between the user action data and the trainer demonstration action data comprises:
acquiring all user vector included angles based on the user action data, and acquiring all standard vector included angles based on the coach demonstration action data; the user vector included angle is an included angle of two vectors formed by coordinate data of any three adjacent skeleton feature points corresponding to the user; the standard vector included angle is an included angle of two vectors formed by coordinate data of any three adjacent skeleton feature points corresponding to the coach;
determining similarity parameters according to all user vector included angles and all standard vector included angles, and accordingly determining evaluation scores corresponding to the user actions; the similarity parameter at least comprises a standard deviation result or a variance result of the included angle of the user vector and the included angle of the standard vector.
5. The method for body-building teaching according to claim 4, wherein the determining the similarity parameter according to all the user vector angles and all the standard vector angles comprises:
calculating a difference vector of each corresponding standard vector included angle and user vector included angle; wherein the number of standard vector included angles and of user vector included angles is both N (N is an integer greater than 0), the i-th (1 ≤ i ≤ N) standard vector included angle is αi, the i-th user vector included angle is βi, and the difference vector is Δαi, so that Δαi = αi - βi;
calculating an average difference vector according to all the difference vectors; wherein, denoting the average difference vector by Δr, Δr = (Δα1 + Δα2 + … + ΔαN) / N;
calculating a similarity parameter by using all the difference vectors and the average difference vector; wherein, denoting the similarity parameter by S, S = √( Σ(Δαi - Δr)² / N ) or S = Σ(Δαi - Δr)² / N, with the sum taken over i = 1 to N.
6. The fitness teaching method of claim 2, further comprising:
synthesizing the rating score into the enhanced coach video frame.
7. The fitness teaching method of claim 6, wherein the synthesizing of the assessment score into the enhanced trainer video frame comprises:
if the same coach video frame is played for a plurality of users at the same time, and corresponding evaluation scores are respectively generated based on the action video frames of the users, a plurality of evaluation scores are synthesized into the enhanced coach video frame based on the ranking result of the evaluation scores.
8. The fitness teaching method of claim 1, further comprising:
receiving a selection request sent by a user; the selection request includes at least one of: an identification of a user-selected coach video or an identification of a user-selected learning location.
9. The fitness teaching method of claim 8, wherein the trainer video is a video comprising a trainer demonstration action; the coach video comprises a template coach video and an enhanced coach video; wherein the template coach video does not synthesize any user's action video frames, and the enhanced coach video is pre-synthesized with one or more user's action video frames.
10. The method of body-building instruction of claim 1, wherein said obtaining a video frame of a coach playing for said user comprises:
obtaining a coach video frame played for the user based on a start time and a current time of playing a coach video for the user.
11. The fitness teaching method of claim 9, wherein each coaching video includes one or more learning locations;
before said generating the enhanced coach video frame, further comprising:
if the same coach video is played for a plurality of users at the same time and there are unselected learning positions in the coach video, selecting, from the users who have not selected a learning position, a user to correspond to an unselected learning position, receiving the action video frame of the selected user, and synthesizing the user action corresponding to the selected user to the unselected learning position in the coach video.
12. The fitness teaching method of claim 1, wherein the user corresponds to a user account; the learning position corresponds to asset information;
the body-building teaching method further comprises the following steps:
and acquiring asset information corresponding to the learning position selected by the user, and initiating asset processing operation on the user account according to the asset information.
13. The fitness teaching method of claim 1,
the acquiring the action video frame of the user comprises:
detecting a human body in a video frame shot by a camera;
and based on the detected human body, performing background segmentation on the video frame to extract the human body, and generating a motion video frame only comprising human body motion.
14. A fitness teaching device, the device comprising:
the action video frame acquisition module is used for acquiring an action video frame of a user; the motion video frame comprises a video frame generated when the user moves;
the acquisition module is used for acquiring a coach video frame played for the user and a learning position corresponding to the user; the learning position is a display position of the user action in the coach video frame;
the enhanced coach video frame generation module is used for synthesizing the user action in the action video frame of the user to a corresponding learning position in a coach video frame played for the user to generate an enhanced coach video frame;
and the enhanced coach video frame playing unit is used for playing the enhanced coach video frame for the user.
15. The fitness teaching device of claim 14, wherein the coaching video frame is a video frame comprising a coaching demonstration action;
the apparatus further comprises:
the obtaining module is further used for obtaining a coach video frame corresponding to the action video frame of the user; wherein the time for acquiring the action video frame of the user is the same as or within a preset time difference with the time for playing the coach video frame;
and the evaluation score generation module is used for comparing the user action in the action video frame of the user with the coach demonstration action in the coach video frame to generate an evaluation score for measuring the user action.
16. The fitness teaching device of claim 15, wherein the evaluation score generation module comprises:
the action data acquisition sub-module is used for acquiring coordinate data of the skeleton characteristic points of the user from the action video frame of the user as user action data and acquiring coordinate data of the skeleton characteristic points of the coach from the coach video frame as coach demonstration action data; (ii) a
And the evaluation score generation submodule is used for generating an evaluation score for measuring the user action according to the difference between the user action data and the coach demonstration action data.
17. The fitness teaching device of claim 16, wherein the evaluation score generation submodule comprises:
the vector included angle acquisition unit is used for acquiring all user vector included angles based on the user action data and acquiring all standard vector included angles based on the coach demonstration action data; the user vector included angle is an included angle of two vectors formed by coordinate data of any three adjacent skeleton feature points corresponding to the user; the standard vector included angle is an included angle of two vectors formed by coordinate data of any three adjacent skeleton feature points corresponding to the coach;
the evaluation score generating unit is used for determining similarity parameters according to all the user vector included angles and all the standard vector included angles so as to determine the evaluation scores corresponding to the user actions; the similarity parameter at least comprises a standard deviation result or a variance result of the included angle of the user vector and the included angle of the standard vector.
18. The fitness teaching device of claim 17, wherein the evaluation score generation unit comprises:
the difference vector calculation subunit is used for calculating a difference vector of the corresponding standard vector included angle and the user vector included angle; wherein, the number of the standard vector included angles and the user vector included angles are both N (N is an integer greater than 0), and the ith (i is more than or equal to 1 and less than or equal to N) standard vector included angle is alphaiThe i-th user vector angle is betaiThe difference vector is Δ αiThen Δ αi=αi-βi;
The average difference vector calculation subunit is used for calculating an average difference vector according to all the difference vectors; wherein, assuming the average difference vector as Δ r, then
The similarity parameter calculation subunit is used for calculating a similarity parameter by using all the difference vectors and the average difference vector; wherein, if the similarity parameter is S, thenOr
And the evaluation score determining subunit is used for determining the evaluation score corresponding to the user action according to the similarity parameter.
19. The fitness teaching device of claim 15, further comprising:
and the evaluation score synthesizing module is used for synthesizing the evaluation score into the enhanced coach video frame.
20. The fitness teaching device of claim 19, wherein the evaluation score synthesis module comprises:
if the same coach video frame is played for a plurality of users at the same time, and corresponding evaluation scores are respectively generated based on the action video frames of the users, a plurality of evaluation scores are synthesized into the enhanced coach video frame based on the ranking result of the evaluation scores.
21. The fitness teaching device of claim 14, further comprising:
a selection request receiving module for receiving a selection request sent by a user; the selection request includes at least one of: an identification of a user-selected coach video or an identification of a user-selected learning location.
22. The fitness teaching device of claim 21, wherein the coaching video is a video comprising a coaching demonstration action; the coach video comprises a template coach video and an enhanced coach video; wherein the template coach video does not synthesize any user's action video frames, and the enhanced coach video is pre-synthesized with one or more user's action video frames.
23. The fitness teaching device of claim 14, wherein the acquisition module comprises:
obtaining a coach video frame played for the user based on a start time and a current time of playing a coach video for the user.
24. A fitness teaching device according to claim 21, wherein each coaching video includes one or more learning locations;
before said generating the enhanced coach video frame, further comprising:
and the learning position distribution module is used for selecting a user corresponding to the unselected learning position from the users who do not select the learning position to receive the action video frame of the selected user and synthesizing the user action corresponding to the selected user to the unselected learning position in the coach video if the same coach video is played for a plurality of users and the unselected learning position exists in the coach video.
25. The fitness teaching device of claim 14, wherein the user corresponds to a user account; the learning position corresponds to asset information;
then the body-building teaching device further comprises:
and the asset operation initiating module is used for acquiring asset information corresponding to the learning position selected by the user and initiating asset processing operation on the user account according to the asset information.
26. The fitness teaching device of claim 14, wherein the action video frame acquisition module comprises:
the human body detection submodule is used for detecting a human body in a video frame shot by the camera;
and the action video frame generation sub-module is used for carrying out background segmentation on the video frame based on the detected human body so as to extract the human body and generate an action video frame only comprising human body actions.
27. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein,
the processor configured to perform the fitness teaching method of any of the above claims 1-13.
28. A computer readable storage medium having stored thereon a computer program which, when executed by one or more processors, causes the processors to perform the method of fitness teaching of any of claims 1-13.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910599390.1A CN110418205A (en) | 2019-07-04 | 2019-07-04 | Body-building teaching method, device, equipment, system and storage medium |
PCT/CN2020/095369 WO2021000708A1 (en) | 2019-07-04 | 2020-06-10 | Fitness teaching method and apparatus, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910599390.1A CN110418205A (en) | 2019-07-04 | 2019-07-04 | Body-building teaching method, device, equipment, system and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110418205A true CN110418205A (en) | 2019-11-05 |
Family
ID=68360288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910599390.1A Pending CN110418205A (en) | 2019-07-04 | 2019-07-04 | Body-building teaching method, device, equipment, system and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110418205A (en) |
WO (1) | WO2021000708A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111445738A (en) * | 2020-04-30 | 2020-07-24 | 北京打铁师体育文化产业有限公司 | Online motion action tutoring method and system |
CN111522522A (en) * | 2020-04-22 | 2020-08-11 | 咪咕互动娱乐有限公司 | Demonstration video display method, system, device and storage medium |
CN111652078A (en) * | 2020-05-11 | 2020-09-11 | 浙江大学 | Yoga action guidance system and method based on computer vision |
WO2021000708A1 (en) * | 2019-07-04 | 2021-01-07 | 安徽华米信息科技有限公司 | Fitness teaching method and apparatus, electronic device and storage medium |
CN112348942A (en) * | 2020-09-18 | 2021-02-09 | 当趣网络科技(杭州)有限公司 | Body-building interaction method and system |
CN113128283A (en) * | 2019-12-31 | 2021-07-16 | 沸腾时刻智能科技(深圳)有限公司 | Evaluation method, model construction method, teaching machine, teaching system and electronic equipment |
CN113992957A (en) * | 2020-09-30 | 2022-01-28 | 深度练习(杭州)智能科技有限公司 | Motion synchronization system and method in video file suitable for intelligent terminal |
CN114973066A (en) * | 2022-04-29 | 2022-08-30 | 浙江运动家体育发展有限公司 | Online and offline fitness interaction method and system |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112885164A (en) * | 2021-02-01 | 2021-06-01 | 黄华 | Visual intelligent interactive teaching and examination system and method by utilizing augmented reality wearing equipment |
CN113239849B (en) * | 2021-05-27 | 2023-12-19 | 数智引力(厦门)运动科技有限公司 | Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium |
CN114666639B (en) * | 2022-03-18 | 2023-11-03 | 海信集团控股股份有限公司 | Video playing method and display device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015140573A1 (en) * | 2014-03-20 | 2015-09-24 | 2Mee Ltd | Augmented reality apparatus and method |
CN106411889A (en) * | 2016-09-29 | 2017-02-15 | 宇龙计算机通信科技(深圳)有限公司 | Grouped movement method and system, and terminal |
CN107833283A (en) * | 2017-10-30 | 2018-03-23 | 努比亚技术有限公司 | A kind of teaching method and mobile terminal |
CN108734104A (en) * | 2018-04-20 | 2018-11-02 | 杭州易舞科技有限公司 | Body-building action error correction method based on deep learning image recognition and system |
CN108777081A (en) * | 2018-05-31 | 2018-11-09 | 华中师范大学 | A kind of virtual Dancing Teaching method and system |
CN109191588A (en) * | 2018-08-27 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Move teaching method, device, storage medium and electronic equipment |
CN109432753A (en) * | 2018-09-26 | 2019-03-08 | Oppo广东移动通信有限公司 | Act antidote, device, storage medium and electronic equipment |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104598867B (en) * | 2013-10-30 | 2017-12-01 | 中国艺术科技研究所 | A kind of human action automatic evaluation method and dancing points-scoring system |
US10078780B2 (en) * | 2015-03-27 | 2018-09-18 | Intel Corporation | Gesture recognition mechanism |
CN104882036A (en) * | 2015-05-27 | 2015-09-02 | 江西理工大学 | Digital fitness teaching system |
CN207913143U (en) * | 2017-12-18 | 2018-09-28 | 郑州特瑞通节能技术有限公司 | A kind of athletic performance correction smart home body-building system |
CN108764120B (en) * | 2018-05-24 | 2021-11-09 | 杭州师范大学 | Human body standard action evaluation method |
CN110298309B (en) * | 2019-06-28 | 2024-09-13 | 腾讯科技(深圳)有限公司 | Image-based action feature processing method, device, terminal and storage medium |
CN110418205A (en) * | 2019-07-04 | 2019-11-05 | 安徽华米信息科技有限公司 | Body-building teaching method, device, equipment, system and storage medium |
- 2019-07-04 CN CN201910599390.1A patent/CN110418205A/en active Pending
- 2020-06-10 WO PCT/CN2020/095369 patent/WO2021000708A1/en active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015140573A1 (en) * | 2014-03-20 | 2015-09-24 | 2Mee Ltd | Augmented reality apparatus and method |
CN106411889A (en) * | 2016-09-29 | 2017-02-15 | 宇龙计算机通信科技(深圳)有限公司 | Grouped movement method and system, and terminal |
CN107833283A (en) * | 2017-10-30 | 2018-03-23 | 努比亚技术有限公司 | A kind of teaching method and mobile terminal |
CN108734104A (en) * | 2018-04-20 | 2018-11-02 | 杭州易舞科技有限公司 | Body-building action error correction method based on deep learning image recognition and system |
CN108777081A (en) * | 2018-05-31 | 2018-11-09 | 华中师范大学 | A kind of virtual Dancing Teaching method and system |
CN109191588A (en) * | 2018-08-27 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Move teaching method, device, storage medium and electronic equipment |
CN109432753A (en) * | 2018-09-26 | 2019-03-08 | Oppo广东移动通信有限公司 | Act antidote, device, storage medium and electronic equipment |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021000708A1 (en) * | 2019-07-04 | 2021-01-07 | 安徽华米信息科技有限公司 | Fitness teaching method and apparatus, electronic device and storage medium |
CN113128283A (en) * | 2019-12-31 | 2021-07-16 | 沸腾时刻智能科技(深圳)有限公司 | Evaluation method, model construction method, teaching machine, teaching system and electronic equipment |
CN111522522A (en) * | 2020-04-22 | 2020-08-11 | 咪咕互动娱乐有限公司 | Demonstration video display method, system, device and storage medium |
CN111522522B (en) * | 2020-04-22 | 2023-12-08 | 咪咕互动娱乐有限公司 | Demonstration video display method, system, device and storage medium |
CN111445738A (en) * | 2020-04-30 | 2020-07-24 | 北京打铁师体育文化产业有限公司 | Online motion action tutoring method and system |
CN111445738B (en) * | 2020-04-30 | 2021-06-08 | 北京打铁师体育文化产业有限公司 | Online motion action tutoring method and system |
CN111652078A (en) * | 2020-05-11 | 2020-09-11 | 浙江大学 | Yoga action guidance system and method based on computer vision |
CN112348942A (en) * | 2020-09-18 | 2021-02-09 | 当趣网络科技(杭州)有限公司 | Body-building interaction method and system |
CN112348942B (en) * | 2020-09-18 | 2024-03-19 | 当趣网络科技(杭州)有限公司 | Body-building interaction method and system |
CN113992957A (en) * | 2020-09-30 | 2022-01-28 | 深度练习(杭州)智能科技有限公司 | Motion synchronization system and method in video file suitable for intelligent terminal |
CN114973066A (en) * | 2022-04-29 | 2022-08-30 | 浙江运动家体育发展有限公司 | Online and offline fitness interaction method and system |
Also Published As
Publication number | Publication date |
---|---|
WO2021000708A1 (en) | 2021-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110418205A (en) | Body-building teaching method, device, equipment, system and storage medium | |
US11503377B2 (en) | Method and electronic device for processing data | |
CN109637518B (en) | Virtual anchor implementation method and device | |
US11636653B2 (en) | Method and apparatus for synthesizing virtual and real objects | |
WO2021008158A1 (en) | Method and apparatus for detecting key points of human body, electronic device and storage medium | |
WO2022227393A1 (en) | Image photographing method and apparatus, electronic device, and computer readable storage medium | |
TWI255141B (en) | Method and system for real-time interactive video | |
US11030733B2 (en) | Method, electronic device and storage medium for processing image | |
CN109729372B (en) | Live broadcast room switching method, device, terminal, server and storage medium | |
CN112543343B (en) | Live broadcast picture processing method and device based on live broadcast with wheat | |
KR102161034B1 (en) | System for providing exercise lecture and method for providing exercise lecture using the same | |
KR101962578B1 (en) | A fitness exercise service providing system using VR | |
US20130265448A1 (en) | Analyzing Human Gestural Commands | |
CN108986117B (en) | Video image segmentation method and device | |
CN114697539B (en) | Photographing recommendation method and device, electronic equipment and storage medium | |
CN108986803B (en) | Scene control method and device, electronic equipment and readable storage medium | |
KR20200028830A (en) | Real-time computer graphics video broadcasting service system | |
CN114241604A (en) | Gesture detection method and device, electronic equipment and storage medium | |
CN105608469B (en) | The determination method and device of image resolution ratio | |
KR20170127354A (en) | Apparatus and method for providing video conversation using face conversion based on facial motion capture | |
US20190104249A1 (en) | Server apparatus, distribution system, distribution method, and program | |
CN116939275A (en) | Live virtual resource display method and device, electronic equipment, server and medium | |
CN114374880A (en) | Joint live broadcast method and device, electronic equipment and computer readable storage medium | |
CN116132794A (en) | Automatic guiding and broadcasting device | |
CN114900738A (en) | Film viewing interaction method and device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191105 |