CN116266869A - Bidirectional identification method and system for body-building actions in live broadcast - Google Patents


Info

Publication number
CN116266869A
Authority
CN
China
Prior art keywords
action, user, motion, data, coach
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111553780.9A
Other languages
Chinese (zh)
Inventor
奚伟涛
付勇
张谦
张博文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Fit Future Technology Co Ltd
Original Assignee
Chengdu Fit Future Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Fit Future Technology Co Ltd filed Critical Chengdu Fit Future Technology Co Ltd
Priority to CN202111553780.9A priority Critical patent/CN116266869A/en
Publication of CN116266869A publication Critical patent/CN116266869A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method for bidirectional recognition of fitness actions in live broadcast, comprising the following steps: displaying a fitness video and a real-time coach video to a user during live broadcast, the fitness video being synchronously displayed at the coach end; when an acquisition instruction sent by the coach end is received, beginning to acquire the user's motion data, the acquisition instruction being sent when the coach end detects that a coach action matches any action in the fitness video; and inputting the motion data into a first recognition model corresponding to the coach action to recognize whether the user's action matches the coach action. The invention also discloses a system for bidirectional recognition of fitness actions in live broadcast. In this method and system, detection of the coach's action triggers detection of the user's action, and the model used for detection corresponds to the coach action, so that both the user and the coach can see how standard the current round of actions is, the coach can make targeted adjustments, and the user experience is improved.

Description

Bidirectional identification method and system for body-building actions in live broadcast
Technical Field
The invention relates to intelligent fitness technology, and in particular to a method and system for bidirectional recognition of fitness actions in live broadcast.
Background
Webcasting has existed for a long time. With increasing uplink and downlink bandwidth and falling tariffs, webcasting has taken on more entertainment and social properties: people enjoy broadcasting and watching live anytime and anywhere, hosts are no longer satisfied with unidirectional broadcasting, audiences increasingly desire interaction, and live-stream start-up time and latency have become important indicators affecting product development.
At present, live-broadcast technology has already been applied to intelligent fitness. However, whether the live fitness session is one-to-one or one-to-many, it differs from dance or game streaming in that both the user and the coach must move continuously. If the traditional live-broadcast mode is adopted, interaction between coach and user is reduced, the user can hardly perceive any difference between a live fitness class and a recorded fitness video, and the user experience suffers.
Disclosure of Invention
The invention aims to solve the technical problem that existing live-broadcast technology delivers a poor user experience when applied to fitness, and provides a method and system for bidirectional recognition of fitness actions in live broadcast to address this problem.
The invention is realized by the following technical scheme:
In one aspect, this embodiment provides a method for bidirectional recognition of fitness actions in live broadcast, comprising:
displaying the fitness video and the real-time coach video to the user during live broadcast; the fitness video is synchronously displayed at the coach end;
when an acquisition instruction sent by the coach end is received, beginning to acquire the user's motion data; the acquisition instruction is sent when the coach end detects that a coach action matches any action in the fitness video;
and inputting the motion data into a first recognition model corresponding to the coach action to recognize whether the user's action matches the coach action.
In the prior art, interaction modes between live-stream users and hosts include user on-demand requests, voice interaction, voting, and the like, and these modes can be applied directly to live fitness broadcasts. However, the inventors found that the user's needs in a live fitness broadcast differ greatly from other live broadcasts: elsewhere the user watches as a pure recipient, whereas in live fitness the user generally expects to follow the coach to complete a predetermined exercise session or achieve a certain training effect. Current live-broadcast technology lacks a means of feeding back the user's performance during exercise, so the coach finds it difficult to adjust the class quickly.
When this embodiment is implemented, the fitness video and the coach's real-time video must be displayed to the user simultaneously, and the fitness video must also be displayed to the coach. The fitness video should be a pre-recorded video, so the actions it contains are known in advance. When the coach performs an action at the coach end, that action is detected; detection can use existing image recognition or IMU-data recognition, with IMU data preferred for its lower computational cost.
In the most ideal case, the coach demonstrates the class along with the fitness video, so coach and video stay essentially synchronized. Each time the coach is detected performing an action, the acquisition instruction triggers detection of the user's action, it is judged whether the user's action meets the standard, and the result can be displayed intuitively to both coach and user. In another case, the user may ask the coach to explain a particular action, or the coach may need to change the order of some actions in the session. Here, as soon as the coach performs the corresponding action, the system starts detecting whether the user performs it too, so interaction between coach and user is maintained in this case as well.
Meanwhile, during live broadcast the video is generally unidirectional for privacy protection: the user can see the coach's video, but the coach cannot see the user's. Displaying the matching result at the coach end therefore gives the coach a view of the user's exercise state and allows targeted adjustment. In this embodiment, detection of the coach's action triggers detection of the user's action, and the detection model corresponds to the coach action, so both user and coach can see how standard the current round of actions is, the coach can make targeted adjustments, and the user experience is improved.
Further, the coach end detecting that a coach action matches any action in the fitness video comprises:
the coach end acquiring IMU data of the coach as first IMU data;
and the coach end inputting the first IMU data into a second recognition model configured at the coach end and receiving the action output by the second recognition model as the coach action; the input data of the second recognition model is IMU data, and the output data is the action corresponding to that IMU data.
Further, when the acquisition instruction sent by the coach end is received, beginning to acquire the user's motion data comprises:
when the acquisition instruction is received, obtaining a reference action from the acquisition instruction and beginning to receive second IMU data detected by an IMU sensor worn by the user as the user's motion data; the length of the second IMU data corresponds to the reference action; the reference action is the action, among the set of actions, that the coach action matches.
Further, inputting the motion data into a first recognition model corresponding to the coach action to recognize whether the user's action matches the coach action comprises:
acquiring the recognition model corresponding to the reference action as the first recognition model;
inputting the motion data into the first recognition model and receiving the matching result output by the first recognition model; the matching result is the degree of match between the motion data and the reference action;
and judging from the degree of match whether the user's action matches the coach action.
Further, the method further comprises:
when the user's action does not match the coach action, sending an interrupt prompt to the coach end and obtaining the video stream in the fitness video that matches the reference action;
acquiring difference IMU data output by the first recognition model; the difference IMU data is the IMU data in the second IMU data that differs most from the reference action;
extracting the corresponding portion of video from the video stream according to the difference IMU data as a selected video stream;
and playing the selected video stream to the user, either normally or at a reduced playback rate.
In another aspect, this embodiment provides a system for bidirectional recognition of fitness actions in live broadcast, comprising:
a user end, configured at the user side, for displaying the fitness video and the real-time coach video to the user during live broadcast;
and a coach end, configured to synchronously display the fitness video;
wherein the user end begins to acquire the user's motion data when it receives an acquisition instruction sent by the coach end; the acquisition instruction is sent when the coach end detects that a coach action matches any action in the fitness video;
and the user end inputs the motion data into a first recognition model corresponding to the coach action to recognize whether the user's action matches the coach action.
Further, the coach end acquires IMU data of the coach as first IMU data;
the coach end inputs the first IMU data into a second recognition model configured at the coach end and receives the action output by the second recognition model as the coach action; the input data of the second recognition model is IMU data, and the output data is the action corresponding to that IMU data.
Further, the user end is further configured to:
when the acquisition instruction is received, obtain a reference action from the acquisition instruction and begin to receive second IMU data detected by an IMU sensor worn by the user as the user's motion data; the length of the second IMU data corresponds to the reference action; the reference action is the action, among the set of actions, that the coach action matches.
Further, the user end is further configured to:
acquire the recognition model corresponding to the reference action as the first recognition model;
input the motion data into the first recognition model and receive the matching result output by the first recognition model; the matching result is the degree of match between the motion data and the reference action;
and judge from the degree of match whether the user's action matches the coach action.
Further, the user end is further configured to:
when the user's action does not match the coach action, send an interrupt prompt to the coach end and obtain the video stream in the fitness video that matches the reference action;
acquire difference IMU data output by the first recognition model; the difference IMU data is the IMU data in the second IMU data that differs most from the reference action;
extract the corresponding portion of video from the video stream according to the difference IMU data as a selected video stream;
and play the selected video stream to the user, either normally or at a reduced playback rate.
Compared with the prior art, the invention has the following advantages and beneficial effects:
in the method and system for bidirectional recognition of fitness actions in live broadcast, detection of the coach's action triggers detection of the user's action, and the model used for detection corresponds to the coach action, so that both the user and the coach can see how standard the current round of actions is, the coach can make targeted adjustments, and the user experience is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of embodiments of the invention, are incorporated in and constitute a part of this application, and illustrate embodiments of the invention. In the drawings:
FIG. 1 is a schematic diagram of steps of a method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a system architecture according to an embodiment of the invention.
Detailed Description
To make the objects, technical solutions and advantages of the invention clearer, the invention is further described in detail below with reference to the examples and accompanying drawings. The exemplary embodiments and their descriptions are intended only to illustrate the invention and should not be construed as limiting it.
Examples
Referring to fig. 1, a flow chart of a method for bidirectional recognition of fitness actions in live broadcast according to an embodiment of the invention is shown. The method may be applied to the system for bidirectional recognition of fitness actions in live broadcast of fig. 2, and may comprise the following steps S1 to S3.
S1: displaying the exercise video and the training real-time video to a user in live broadcast; the body-building video is synchronously displayed at the training end;
s2: when receiving an acquisition instruction sent by the coach end, starting to acquire motion data of a user; the instruction acquisition is sent when the training end detects that the training action is matched with any action in the body-building video;
s3: and inputting the motion data into a first recognition model corresponding to the coach motion to recognize whether the motion of the user is matched with the coach motion.
In the prior art, interaction modes between live-stream users and hosts include user on-demand requests, voice interaction, voting, and the like, and these modes can be applied directly to live fitness broadcasts. However, the inventors found that the user's needs in a live fitness broadcast differ greatly from other live broadcasts: elsewhere the user watches as a pure recipient, whereas in live fitness the user generally expects to follow the coach to complete a predetermined exercise session or achieve a certain training effect. Current live-broadcast technology lacks a means of feeding back the user's performance during exercise, so the coach finds it difficult to adjust the class quickly.
When this embodiment is implemented, the fitness video and the coach's real-time video must be displayed to the user simultaneously, and the fitness video must also be displayed to the coach. The fitness video should be a pre-recorded video, so the actions it contains are known in advance. When the coach performs an action at the coach end, that action is detected; detection can use existing image recognition or IMU-data recognition, with IMU data preferred for its lower computational cost. In this embodiment, a coach action matches an action in the fitness video if it is the same or similar. Each action in the fitness video can be labeled, and the coach's action matched against these labels.
In the most ideal case, the coach demonstrates the class along with the fitness video, so coach and video stay essentially synchronized. Each time the coach is detected performing an action, the acquisition instruction triggers detection of the user's action, it is judged whether the user's action meets the standard, and the result can be displayed intuitively to both coach and user. In another case, the user may ask the coach to explain a particular action, or the coach may need to change the order of some actions in the session. Here, as soon as the coach performs the corresponding action, the system starts detecting whether the user performs it too, so interaction between coach and user is maintained in this case as well.
Meanwhile, during live broadcast the video is generally unidirectional for privacy protection: the user can see the coach's video, but the coach cannot see the user's. Displaying the matching result at the coach end therefore gives the coach a view of the user's exercise state and allows targeted adjustment. In this embodiment, detection of the coach's action triggers detection of the user's action, and the detection model corresponds to the coach action, so both user and coach can see how standard the current round of actions is, the coach can make targeted adjustments, and the user experience is improved.
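The trigger chain of steps S1 to S3 can be sketched in code. This is a minimal illustration only, not the patent's implementation: the message shape, function names, and callbacks are all hypothetical.

```python
# Hypothetical sketch of the S1-S3 trigger chain: the coach end sends an
# acquisition instruction only when its detected action matches a labeled
# action in the fitness video; the user end then collects motion data and
# runs the first recognition model corresponding to the reference action.

def coach_end_tick(detected_action, fitness_video_actions, send):
    """Coach end: if the detected coach action matches any labeled action
    in the fitness video, send an acquisition instruction naming it."""
    if detected_action in fitness_video_actions:
        send({"type": "acquire", "reference_action": detected_action})
        return True
    return False

def user_end_on_instruction(instruction, collect_imu, first_models):
    """User end: on an acquisition instruction, collect the user's second
    IMU data and run the first recognition model for the reference action."""
    reference = instruction["reference_action"]
    motion_data = collect_imu(reference)
    return first_models[reference](motion_data)
```

Note that no instruction is sent (and no user-side collection starts) when the coach's action matches nothing, which is what keeps the user-side detection synchronized to the coach rather than to the video.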
In one embodiment, the coach end detecting that a coach action matches any action in the fitness video comprises:
the coach end acquiring IMU data of the coach as first IMU data;
and the coach end inputting the first IMU data into a second recognition model configured at the coach end and receiving the action output by the second recognition model as the coach action; the input data of the second recognition model is IMU data, and the output data is the action corresponding to that IMU data.
When this embodiment is implemented, the second recognition model configured at the coach end recognizes the coach action; its output is the coach action described in the above embodiment. The second recognition model may be implemented with various model-training techniques in the prior art, which are not repeated here; the model is trained by collecting a large number of samples. With the second recognition model configured, the coach action can be recognized at the coach end.
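As a rough illustration of the second recognition model's role (IMU data in, action label out), here is a minimal nearest-centroid sketch. The action names and feature vectors are invented for illustration; as the text notes, a real model would be trained on a large sample set.

```python
import math

# Hypothetical action templates: mean feature vectors computed offline
# from labeled coach IMU recordings (names and values are illustrative).
ACTION_TEMPLATES = {
    "squat":        [0.9, 0.1, 0.3],
    "jumping_jack": [0.2, 0.8, 0.7],
    "punch":        [0.7, 0.6, 0.1],
}

def recognize_action(imu_features):
    """Second-recognition-model sketch: return the action whose template
    is closest (Euclidean distance) to the input IMU feature vector."""
    best_action, best_dist = None, math.inf
    for action, template in ACTION_TEMPLATES.items():
        dist = math.dist(imu_features, template)
        if dist < best_dist:
            best_action, best_dist = action, dist
    return best_action
```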
In one embodiment, when the acquisition instruction sent by the coach end is received, beginning to acquire the user's motion data comprises:
when the acquisition instruction is received, obtaining a reference action from the acquisition instruction and beginning to receive second IMU data detected by an IMU sensor worn by the user as the user's motion data; the length of the second IMU data corresponds to the reference action; the reference action is the action, among the set of actions, that the coach action matches.
When this embodiment is implemented, since the reference action and the coach action are in fact the same action, the coach action itself could be used instead; that mode is considered equivalent to this embodiment. The length of the second IMU data must be determined from the duration of the action itself: for example, if a set of boxing actions takes 600 ms, 700-1000 ms of second IMU data may need to be collected, whereas if a set of jumping jacks takes 300 ms, 400-500 ms may suffice. In this way the detection of the user's action can be more accurate.
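The window-length rule in the example above (a ~600 ms boxing action collecting 700-1000 ms of data, a ~300 ms jumping jack collecting 400-500 ms) amounts to the action's nominal duration plus a safety margin. A sketch, where the margin factor is an assumption chosen to land inside the quoted ranges:

```python
# Nominal durations (ms) of reference actions, taken from the example in
# the text (boxing ~600 ms, jumping jacks ~300 ms); names are illustrative.
ACTION_DURATION_MS = {"punch": 600, "jumping_jack": 300}

def second_imu_window_ms(reference_action, margin=1.4):
    """How many milliseconds of second IMU data to collect for a given
    reference action: its nominal duration plus a safety margin."""
    return int(ACTION_DURATION_MS[reference_action] * margin)
```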
In one embodiment, inputting the motion data into a first recognition model corresponding to the coach action to recognize whether the user's action matches the coach action comprises:
acquiring the recognition model corresponding to the reference action as the first recognition model;
inputting the motion data into the first recognition model and receiving the matching result output by the first recognition model; the matching result is the degree of match between the motion data and the reference action;
and judging from the degree of match whether the user's action matches the coach action.
When this embodiment is implemented, by obtaining the reference action the corresponding model can be selected to evaluate how standard the user's action is. It should be understood that "corresponding" means corresponding to the reference action or the coach action: since they are themselves one action, they are generally given the same identifier. In this embodiment, the matching result output by the first recognition model is generally a relative value, and whether the user's action meets the standard can be judged from that value.
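Since the text only says the first recognition model outputs a relative degree of match that is then judged against a standard, the following sketch is one hypothetical way to realize that: a distance-based matching degree plus an assumed threshold.

```python
import math

def matching_degree(second_imu, reference_imu):
    """First-recognition-model sketch: matching degree as an inverse
    function of the length-normalized distance between the user's second
    IMU data and the reference action's IMU profile (1.0 = identical)."""
    dist = math.dist(second_imu, reference_imu) / math.sqrt(len(reference_imu))
    return 1.0 / (1.0 + dist)

def action_matches(degree, threshold=0.8):
    """Judge from the degree of match whether the user's action matches
    the coach action. The threshold is an illustrative assumption; the
    text only says the output is a relative value."""
    return degree >= threshold
```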
In one embodiment, the method further comprises:
when the user's action does not match the coach action, sending an interrupt prompt to the coach end and obtaining the video stream in the fitness video that matches the reference action;
acquiring difference IMU data output by the first recognition model; the difference IMU data is the IMU data in the second IMU data that differs most from the reference action;
extracting the corresponding portion of video from the video stream according to the difference IMU data as a selected video stream;
and playing the selected video stream to the user, either normally or at a reduced playback rate.
When this embodiment is implemented and the user's action does not match the coach action, an interrupt prompt is sent to the coach end, and the coach can choose a guidance mode accordingly. In practice, the inventors found that although many coaches can perform actions to a very high standard, it is hard for them to show the user the details of the part the user got wrong. This embodiment therefore obtains difference IMU data, generally data from at least one IMU sensor, which indicates clearly where the user's movement was inaccurate; the video stream is then trimmed to a local segment as the selected video stream, which is played normally or slowly and combined with the coach's live instruction, achieving a very good training effect.
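A sketch of this remediation step: locate the sample where the user's second IMU data differs most from the reference (the difference IMU data), then map it to a segment of the matched video stream to replay. The proportional time mapping and the padding value are illustrative assumptions, not from the patent.

```python
def pick_difference_index(second_imu, reference_imu):
    """Index of the sample in the second IMU data that differs most from
    the reference action (the 'difference IMU data')."""
    diffs = [abs(a - b) for a, b in zip(second_imu, reference_imu)]
    return max(range(len(diffs)), key=diffs.__getitem__)

def selected_segment(diff_index, n_samples, video_len_s, pad_s=0.5):
    """Map that sample index proportionally into the matched video stream
    and cut a padded (start, end) segment around it; the segment can then
    be played normally or at a reduced rate."""
    t = diff_index / n_samples * video_len_s
    return max(0.0, t - pad_s), min(video_len_s, t + pad_s)
```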
Referring to fig. 2, based on the same inventive concept, a system for bidirectional recognition of fitness actions in live broadcast is also provided, comprising:
a user end, configured at the user side, for displaying the fitness video and the real-time coach video to the user during live broadcast;
and a coach end, configured to synchronously display the fitness video;
wherein the user end begins to acquire the user's motion data when it receives an acquisition instruction sent by the coach end; the acquisition instruction is sent when the coach end detects that a coach action matches any action in the fitness video;
and the user end inputs the motion data into a first recognition model corresponding to the coach action to recognize whether the user's action matches the coach action.
In one embodiment, the coach end acquires IMU data of the coach as first IMU data;
the coach end inputs the first IMU data into a second recognition model configured at the coach end and receives the action output by the second recognition model as the coach action; the input data of the second recognition model is IMU data, and the output data is the action corresponding to that IMU data.
In one embodiment, the user end is further configured to:
when the acquisition instruction is received, obtain a reference action from the acquisition instruction and begin to receive second IMU data detected by an IMU sensor worn by the user as the user's motion data; the length of the second IMU data corresponds to the reference action; the reference action is the action, among the set of actions, that the coach action matches.
In one embodiment, the user end is further configured to:
acquire the recognition model corresponding to the reference action as the first recognition model;
input the motion data into the first recognition model and receive the matching result output by the first recognition model; the matching result is the degree of match between the motion data and the reference action;
and judge from the degree of match whether the user's action matches the coach action.
In one embodiment, the user end is further configured to:
when the user's action does not match the coach action, send an interrupt prompt to the coach end and obtain the video stream in the fitness video that matches the reference action;
acquire difference IMU data output by the first recognition model; the difference IMU data is the IMU data in the second IMU data that differs most from the reference action;
extract the corresponding portion of video from the video stream according to the difference IMU data as a selected video stream;
and play the selected video stream to the user, either normally or at a reduced playback rate.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, in computer software, or in a combination of the two; to clearly illustrate this interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether such functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing embodiments merely illustrate the general principles of the invention and are not intended to limit the invention to the particular embodiments described; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the invention are intended to be included within its scope.

Claims (10)

1. A method for bidirectional recognition of body-building actions in a live broadcast, characterized by comprising the following steps:
displaying an exercise video and a coach real-time video to a user in the live broadcast, the exercise video being synchronously displayed at a coach end;
when receiving an acquisition instruction sent by the coach end, starting to acquire motion data of the user, the acquisition instruction being sent when the coach end detects that a coach action matches any action in the exercise video; and
inputting the motion data into a first recognition model corresponding to the coach action to recognize whether the action of the user matches the coach action.
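The claim-1 flow on the user end can be illustrated with a minimal sketch. All class, function, and threshold values here are invented for illustration; the patent prescribes neither a model architecture nor a matching threshold, and the toy model stands in for a trained first recognition model.

```python
# Hypothetical sketch of the user-end handler for the acquisition instruction.
from dataclasses import dataclass

@dataclass
class AcquireInstruction:
    reference_action: str  # the exercise-video action the coach matched

def on_instruction(instruction, read_imu, models, threshold=0.8):
    """Collect the user's motion data for the reference action, then run it
    through the first recognition model that corresponds to that action."""
    motion_data = read_imu(instruction.reference_action)
    model = models[instruction.reference_action]  # first recognition model
    score = model(motion_data)                    # matching degree in [0, 1]
    return score >= threshold                     # does the user match?

# Toy stand-ins for the IMU sensor and the per-action model registry:
models = {"squat": lambda data: sum(data) / len(data)}
matched = on_instruction(AcquireInstruction("squat"),
                         read_imu=lambda action: [0.9, 0.85, 0.95],
                         models=models)
```

Keeping one recognition model per action (looked up by the reference action) mirrors the claim's "first recognition model corresponding to the coach action".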
2. The method of claim 1, wherein the coach end detecting that a coach action matches any action in the exercise video comprises:
the coach end acquiring IMU data of the coach as first IMU data; and
the coach end inputting the first IMU data into a second recognition model configured on the coach end, and receiving the action output by the second recognition model as the coach action, wherein the input of the second recognition model is IMU data and its output is the action corresponding to that IMU data.
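One simple way such a second recognition model could map an IMU window to an action label is nearest-template matching, sketched below. This is purely illustrative: the patent does not prescribe any model type, and the feature templates and values are invented.

```python
# Hypothetical second recognition model: IMU features -> action label,
# implemented as a nearest-template classifier for illustration only.
import math

TEMPLATES = {  # invented per-action IMU feature templates
    "squat": [0.9, 0.1, 0.4],
    "lunge": [0.2, 0.8, 0.5],
}

def second_recognition_model(imu_features):
    """Return the action whose template is closest (Euclidean) to the
    observed IMU features."""
    return min(TEMPLATES, key=lambda a: math.dist(imu_features, TEMPLATES[a]))
```

In practice the model would be trained on labeled IMU sequences; the template lookup above only illustrates the claimed input/output contract (IMU data in, action label out).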
3. The method for bidirectional recognition of body-building actions in a live broadcast according to claim 1, wherein starting to acquire motion data of the user when receiving the acquisition instruction sent by the coach end comprises:
when the acquisition instruction is received, acquiring a reference action from the acquisition instruction, and starting to receive second IMU data detected by an IMU sensor worn by the user as the motion data of the user, wherein the length of the second IMU data corresponds to the reference action, and the reference action is the action in the action set that the coach action matched.
4. The method for bidirectional recognition of body-building actions in a live broadcast according to claim 3, wherein inputting the motion data into the first recognition model corresponding to the coach action to recognize whether the action of the user matches the coach action comprises:
acquiring the recognition model corresponding to the reference action as the first recognition model;
inputting the motion data into the first recognition model, and receiving a matching result output by the first recognition model, the matching result being the degree of matching between the motion data and the reference action; and
judging, according to the degree of matching, whether the action of the user matches the coach action.
5. The method for bidirectional recognition of body-building actions in a live broadcast according to claim 3, further comprising:
when the action of the user does not match the coach action, sending an interrupt prompt to the coach end and acquiring the video stream in the exercise video that matches the reference action;
acquiring difference IMU data output by the first recognition model, the difference IMU data being the portion of the second IMU data that differs most from the reference action;
extracting the corresponding segment of the video stream according to the difference IMU data as a selected video stream; and
playing the selected video stream to the user, either at the normal rate or at a reduced playback rate.
6. A system for bidirectional recognition of body-building actions in a live broadcast, characterized by comprising:
a user end, configured to display an exercise video and a coach real-time video to a user in the live broadcast; and
a coach end, configured to synchronously display the exercise video;
wherein the user end starts to acquire motion data of the user when receiving an acquisition instruction sent by the coach end, the acquisition instruction being sent when the coach end detects that a coach action matches any action in the exercise video; and
the user end inputs the motion data into a first recognition model corresponding to the coach action to recognize whether the action of the user matches the coach action.
7. The system of claim 6, wherein the coach end acquires IMU data of the coach as first IMU data; and
the coach end inputs the first IMU data into a second recognition model configured on the coach end, and receives the action output by the second recognition model as the coach action, wherein the input of the second recognition model is IMU data and its output is the action corresponding to that IMU data.
8. The system for bidirectional recognition of body-building actions in a live broadcast of claim 6, wherein the user end is further configured to:
when the acquisition instruction is received, acquire a reference action from the acquisition instruction, and start to receive second IMU data detected by an IMU sensor worn by the user as the motion data of the user, wherein the length of the second IMU data corresponds to the reference action, and the reference action is the action in the action set that the coach action matched.
9. The system for bidirectional recognition of body-building actions in a live broadcast of claim 8, wherein the user end is further configured to:
acquire the recognition model corresponding to the reference action as the first recognition model;
input the motion data into the first recognition model, and receive a matching result output by the first recognition model, the matching result being the degree of matching between the motion data and the reference action; and
judge, according to the degree of matching, whether the action of the user matches the coach action.
10. The system for bidirectional recognition of body-building actions in a live broadcast of claim 6, wherein the user end is further configured to:
when the action of the user does not match the coach action, send an interrupt prompt to the coach end and acquire the video stream in the exercise video that matches the reference action;
acquire difference IMU data output by the first recognition model, the difference IMU data being the portion of the second IMU data that differs most from the reference action;
extract the corresponding segment of the video stream according to the difference IMU data as a selected video stream; and
play the selected video stream to the user, either at the normal rate or at a reduced playback rate.
CN202111553780.9A 2021-12-17 2021-12-17 Bidirectional identification method and system for body-building actions in live broadcast Pending CN116266869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111553780.9A CN116266869A (en) 2021-12-17 2021-12-17 Bidirectional identification method and system for body-building actions in live broadcast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111553780.9A CN116266869A (en) 2021-12-17 2021-12-17 Bidirectional identification method and system for body-building actions in live broadcast

Publications (1)

Publication Number Publication Date
CN116266869A true CN116266869A (en) 2023-06-20

Family

ID=86743734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111553780.9A Pending CN116266869A (en) 2021-12-17 2021-12-17 Bidirectional identification method and system for body-building actions in live broadcast

Country Status (1)

Country Link
CN (1) CN116266869A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274319A (en) * 2023-11-20 2023-12-22 西安瑜乐文化科技股份有限公司 Big data-based body-building exercise live broadcast method and system
CN117274319B (en) * 2023-11-20 2024-01-30 西安瑜乐文化科技股份有限公司 Big data-based body-building exercise live broadcast method and system

Similar Documents

Publication Publication Date Title
CN110505519B (en) Video editing method, electronic equipment and storage medium
US9706235B2 (en) Time varying evaluation of multimedia content
US8926443B2 (en) Virtual golf simulation device, system including the same and terminal device, and method for virtual golf simulation
CN109637518A (en) Virtual newscaster's implementation method and device
TWI631978B (en) Apparatus for virtual golf simulation and information service method using the same
US10088901B2 (en) Display device and operating method thereof
WO2013149357A1 (en) Analyzing human gestural commands
KR20110017258A (en) Fitness learning system based on user's participation and the method of training
US20210299518A1 (en) Flagging irregularities in user performance in an exercise machine system
US11951377B2 (en) Leaderboard with irregularity flags in an exercise machine system
KR20200129327A (en) Method of providing personal training service and system thereof
CN107376302B (en) A kind of method and device of the shooting simulating of the laminated bow based on virtual reality technology
CN111918122A (en) Video processing method and device, electronic equipment and readable storage medium
CN116266869A (en) Bidirectional identification method and system for body-building actions in live broadcast
CN107454437A (en) A kind of video labeling method and its device, server
CN111442464B (en) Air conditioner and control method thereof
CN112312142B (en) Video playing control method and device and computer readable storage medium
US20230021945A1 (en) Systems and methods for dynamically generating exercise playlist
US11547904B2 (en) Exercise assisting device and exercise assisting method
CN105469070A (en) Intelligent monitoring system capable of prompting sitting position
KR20180056055A (en) System for providing solution of justice on martial arts sports and analyzing bigdata using augmented reality, and Drive Method of the Same
CN112714345A (en) Video playing control method and device, storage medium and video playing equipment
Jang et al. Cloud Mat: Context-aware personalization of fitness content
US10369467B2 (en) Interactive game system
CN115712405B (en) Picture presentation method and device, intelligent body-building display equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination