CN113992957A - Motion synchronization system and method in video file suitable for intelligent terminal - Google Patents
- Publication number
- CN113992957A (application CN202111168483.2A)
- Authority
- CN
- China
- Prior art keywords
- action
- videos
- synchronization
- attitude data
- intelligent terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/43072—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a motion synchronization system and method for video files, suitable for an intelligent terminal. The intelligent terminal comprises: a display at least for displaying video files; a memory at least for storing the video files to be synchronized; and a processor at least for executing a motion synchronization program. The processor comprises: a motion recognition unit for recognizing the person in a video and generating a digital skeleton; a motion calculation unit for outputting posture data corresponding to the current motion according to the digital skeleton; a motion comparison unit for finding, from the per-frame posture-data differences between the two videos, the two frames whose posture data differ the least; and a motion synchronization unit that takes the time points of those two least-different frames within their respective videos as the time synchronization points for side-by-side playback. The advantage of the application is that the motion synchronization system and method help users synchronize videos intelligently.
Description
Technical Field
The application relates to a system and method for synchronizing motions in video files, suitable for an intelligent terminal.
Background
To improve the quality of a person's athletic movements, one often needs to compare two complex sets of movements, yet no effective tool addresses this need. In the existing approach, two videos are played simultaneously, their playback start positions are adjusted manually, and one of the videos is then paused and nudged as needed so that motion details can be compared in depth to understand the differences between the videos.
Synchronizing and comparing a set of videos this way is extremely cumbersome and difficult.
Disclosure of Invention
A motion synchronization system in video files suitable for an intelligent terminal, wherein the intelligent terminal comprises: a display at least for displaying video files; a memory at least for storing the video files to be synchronized; and a processor at least for executing a motion synchronization program. The processor comprises: a motion recognition unit for recognizing the person in a video and generating a digital skeleton; a motion calculation unit for outputting posture data corresponding to the current motion according to the digital skeleton; a motion comparison unit for finding, from the per-frame posture-data differences between the two videos, the two frames whose posture data differ the least; and a motion synchronization unit that takes the time points of those two least-different frames within their respective videos as the time synchronization points for side-by-side playback.
A method for synchronizing motions in video files suitable for an intelligent terminal, wherein the intelligent terminal comprises: a display at least for displaying video files; a memory at least for storing the video files to be synchronized; and a processor at least for executing a motion synchronization program. The motion synchronization method comprises the following steps: recognizing the person in each video and generating a digital skeleton; outputting posture data corresponding to the current motion according to the digital skeleton; finding, from the per-frame posture-data differences between the two videos, the two frames whose posture data differ the least; and taking the time points of those two least-different frames within their respective videos as the time synchronization points when the two videos are played side by side.
The advantage of the application is that the motion synchronization system and method, suitable for an intelligent terminal, help users synchronize videos intelligently.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, provide a further understanding of the application and make its features, objects, and advantages more apparent. The drawings and their description illustrate embodiments of the application and do not limit it. In the drawings:
fig. 1 is a schematic diagram of a digital skeleton generated when a video is processed by a motion synchronization system and a synchronization method in a video file suitable for an intelligent terminal according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solutions, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described here. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to it.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1, the present application provides a motion synchronization system in video files suitable for an intelligent terminal, where the intelligent terminal comprises: a display at least for displaying video files; a memory at least for storing the video files to be synchronized; and a processor at least for executing a motion synchronization program. The processor comprises: a motion recognition unit for recognizing the person in a video and generating a digital skeleton; a motion calculation unit for outputting posture data corresponding to the current motion according to the digital skeleton; a motion comparison unit for finding, from the per-frame posture-data differences between the two videos, the two frames whose posture data differ the least; and a motion synchronization unit that takes the time points of those two least-different frames within their respective videos as the time synchronization points for side-by-side playback.
The intelligent terminal may further comprise a capture unit for capturing video files.
The digital skeleton comprises digital joints, and the posture data comprises position data and angle data of the digital joints.
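As an illustration of how such posture data might be represented and compared, the sketch below models one skeleton frame as a map from joint names to 2-D positions and takes the posture difference between two frames as the sum of per-joint distances. This is a minimal sketch under assumptions: the names `Posture` and `posture_difference` are hypothetical, the patent does not specify a data layout, and the angle data is omitted for brevity.

```python
# Hypothetical posture-data representation; not the patent's actual layout.
import math
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class Posture:
    # Joint name -> (x, y) position, e.g. as produced by a skeletonization model.
    joints: Dict[str, Tuple[float, float]]


def posture_difference(a: Posture, b: Posture) -> float:
    """Sum of Euclidean distances between corresponding joints."""
    shared = a.joints.keys() & b.joints.keys()
    return sum(math.dist(a.joints[j], b.joints[j]) for j in shared)


frame_a = Posture({"wrist": (0.2, 0.5), "elbow": (0.3, 0.4)})
frame_b = Posture({"wrist": (0.25, 0.5), "elbow": (0.3, 0.45)})
diff = posture_difference(frame_a, frame_b)  # 0.05 + 0.05, approximately 0.1
```

A lower `posture_difference` would mean the two frames show more similar poses, which is the quantity the comparison unit minimizes.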
When the motion comparison unit performs the comparison, one frame of the first video file may be compared with a plurality of frames within a set range of the second video file. The set range may use the relative position of each time point within its video as the selection reference.
A motion prompting unit may also be provided to show the user, after synchronization, the part of the digital skeletons that differs most in each pair of corresponding frames.
The application also provides a method for synchronizing motions in video files suitable for an intelligent terminal, wherein the intelligent terminal comprises: a display at least for displaying video files; a memory at least for storing the video files to be synchronized; and a processor at least for executing a motion synchronization program. The motion synchronization method comprises the following steps: recognizing the person in each video and generating a digital skeleton; outputting posture data corresponding to the current motion according to the digital skeleton; finding, from the per-frame posture-data differences between the two videos, the two frames whose posture data differ the least; and taking the time points of those two least-different frames within their respective videos as the time synchronization points when the two videos are played side by side.
As a specific scheme, for two motion videos (different people doing the same motion, or the same person doing the same motion at different times), an AI (artificial intelligence) vision algorithm adjusts the playback start positions (e.g., video A starts 0.xx seconds later than video B) so that the two videos are at their most synchronized positions and play simultaneously. During playback, the most different joint at each moment is marked, and the similarity of the two motion sets is quantified.
Two skeleton sequences, one per video, are obtained with existing computer vision (CV) skeletonization techniques. For different relative playback offsets of the two videos (video A starting 0.xx seconds later than video B), the sum of the distances between all corresponding joints of the two frames at each moment is computed by matrix operations. Within a specified range, the offset at which this difference sum is minimal is found and marked as the synchronization point; the minimal difference sum quantifies the similarity of the two motion sets. At this synchronization point, the joint with the largest distance at each moment is marked. In this way, two videos of the same motion performed differently can be synchronized, and the most different body part at each moment can be indicated.
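The offset search described above can be sketched as follows, under assumptions: a plain brute-force loop stands in for the matrix operations the text mentions, each frame's posture is a map from joint names to 2-D positions, the candidate offsets are limited to a window (the "specified range"), and the function names are illustrative rather than the patent's. The offset that minimizes the summed joint distances over overlapping frames is taken as the synchronization point, and a helper marks the most different joint for a pair of aligned frames.

```python
# Sketch of the synchronization-point search; names are hypothetical.
import math
from typing import Dict, List, Tuple

Pose = Dict[str, Tuple[float, float]]  # joint name -> (x, y)


def joint_distances(a: Pose, b: Pose) -> Dict[str, float]:
    """Per-joint Euclidean distance between two aligned frames."""
    return {j: math.dist(a[j], b[j]) for j in a.keys() & b.keys()}


def find_sync_offset(video_a: List[Pose], video_b: List[Pose],
                     max_offset: int) -> Tuple[int, float]:
    """Return (offset, total_difference): the frame shift of A relative
    to B, within +/- max_offset, minimizing the summed joint distances
    over all overlapping frames."""
    best = (0, float("inf"))
    for off in range(-max_offset, max_offset + 1):
        total, overlap = 0.0, 0
        for i, pose_a in enumerate(video_a):
            j = i + off
            if 0 <= j < len(video_b):
                total += sum(joint_distances(pose_a, video_b[j]).values())
                overlap += 1
        if overlap and total < best[1]:
            best = (off, total)
    return best


def most_different_joint(a: Pose, b: Pose) -> str:
    """Joint with the largest distance between two aligned frames."""
    d = joint_distances(a, b)
    return max(d, key=d.get)
```

For example, if video B is the same pose sequence as video A but starts one frame earlier, `find_sync_offset` returns an offset of 1 with a total difference of 0, and `most_different_joint` can then annotate each aligned frame pair during playback.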
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (2)
1. A motion synchronization system in video files suitable for an intelligent terminal, wherein the intelligent terminal comprises:
a display at least for displaying video files;
a memory at least for storing the video files to be synchronized; and
a processor at least for executing a motion synchronization program;
characterized in that the processor comprises:
a motion recognition unit for recognizing the person in a video and generating a digital skeleton;
a motion calculation unit for outputting posture data corresponding to the current motion according to the digital skeleton;
a motion comparison unit for finding, from the per-frame posture-data differences between the two videos, the two frames whose posture data differ the least; and
a motion synchronization unit that takes the time points of the two least-different frames within their respective videos as the time synchronization points for side-by-side playback.
2. A method for synchronizing motions in video files suitable for an intelligent terminal, wherein the intelligent terminal comprises:
a display at least for displaying video files;
a memory at least for storing the video files to be synchronized; and
a processor at least for executing a motion synchronization program;
characterized in that the motion synchronization method comprises the following steps:
recognizing the person in each video and generating a digital skeleton;
outputting posture data corresponding to the current motion according to the digital skeleton;
finding, from the per-frame posture-data differences between the two videos, the two frames whose posture data differ the least; and
taking the time points of the two least-different frames within their respective videos as the time synchronization points when the two videos are played side by side.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2020110635406 | 2020-09-30 | ||
CN202011063540 | 2020-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113992957A true CN113992957A (en) | 2022-01-28 |
Family
ID=79737659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111168483.2A Pending CN113992957A (en) | 2020-09-30 | 2021-09-29 | Motion synchronization system and method in video file suitable for intelligent terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113992957A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160182769A1 (en) * | 2014-12-19 | 2016-06-23 | Postech Academy - Industry Foundation | Apparatus and method for generating motion effects by analyzing motions of objects |
CN107071559A (en) * | 2017-05-11 | 2017-08-18 | 大连动感智慧科技有限公司 | Many video comparison systems based on crucial frame synchronization |
CN107349594A (en) * | 2017-08-31 | 2017-11-17 | 华中师范大学 | A kind of action evaluation method of virtual Dance System |
KR20180065955A (en) * | 2016-12-07 | 2018-06-18 | (주)스코트 | Device and method for motion synchronization and mutual interference in choreography copyright system |
US20180176423A1 (en) * | 2016-12-15 | 2018-06-21 | Disney Enterprises, Inc. | Apparatus, Systems and Methods For Nonlinear Synchronization Of Action Videos |
CN110418205A (en) * | 2019-07-04 | 2019-11-05 | 安徽华米信息科技有限公司 | Body-building teaching method, device, equipment, system and storage medium |
US20200090408A1 (en) * | 2018-09-14 | 2020-03-19 | Virkar Hemant | Systems and methods for augmented reality body movement guidance and measurement |
CN111565298A (en) * | 2020-04-30 | 2020-08-21 | 腾讯科技(深圳)有限公司 | Video processing method, device, equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||