CN112511889B - Video playing method, device, terminal and storage medium

Video playing method, device, terminal and storage medium

Info

Publication number
CN112511889B
CN112511889B
Authority
CN
China
Prior art keywords
video
action
interface
playing
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011289879.8A
Other languages
Chinese (zh)
Other versions
CN112511889A (en)
Inventor
任超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011289879.8A
Publication of CN112511889A
Application granted
Publication of CN112511889B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content

Abstract

The disclosure relates to a video playing method, apparatus, terminal, and storage medium, and belongs to the field of computer technology. The method comprises the following steps: identifying a video segment to which an action contained in a target video belongs, wherein the video segment comprises a plurality of consecutive video frames; displaying an action presentation entry in a video playing interface of the target video; and, in response to a triggering operation on the action presentation entry, displaying a dynamic picture generated according to the video segment. The method enriches the content displayed to the user and improves interactivity; it also avoids the need to search the target video for the video segments corresponding to the actions, which simplifies user operation.

Description

Video playing method, device, terminal and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular to a video playing method, apparatus, terminal, and storage medium.
Background
With the development of computer technology, watching video has become a common form of entertainment. While a video is playing, users can discuss it by posting comments or bullet-screen comments (barrages); for example, during a live broadcast of a basketball game, users can debate in the comments whether a player committed a foul. However, this video playing mode only shows users the content they discuss themselves, so the displayed content is limited and the interactivity is poor.
Disclosure of Invention
The disclosure provides a video playing method, apparatus, terminal, and storage medium, which enrich the displayed content and improve interactivity.
According to an aspect of the embodiments of the present disclosure, there is provided a video playing method, including:
identifying a video segment to which an action contained in a target video belongs, wherein the video segment comprises a plurality of continuous video frames;
displaying an action presentation entry in a video playing interface of the target video;
and responding to the triggering operation of the action presentation portal, and displaying the dynamic picture generated according to the video clip.
In one possible implementation manner, the identifying the video segment to which the action included in the target video belongs includes:
performing motion trail identification on any object in the target video to obtain a motion trail of the object and a plurality of continuous video frames corresponding to the motion trail;
and extracting video fragments to which the plurality of video frames belong from the target video.
In another possible implementation manner, the identifying the motion trail of any object in the target video to obtain the motion trail of the object and a plurality of continuous video frames corresponding to the motion trail includes:
And carrying out motion trail identification on a target part of any object in the target video to obtain a motion trail of the target part of the object and a plurality of continuous video frames corresponding to the motion trail.
In another possible implementation manner, the video playing method further includes:
determining action information corresponding to the action;
the responding to the triggering operation of the action presentation portal displays the dynamic picture generated according to the video clip, and comprises the following steps:
and responding to the triggering operation of the action display entrance, and displaying the dynamic picture and the action information.
In another possible implementation manner, the video playing method further includes:
performing motion trail identification on any object in the target video to obtain a motion trail of the object;
and determining the action executed by the object according to the motion trail.
In another possible implementation manner, the displaying the moving picture generated according to the video clip in response to the triggering operation of the action presentation portal includes:
displaying the dynamic picture and a video tag in response to the triggering operation on the action presentation portal; or
displaying the dynamic picture and at least one similar video in response to the triggering operation on the action presentation portal, wherein the similar video has the same video tag as the target video.
In another possible implementation manner, the action includes a plurality of actions, the dynamic picture includes a plurality of dynamic pictures, each dynamic picture is generated according to the video clip to which the corresponding action belongs, and the displaying, in response to a triggering operation of the action presentation portal, the dynamic picture generated according to the video clip includes:
and respectively displaying a plurality of dynamic pictures in response to the triggering operation of the action display entrance.
In another possible implementation manner, the displaying the moving picture generated according to the video clip in response to the triggering operation of the action presentation portal includes:
and responding to the triggering operation of the action display entrance, and circularly playing the dynamic picture.
In another possible implementation manner, the moving picture includes a plurality of moving pictures, and the displaying the moving picture generated according to the video clip in response to the triggering operation of the action presentation portal includes:
and responding to the triggering operation of the action display entrance, and sequentially playing a plurality of dynamic pictures according to the sequence of the video clips corresponding to each dynamic picture in the target video.
In another possible implementation manner, the moving picture includes a plurality of moving pictures, and the displaying the moving picture generated according to the video clip in response to the triggering operation of the action presentation portal includes:
displaying thumbnails of a plurality of moving pictures in response to a trigger operation of the action presentation portal;
and in response to the triggering operation of any thumbnail in the plurality of dynamic pictures, circularly playing the dynamic picture corresponding to the triggered thumbnail.
In another possible implementation manner, the displaying the moving picture generated according to the video clip in response to the triggering operation of the action presentation portal includes:
and responding to the triggering operation of the action display entrance, displaying an action display interface different from the video playing interface, wherein the action display interface comprises the dynamic picture.
In another possible implementation manner, the action presentation interface includes a shooting entry, and the video playing method further includes:
responding to the triggering operation of the shooting entrance, and jumping to a shooting interface;
and shooting a video through the shooting interface, and adding a video tag which is the same as the video tag of the target video for the video.
In another possible implementation manner, the video playing method further includes:
closing the action presentation interface synchronously in response to a closing operation on the video playing interface; or
stopping playing the dynamic pictures in the action presentation interface in response to a closing operation on the video playing interface; or
closing the action presentation interface in response to another video being played in the video playing interface; or
stopping playing the dynamic pictures in the action presentation interface in response to another video being played in the video playing interface.
In another possible implementation manner, the identifying the video segment to which the action included in the target video belongs includes:
and identifying the video clip to which the action contained in the target video belongs in the process of playing the target video by the video playing interface.
According to still another aspect of the embodiments of the present disclosure, there is provided a video playing method including:
receiving a dynamic picture sent by a server, wherein the dynamic picture is generated according to a video segment after identifying the video segment to which an action in a target video belongs, and the video segment comprises a plurality of continuous video frames;
displaying an action presentation entry in a video playing interface of the target video;
and responding to the triggering operation of the action display entrance, and displaying the dynamic picture.
According to still another aspect of the embodiments of the present disclosure, there is provided a video playback apparatus including:
a video recognition unit configured to perform recognition of a video clip to which an action included in a target video belongs, the video clip including a plurality of video frames in succession;
an entry display unit configured to display an action presentation entry in a video playback interface of the target video;
and a display unit configured to display, in response to a trigger operation on the action presentation portal, a moving picture generated from the video clip.
In one possible implementation, the video recognition unit includes:
the track recognition subunit is configured to perform motion track recognition on any object in the target video to obtain a motion track of the object and a plurality of continuous video frames corresponding to the motion track;
and a segment extraction subunit configured to perform extraction of video segments to which the plurality of video frames belong from the target video.
In another possible implementation, the track recognition subunit is configured to perform:
and carrying out motion trail identification on a target part of any object in the target video to obtain a motion trail of the target part of the object and a plurality of continuous video frames corresponding to the motion trail.
In another possible implementation manner, the video playing device further includes:
an action information determining unit configured to perform determination of action information corresponding to the action;
the display unit is configured to display the moving picture and the action information in response to a trigger operation on the action presentation entry.
In another possible implementation manner, the video playing device further includes:
the action determining unit is configured to perform motion trail identification on any object in the target video to obtain a motion trail of the object;
the action determining unit is further configured to determine, according to the motion trail, the action performed by the object.
In another possible implementation, the display unit is configured to perform:
displaying the dynamic picture and the video tag in response to the triggering operation on the action presentation portal; or
displaying the dynamic picture and at least one similar video in response to the triggering operation on the action presentation portal, wherein the similar video has the same video tag as the target video.
In another possible implementation manner, the action includes a plurality of actions, the dynamic picture includes a plurality of dynamic pictures, each dynamic picture is generated according to the video clip to which the corresponding action belongs, and the display unit is configured to display the plurality of dynamic pictures respectively in response to the triggering operation on the action presentation portal.
In another possible implementation manner, the display unit is configured to perform a cyclic playing of the dynamic picture in response to a trigger operation on the action presentation portal.
In another possible implementation manner, the moving picture includes a plurality of moving pictures, and the display unit is configured to, in response to a triggering operation on the action presentation entry, sequentially play the plurality of moving pictures according to the order, in the target video, of the video segment corresponding to each moving picture.
In another possible implementation manner, the moving picture includes a plurality of moving pictures, and the display unit is configured to perform:
displaying thumbnails of a plurality of moving pictures in response to a trigger operation of the action presentation portal;
And in response to the triggering operation of any thumbnail in the plurality of dynamic pictures, circularly playing the dynamic picture corresponding to the triggered thumbnail.
In another possible implementation, the display unit is configured to perform:
and responding to the triggering operation of the action display entrance, displaying an action display interface different from the video playing interface, wherein the action display interface comprises the dynamic picture.
In another possible implementation manner, the action presentation interface includes a shooting portal, and the video playing device further includes:
a shooting unit configured to jump to a shooting interface in response to a trigger operation on the shooting entry;
the shooting unit is further configured to shoot a video through the shooting interface, and add to the video a video tag identical to the video tag of the target video.
In another possible implementation manner, the video playing device further includes:
a closing unit configured to synchronously close the action presentation interface in response to a closing operation on the video playing interface; or
the closing unit is further configured to stop playing the dynamic picture in the action presentation interface in response to a closing operation on the video playing interface; or
the closing unit is further configured to close the action presentation interface in response to another video being played in the video playing interface; or
the closing unit is further configured to stop playing the dynamic picture in the action presentation interface in response to another video being played in the video playing interface.
In another possible implementation manner, the video identifying unit is configured to identify a video clip to which an action included in the target video belongs in a process of playing the target video by the video playing interface.
According to still another aspect of the embodiments of the present disclosure, there is provided a video playback apparatus including:
a receiving unit configured to receive a dynamic picture sent by a server, wherein the dynamic picture is generated according to a video segment after the video segment to which an action in a target video belongs is identified, and the video segment comprises a plurality of consecutive video frames;
an entry display unit configured to display an action presentation entry in a video playback interface of the target video;
and a display unit configured to display the moving picture in response to a trigger operation on the action presentation entry.
According to still another aspect of the embodiments of the present disclosure, there is provided a terminal including:
one or more processors;
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the video playback method of the above aspect.
According to still another aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, instructions in which, when executed by a processor of a terminal, enable the terminal to perform the video playing method described in the above aspect.
According to yet another aspect of embodiments of the present disclosure, there is provided a computer program product, which when executed by a processor of a terminal, enables the terminal to perform the video playing method of the above aspect.
According to the video playing method, apparatus, terminal, and storage medium provided by the disclosure, the generated dynamic pictures are displayed to the user during video playing, so that some actions in the target video are presented to the user through the dynamic pictures. Compared with displaying comments or barrages as in the related art, this enriches the content displayed to the user and improves interactivity. In addition, the user can conveniently view certain actions in the target video without searching the target video for the video clips corresponding to those actions, which simplifies user operation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a video playing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating another video playing method according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating another video playing method according to an exemplary embodiment.
Fig. 4 is a schematic diagram of a video playback interface, according to an example embodiment.
FIG. 5 is a schematic diagram of an action presentation interface, shown according to an example embodiment.
FIG. 6 is a schematic diagram of another action presentation interface shown in accordance with an exemplary embodiment.
Fig. 7 is a block diagram of a video playback device according to an exemplary embodiment.
Fig. 8 is a block diagram of another video playback device according to an exemplary embodiment.
Fig. 9 is a block diagram of another video playback device according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating a structure of a server according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description of the present disclosure and the claims and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information (including but not limited to user equipment information, user personal information, etc.) related to the present disclosure is information authorized by the user or sufficiently authorized by each party.
The video playing method provided by the embodiment of the disclosure is applied to various video playing scenes.
For example, the method is applied to the playing of a video published by a user, or to a live-streaming scenario. While the video is playing, in addition to letting users post comments or barrages, the terminal can use the video playing method provided by the embodiments of the disclosure to identify the video, generate a dynamic picture, and display it to the user, thereby enriching the content displayed to the user and improving interactivity with the user.
Fig. 1 is a flowchart of a video playing method according to an exemplary embodiment, and referring to fig. 1, the method is applied to a terminal, which may be a portable, pocket-sized, hand-held, or other type of terminal, such as a mobile phone, a computer, a tablet computer, etc., and the method includes the following steps:
101. a video clip to which an action included in the target video belongs is identified, the video clip including a plurality of consecutive video frames.
102. An action presentation entry is displayed in a video playing interface of the target video.
103. In response to a triggering operation on the action presentation entry, a dynamic picture generated according to the video clip is displayed.
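These three steps are elaborated in the embodiments below. As a rough, purely illustrative sketch of how they might be wired together on the client side, consider the following; every function name here is a hypothetical placeholder rather than an API from the disclosure, and only the order of operations reflects the method.

```python
# Hypothetical client-side flow for steps 101-103: identify action clips,
# show the action presentation entry, and display the generated moving
# pictures when the entry is triggered. All callables are injected stubs.

from typing import Callable, List

Frames = List[object]  # one clip: a run of consecutive decoded frames

def playback_flow(
    target_video: str,
    identify_action_clips: Callable[[str], List[Frames]],     # step 101
    show_action_entry: Callable[[Callable[[], None]], None],  # step 102
    make_moving_picture: Callable[[Frames], bytes],
    display_moving_pictures: Callable[[List[bytes]], None],
) -> None:
    clips = identify_action_clips(target_video)

    def on_entry_triggered() -> None:                          # step 103
        display_moving_pictures([make_moving_picture(c) for c in clips])

    show_action_entry(on_entry_triggered)
```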
According to the method provided by the embodiments of the disclosure, the generated dynamic pictures are displayed to the user during video playing, so that some actions in the target video are presented to the user through the dynamic pictures. Compared with displaying comments or barrages as in the related art, this enriches the content displayed to the user and improves interactivity. It also lets the user conveniently view certain actions in the target video without searching the target video for the video clips corresponding to those actions, which simplifies user operation.
In one possible implementation, identifying a video segment to which an action contained in a target video belongs includes:
carrying out motion trail identification on any object in the target video to obtain a motion trail of the object and a plurality of continuous video frames corresponding to the motion trail;
and extracting video fragments to which a plurality of video frames belong from the target video.
In another possible implementation manner, motion trail identification is performed on any object in the target video to obtain a motion trail of the object and a plurality of continuous video frames corresponding to the motion trail, including:
and carrying out motion trail identification on a target part of any object in the target video to obtain a motion trail of the target part of the object and a plurality of continuous video frames corresponding to the motion trail.
In another possible implementation manner, the video playing method further includes:
determining action information corresponding to the action;
in response to a triggering operation of the action presentation portal, displaying a moving picture generated from the video clip, comprising:
and responding to the triggering operation of the action display entrance, and displaying the dynamic picture and the action information.
In another possible implementation manner, the video playing method further includes:
carrying out motion trail identification on any object in the target video to obtain the motion trail of the object;
and determining the action executed by the object according to the motion trail.
In another possible implementation manner, in response to a trigger operation of the action presentation portal, displaying a moving picture generated according to the video clip includes:
displaying the dynamic picture and a video tag in response to the triggering operation on the action presentation portal; or
displaying the dynamic picture and at least one similar video in response to the triggering operation on the action presentation portal, wherein the similar video has the same video tag as the target video.
In another possible implementation manner, the action includes a plurality of actions, the moving picture includes a plurality of moving pictures, each moving picture is generated according to the video clip to which the corresponding action belongs, and displaying, in response to a triggering operation of the action presentation portal, the moving picture generated according to the video clip includes:
And respectively displaying a plurality of dynamic pictures in response to the triggering operation of the action display entrance.
In another possible implementation manner, in response to a trigger operation of the action presentation portal, displaying a moving picture generated according to the video clip includes:
and responding to the triggering operation of the action display entrance, and circularly playing the dynamic picture.
In another possible implementation, the moving picture includes a plurality of moving pictures, and displaying the moving picture generated from the video clip in response to a trigger operation to the action presentation portal includes:
and responding to the triggering operation of the action display entrance, and sequentially playing a plurality of dynamic pictures according to the sequence of the video clips corresponding to each dynamic picture in the target video.
In another possible implementation, the moving picture includes a plurality of moving pictures, and displaying the moving picture generated from the video clip in response to a trigger operation to the action presentation portal includes:
displaying thumbnails of a plurality of moving pictures in response to a trigger operation of the action presentation portal;
and in response to triggering operation of any thumbnail in the plurality of dynamic pictures, circularly playing the dynamic picture corresponding to the triggered thumbnail.
In another possible implementation manner, in response to a trigger operation of the action presentation portal, displaying a moving picture generated according to the video clip includes:
And responding to the triggering operation of the action display entrance, displaying an action display interface different from the video playing interface, wherein the action display interface comprises a dynamic picture.
In another possible implementation manner, the action display interface includes a shooting inlet, and the video playing method further includes:
responding to triggering operation of a shooting entrance, and jumping to a shooting interface;
and shooting the video through a shooting interface, and adding the video tag which is the same as the video tag of the target video to the video.
In another possible implementation manner, the video playing method further includes:
closing the action presentation interface synchronously in response to a closing operation on the video playing interface; or
stopping playing the dynamic pictures in the action presentation interface in response to a closing operation on the video playing interface; or
closing the action presentation interface in response to another video being played in the video playing interface; or
stopping playing the dynamic pictures in the action presentation interface in response to another video being played in the video playing interface.
In another possible implementation, identifying a video segment to which an action included in a target video belongs includes:
and identifying the video clip to which the action contained in the target video belongs in the process of playing the target video by the video playing interface.
Fig. 2 is a flowchart illustrating another video playing method according to an exemplary embodiment, and referring to fig. 2, the method is applied to a terminal, and includes the following steps:
201. A dynamic picture sent by a server is received, where the dynamic picture is generated from a video clip after the video clip to which an action in the target video belongs is identified, the video clip including a plurality of consecutive video frames.
202. An action presentation entry is displayed in a video playing interface of the target video.
203. In response to a triggering operation on the action presentation entry, the dynamic picture is displayed.
According to the method provided by the embodiments of the disclosure, the generated dynamic pictures are displayed to the user during video playing, so that some actions in the target video are presented to the user through the dynamic pictures. Compared with displaying comments or barrages as in the related art, this enriches the content displayed to the user and improves interactivity. It also lets the user conveniently view certain actions in the target video without searching the target video for the video clips corresponding to those actions, which simplifies user operation.
Fig. 3 is a flowchart illustrating another video playing method according to an exemplary embodiment, referring to fig. 3, the method is applied to a terminal, and includes the following steps:
301. The terminal performs motion trajectory recognition on any object in the target video to obtain the motion trajectory of the object and a plurality of consecutive video frames corresponding to the trajectory.
In the embodiment of the disclosure, the terminal identifies the video clip to which an action included in the target video belongs. The target video is a live video or a video published by a user, for example a sports event being broadcast live or a game clip published by a user. The actions included in the target video are actions performed by any object in the target video, and the video clip includes a plurality of consecutive video frames. For example, if the target video is a video of a basketball game, the object is a player in the game and the action is an action performed by any player.
In one possible implementation manner, the video clip to which an action included in the target video belongs is identified while the terminal plays the target video in the video playing interface. Here, "playing the target video in the video playing interface" means that the video loaded in the video playing interface is the target video, which may be in either a playing state or a paused state.
In one possible implementation manner, the terminal is provided with a target application, plays a target video through the target application, and identifies a video clip to which an action included in the target video belongs. For example, the target application is a video play application or a live application.
The terminal identifies the target video, which includes identifying the motion trajectory of an object in the target video. Motion trajectory recognition is performed on any object to determine its motion trajectory; because the trajectory is formed by connecting the object's motion across a plurality of consecutive video frames, determining the trajectory also determines the consecutive video frames corresponding to it. In one possible implementation, the motion trajectory of the object is identified using artificial intelligence (AI) techniques, for example a trajectory recognition model.
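The disclosure does not fix a particular algorithm for this step. As one rough illustration (an assumption, not the disclosed model), a single tracked point's trajectory and the consecutive frame indices it spans can be recovered with OpenCV's pyramidal Lucas-Kanade optical flow:

```python
# Rough illustration only: track one point through a video with sparse
# optical flow, recording (frame_index, x, y). A production system would
# more likely use a learned trajectory-recognition model as the text notes.

import cv2
import numpy as np

def track_point(video_path: str, start_frame: int, start_xy: tuple):
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    ok, prev = cap.read()
    if not ok:
        cap.release()
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pt = np.array([[start_xy]], dtype=np.float32)            # shape (1, 1, 2)
    trajectory = [(start_frame, float(start_xy[0]), float(start_xy[1]))]

    frame_idx = start_frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pt, None)
        if status[0][0] == 0:                                # tracking lost: trajectory ends here
            break
        trajectory.append((frame_idx, float(nxt[0, 0, 0]), float(nxt[0, 0, 1])))
        prev_gray, pt = gray, nxt
    cap.release()
    # The trajectory and the consecutive frames it covers (start_frame..frame_idx).
    return trajectory
```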
In one possible implementation manner, the terminal performs motion trajectory recognition on a target part of any object in the target video, and obtains the motion trajectory of that target part and a plurality of consecutive video frames corresponding to the trajectory. Because an action is performed by particular parts of the body, only the motion trajectories of those parts of the object need to be identified. The target part is any part of the object, for example an arm, a leg, or a foot.
In another possible implementation manner, some actions require multiple parts to cooperate, so the target part includes multiple parts; that is, the terminal performs motion trajectory recognition on multiple target parts of any object in the target video to obtain the motion trajectories of those target parts and the plurality of consecutive video frames corresponding to the trajectories. For example, a shooting action requires the hand and the arm to work together, so the target parts to be identified include the arm and the hand.
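As a hedged example of part-level trajectory recognition (an off-the-shelf substitute, not the method claimed here), a per-frame pose estimator such as MediaPipe Pose can record a specific keypoint, for instance the wrist, across frames:

```python
# Illustrative only: recover the trajectory of one body part (the right
# wrist) by running a pose estimator on every frame of the video.

import cv2
import mediapipe as mp

def wrist_trajectory(video_path: str):
    mp_pose = mp.solutions.pose
    cap = cv2.VideoCapture(video_path)
    trajectory = []                      # (frame_idx, x, y) in normalized image coordinates
    frame_idx = -1
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame_idx += 1
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:    # a person was detected in this frame
                wrist = result.pose_landmarks.landmark[mp_pose.PoseLandmark.RIGHT_WRIST]
                trajectory.append((frame_idx, wrist.x, wrist.y))
    cap.release()
    return trajectory
```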
In addition, in one possible implementation manner, identifying the target video further includes identifying its content to obtain the video content of the target video, so that a video tag of the target video can be determined according to the video content. After the terminal obtains the video tag, it displays the tag in the video playing interface while the target video is playing, or displays the tag later together with the dynamic picture.
In one possible implementation, for a live broadcast of a sports event, when identifying the target video the terminal can also identify whether an object in the target video commits a foul, whether a goal is valid, whether the game is paused, and so on.
302. The terminal extracts, from the target video, the video clip to which the plurality of video frames belong.
In one possible implementation manner, after identifying the plurality of video frames, the terminal extracts them from the target video and splices them together according to their temporal order in the target video to obtain the video clip to which they belong. Alternatively, a video clip containing the plurality of video frames is cut directly from the target video.
In one possible implementation, since the format of the video clip differs from that of the moving picture, after obtaining the video clip the terminal converts the format of each video frame in the clip and generates the moving picture from the converted clip. For example, the format of the moving picture includes GIF (Graphics Interchange Format), FLV (Flash Video, a streaming media format), and the like.
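A hedged sketch of step 302 together with the format conversion above: the identified run of consecutive frames is read from the target video in time order and re-encoded as an animated GIF. OpenCV and imageio are assumed tooling (not named in the disclosure), and the frame range and file paths are purely illustrative.

```python
# Read an identified frame range from the target video and re-encode it
# as an animated GIF (one possible moving-picture format per the text).

import cv2
import imageio

def clip_to_gif(video_path: str, start_frame: int, end_frame: int, gif_path: str) -> str:
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    frames = []
    for _ in range(start_frame, end_frame + 1):              # keep the original time order
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # GIF writer expects RGB
    cap.release()
    if frames:
        imageio.mimsave(gif_path, frames, duration=1.0 / fps)  # seconds per frame
    return gif_path

# Illustrative usage on a hypothetical recording:
# clip_to_gif("game.mp4", 1200, 1275, "shot.gif")
```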
303. The terminal determines, according to the motion trajectory, the action performed by the object and the action information corresponding to that action.
The terminal determines, according to the motion trajectory, the action corresponding to that trajectory. For example, in a video of a basketball game, if the motion trajectory of the object's hand is identified as a parabola, the corresponding action is determined to be a shooting action.
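For the parabola example, a toy heuristic (purely illustrative; the disclosure leaves the actual recognition model unspecified) might fit a quadratic to the tracked hand points and label a good fit as a shooting action:

```python
# Toy heuristic: label a trajectory as a shooting action if the tracked
# points fit a parabola well. Points are (frame_idx, x, y); the residual
# threshold is an arbitrary illustrative value in pixels.

import numpy as np

def classify_action(trajectory, max_residual: float = 5.0) -> str:
    if len(trajectory) < 5:
        return "unknown"
    xs = np.array([p[1] for p in trajectory], dtype=float)
    ys = np.array([p[2] for p in trajectory], dtype=float)
    coeffs = np.polyfit(xs, ys, 2)                    # fit y = a*x^2 + b*x + c
    residual = float(np.mean(np.abs(np.polyval(coeffs, xs) - ys)))
    if residual < max_residual and abs(coeffs[0]) > 1e-4:
        return "shooting"                             # parabola-like hand path
    return "other"
```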
After the action performed by the object is determined, the action information corresponding to that action is determined. The action information includes professional introduction information about the action or introduction information about related personnel, and the action information is any one of text, audio, or video. For sporting events, the related personnel are professional referees or athletes. For example, if the action performed by the object is a basket action, video introduction information about that action is determined, and the action is described in detail in that video.
In one possible implementation, action information corresponding to each action is preset, and after the action performed by the object is determined, the action information matching that action is determined. After determining the action performed by the object, the terminal sends an information acquisition request to the server; the request carries the determined action, and the server returns the action information corresponding to that action to the terminal.
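A sketch of that information-acquisition request is shown below. The endpoint, field names, and response shape are assumptions for illustration; only the flow (the terminal sends the recognized action, the server returns its action information) comes from the text.

```python
# Hypothetical request/response for fetching preset action information
# from the server once an action (e.g. "shooting") has been recognized.

import requests

def fetch_action_info(api_base: str, action_name: str) -> dict:
    resp = requests.post(
        f"{api_base}/action-info",        # hypothetical endpoint
        json={"action": action_name},     # the determined action carried in the request
        timeout=5,
    )
    resp.raise_for_status()
    # Assumed response shape, e.g. {"text": "...", "audio_url": "...", "video_url": "..."}
    return resp.json()
```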
In one possible implementation, the terminal may also identify an object in the target video and determine the action performed by the object by combining the identified motion trajectory with the object associated with that trajectory. Taking a shooting action as an example, in addition to identifying the motion trajectories of the arm and the hand, the motion trajectory of the basketball itself needs to be identified to determine whether the ball was shot, and thus whether the object completed the shooting action.
304. The terminal displays an action presentation entry in the video playing interface of the target video.
In one possible implementation manner, if the target video is identified by the terminal in the playing process, displaying an action display entry in the video playing interface after the terminal identifies the action; if the target video is identified by the server, the video playing interface can display the action display entrance when the target video starts to be played.
In one possible implementation manner, the action presentation entry is provided with a corresponding close button, and the user triggers the close button to close the entry. Alternatively, the entry is provided with a corresponding hide button: when the user triggers it, the terminal hides the entry, and when the user triggers it again, the terminal displays the entry again, so that the entry does not interfere with the user's viewing of the target video.
For example, referring to fig. 4, the action presentation entry is the pop-up button "view highlight action analysis"; when the user subsequently clicks this button, the moving picture is displayed.
305. In response to a triggering operation on the action presentation entry, the terminal displays the dynamic picture and the action information.
In one possible implementation manner, after obtaining the video clip, the terminal generates the dynamic picture from the clip before the action presentation entry is triggered. Alternatively, the terminal generates the dynamic picture from the clip only after the entry is triggered, which avoids unnecessary work by the terminal and avoids generating a dynamic picture when the user does not need one displayed.
In one possible implementation, in response to a triggering operation on the action presentation entry, the terminal displays an action presentation interface different from the video playing interface, and the action presentation interface includes the dynamic picture. Referring to fig. 5, the action presentation interface is displayed as a floating window on top of the video playing interface; it can sit at any position over that interface, and the user can drag it to change its position. Alternatively, referring to fig. 6, the action presentation interface is displayed in an embedded form alongside the video playing interface, with no overlap between the two, so that no occlusion occurs, and the sizes of both interfaces can be adjusted. Alternatively, the terminal jumps from the video playing interface to a new interface that serves as the action presentation interface and is similar to the one shown in fig. 5.
When the action display interface is displayed in a floating window or embedded mode, the target video is normally played in the video playing interface or is in a pause state.
In one possible implementation, the terminal displays the moving picture and the motion information in the motion presentation interface. Optionally, displaying action information corresponding to each dynamic picture below each dynamic picture; or, the dynamic pictures belonging to the same action are put together for display, and corresponding action information is displayed below a plurality of dynamic pictures; or the action display interface comprises a picture display area and an information display area, wherein the picture display area displays a dynamic picture, and the information display area displays action information.
In one possible implementation manner, if the motion information is audio or video, when the motion information is played, the target video is automatically switched to a pause state, and after the motion information is played, the target video is automatically switched to a play state.
In one possible implementation, the moving picture has the following display modes:
First: when the terminal displays a single dynamic picture, it plays that picture in a loop in response to the triggering operation on the action presentation entry.
Second: the actions obtained by identifying the target video include a plurality of actions; correspondingly, there are a plurality of dynamic pictures, each generated from the video clip to which the corresponding action belongs. In response to the triggering operation on the action presentation entry, the terminal displays the plurality of dynamic pictures respectively.
Third: there are a plurality of dynamic pictures, and in response to the triggering operation on the action presentation entry the terminal plays them in sequence according to the order, in the target video, of the video clip corresponding to each picture (see the ordering sketch after this list). That is, the dynamic pictures are played in chronological order starting from the first one, each is played once, and after the last one finishes the first one is played again.
Fourth: in response to the triggering operation on the action presentation entry, the terminal displays thumbnails of the plurality of dynamic pictures; in response to a triggering operation on any of these thumbnails, it plays the corresponding dynamic picture in a loop. The thumbnail of a dynamic picture simply shows one frame of that picture.
Fifth: there are a plurality of dynamic pictures, and in response to the triggering operation on the action presentation entry the terminal determines which dynamic pictures belong to the same action type and displays those pictures together, making it convenient for the user to view all pictures of a given action type.
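The ordering sketch referenced in the third mode is given below. It assumes each dynamic picture carries the start frame of the clip it was generated from; the MovingPicture type and the play callback are hypothetical.

```python
# Play a set of moving pictures in the order of their source clips in the
# target video, looping back to the first after the last one finishes.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MovingPicture:
    gif_path: str
    clip_start_frame: int   # position of the source video clip in the target video

def play_in_clip_order(pictures: List[MovingPicture], play: Callable[[str], None]) -> None:
    ordered = sorted(pictures, key=lambda p: p.clip_start_frame)
    while ordered:                       # loop: after the last picture, start over
        for pic in ordered:
            play(pic.gif_path)           # play each moving picture once per pass
```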
In one possible implementation manner, after the terminal displays the dynamic picture and the action information, the terminal synchronously closes the action display interface in response to closing operation of the video playing interface; or the terminal responds to the closing operation of the video playing interface and stops playing the dynamic pictures in the action display interface; or the terminal responds to other videos played in the video playing interface, and closes the action display interface; or the terminal responds to the playing of other videos in the video playing interface, and stops playing the dynamic pictures in the action display interface.
In addition, in one possible implementation, when identifying the target video, after a highlight action is identified the terminal automatically captures the plurality of video frames corresponding to that action, and displays these captured frames in response to the triggering operation on the action presentation entry.
In one possible implementation, after the terminal displays a moving picture, the user long-presses it, and in response to the long-press operation the terminal displays a save button for saving the moving picture. Similarly, if captured video frames are displayed, they can be saved in the same way as the moving picture.
In one possible implementation, the terminal displays the moving picture and the video tag in response to the triggering operation on the action presentation entry. Alternatively, the terminal displays the moving picture and at least one similar video in response to that triggering operation, where a similar video has the same video tag as the target video; that is, videos similar to the target video are recommended to the user. A similar video is a video shot by a user that carries the same video tag as the target video, or a video of a related game.
In one possible implementation, the moving picture and the at least one similar video are displayed in an action presentation interface, the action presentation interface including a picture display area and a video display area, the moving picture being displayed in the picture display area and the at least one similar video being displayed in the video display area.
In one possible implementation, in response to a triggering operation on the shooting entry, the terminal jumps to a shooting interface, shoots a video through that interface, and adds to the shot video the same video tag as that of the target video, so that the shot video is treated as the same type of video as the target video. After the shot video is published, it can later serve as a similar video of the target video. The shooting entry is a shooting button or a link.
In one possible implementation, the motion pictures, motion information, similar videos, and capture portals are displayed in a motion presentation interface. For example, different areas are included in the action presentation interface, namely a picture display area, an information display area, a video display area and a shooting entrance area in sequence from top to bottom.
It should be noted that, in the embodiment of the disclosure, if the target video is a video published by a user, either the terminal or the server identifies the target video. If the terminal identifies the target video, it does so while the target video is playing; if the server identifies the target video, it does so after the user uploads the video to the server, and when the terminal plays the target video, the server sends the identification result to the terminal. If the target video is a live video, the terminal identifies it during the live broadcast. The server may be a single server, a server cluster formed by a plurality of servers, or a cloud computing service center.
Another point to note is that if the target video is a video published by a user, the complete target video can be identified; that is, the dynamic pictures obtained by identification may include pictures corresponding to video clips that have not yet been played. If the target video is a live video, the dynamic pictures obtained by identification correspond only to video clips that have already been played.
In another embodiment, the process of identifying the target video, generating the moving picture, and determining the action information or the video tag by the terminal may be performed by a server, and the implementation of server identification is similar to the implementation of terminal identification. In this case, the terminal receives the moving picture transmitted from the server, displays the motion presentation entry in the video playback interface of the target video, and displays the moving picture in response to a trigger operation to the motion presentation entry.
And for the action information, the terminal receives the dynamic picture and the action information sent by the server, displays an action display entry in a video playing interface of the target video, and displays the dynamic picture and the action information in response to the triggering operation of the action display entry. And for the similar videos, the terminal receives the similar videos sent by the server, displays an action display entry in a video playing interface of the target video, and displays a dynamic picture and the similar videos in response to the triggering operation of the action display entry.
In another embodiment, the terminal displays the action presentation entry in the video playing interface of the target video and, in response to a trigger operation on the entry, sends an acquisition request carrying the video identifier of the target video to the server. The server sends the dynamic picture, action information, or similar video corresponding to the target video to the terminal according to the acquisition request, and the terminal receives and displays them.
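A hedged sketch of that acquisition request follows; the endpoint path and response keys are assumptions, and only the flow (the request carries the target video's identifier, the server returns the dynamic pictures, action information, or similar videos) is taken from the text.

```python
# Hypothetical terminal-side request for the server-identified results of
# a target video, keyed by its video identifier.

import requests

def fetch_presentation_data(api_base: str, video_id: str) -> dict:
    resp = requests.get(
        f"{api_base}/videos/{video_id}/highlights",   # hypothetical endpoint
        timeout=5,
    )
    resp.raise_for_status()
    # Assumed keys: "moving_pictures", "action_info", "similar_videos".
    return resp.json()
```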
According to the method provided by the embodiments of the disclosure, the generated dynamic pictures are displayed to the user during video playing, so that some actions in the target video are presented to the user through the dynamic pictures. Compared with displaying comments or barrages as in the related art, this enriches the content displayed to the user and improves interactivity. It also lets the user conveniently view certain actions in the target video without searching the target video for the video clips corresponding to those actions, which simplifies user operation.
Moreover, identification through artificial intelligence enhances the user's sense of immersion, attracting more users to watch the videos and bringing more traffic to the related field, and the artificial intelligence model can keep learning to produce more accurate recognition results. In addition, in the sports field, identifying the target video to obtain dynamic pictures and action information lets the user conveniently review highlight actions; users who know little about the sport can understand the current action through the displayed action information, which increases user interest and improves user stickiness.
Displaying videos similar to the target video also makes it convenient for the user to view them, which increases the play count of those similar videos.
Fig. 7 is a block diagram of a video playback device according to an exemplary embodiment. Referring to fig. 7, the apparatus includes:
a video recognition unit 701 configured to perform recognition of a video clip to which an action included in a target video belongs, the video clip including a plurality of video frames in succession;
an entry display unit 702 configured to display an action presentation entry in a video playback interface of a target video;
a display unit 703 configured to display, in response to a trigger operation on the action presentation entry, a moving picture generated from the video clip.
According to the device provided by the embodiment of the disclosure, the generated dynamic pictures are displayed to the user in the video playing process, and some actions in the target video are displayed to the user through the dynamic pictures.
In one possible implementation, referring to fig. 8, the video recognition unit 701 includes:
the track recognition subunit 7011 is configured to perform motion track recognition on any object in the target video to obtain a motion track of the object and a plurality of continuous video frames corresponding to the motion track;
the clip extraction subunit 7012 is configured to perform extraction of a video clip to which a plurality of video frames belong from the target video.
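A minimal sketch of how these two subunits could cooperate, assuming an external `track_object` callable stands in for whatever tracking model is used (the disclosure does not name one); the trajectory recognition step collects the tracked points and the frame indices they belong to, and the clip extraction step slices that consecutive span of frames out of the target video:

```python
import cv2  # OpenCV, used here only to decode frames


def read_frames(video_path: str) -> list:
    """Decode the target video into an in-memory list of frames."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames


def recognize_trajectory(frames, track_object):
    """Trajectory recognition subunit (sketch): run the assumed tracker on
    each frame and return the motion trajectory plus the indices of the
    frames in which the object was found."""
    trajectory, indices = [], []
    for i, frame in enumerate(frames):
        point = track_object(frame)  # e.g. centre of the tracked object or part
        if point is not None:
            trajectory.append(point)
            indices.append(i)
    return trajectory, indices


def extract_clip(frames, indices):
    """Clip extraction subunit (sketch): take the consecutive span of frames
    to which the recognized motion trajectory belongs."""
    if not indices:
        return []
    return frames[indices[0]:indices[-1] + 1]
```

The extracted frames could then be encoded into the moving picture (for example a GIF) that is later shown behind the action presentation entry.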
In another possible implementation, referring to fig. 8, the trajectory identification subunit 7011 is configured to perform:
and carrying out motion trail identification on a target part of any object in the target video to obtain a motion trail of the target part of the object and a plurality of continuous video frames corresponding to the motion trail.
In another possible implementation, referring to fig. 8, the video playing device further includes:
an action information determining unit 704 configured to perform determination of action information corresponding to the action;
a display unit 703 configured to display the moving picture and the action information in response to a trigger operation on the action presentation entry.
In another possible implementation, referring to fig. 8, the video playing device further includes:
the action determining unit 705 is configured to perform motion trail identification on any object in the target video to obtain a motion trail of the object;
The action determining unit 705 is further configured to determine the action performed by the object according to the motion trajectory.
In another possible implementation, the display unit 703 is configured to perform:
responding to the triggering operation of the action display entrance, and displaying a dynamic picture and a video tag; or
responding to the triggering operation of the action presentation portal, and displaying the dynamic picture and at least one similar video, wherein the similar video has the same video tag as the target video.
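For the second branch, a small sketch under the assumption that every candidate video carries a list of tags; a candidate counts as similar when it shares at least one tag with the target video:

```python
def find_similar_videos(target_tags, candidate_videos, limit=5):
    """Return at most `limit` candidates whose tag set intersects the
    target video's tags (the similarity criterion described above)."""
    target = set(target_tags)
    return [v for v in candidate_videos if target & set(v["tags"])][:limit]


# hypothetical candidates with a "tags" field, for illustration only
candidates = [{"id": "v1", "tags": ["diving"]}, {"id": "v2", "tags": ["skiing"]}]
print(find_similar_videos(["diving", "sports"], candidates))  # keeps only "v1"
```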
In another possible implementation, the target video contains a plurality of actions and there are a plurality of moving pictures, each moving picture being generated from the video clip to which the corresponding action belongs, and the display unit 703 is configured to display the plurality of moving pictures respectively in response to a trigger operation on the action presentation portal.
In another possible implementation, the display unit 703 is configured to loop-play the moving picture in response to a trigger operation on the action presentation portal.
In another possible implementation, there are a plurality of moving pictures, and the display unit 703 is configured to, in response to a trigger operation on the action presentation portal, play the plurality of moving pictures sequentially in the order in which their corresponding video clips appear in the target video.
In another possible implementation, there are a plurality of moving pictures, and the display unit 703 is configured to perform:
displaying thumbnails of a plurality of moving pictures in response to a trigger operation of the action presentation portal;
and in response to triggering operation of any thumbnail in the plurality of dynamic pictures, circularly playing the dynamic picture corresponding to the triggered thumbnail.
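The multi-picture behaviours just listed (sequential play in clip order, and loop-playing the picture whose thumbnail was triggered) can be sketched as follows; `player` is a hypothetical UI object with a `loop_play` method, since the disclosure does not specify a player API:

```python
from dataclasses import dataclass


@dataclass
class MovingPicture:
    gif_path: str
    clip_start_frame: int   # where the source clip begins in the target video
    thumbnail_path: str


def ordered_for_sequential_play(pictures):
    """Sequential mode: order the moving pictures by where their source
    clips appear in the target video, then play them one after another."""
    return sorted(pictures, key=lambda p: p.clip_start_frame)


def on_thumbnail_triggered(pictures, index, player):
    """Thumbnail mode: loop-play only the moving picture whose thumbnail
    was triggered (assumed `player.loop_play` interface)."""
    player.loop_play(pictures[index].gif_path)
```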
In another possible implementation, the display unit 703 is configured to perform:
and responding to the triggering operation of the action display entrance, displaying an action display interface different from the video playing interface, wherein the action display interface comprises a dynamic picture.
In another possible implementation manner, the action presentation interface includes a shooting portal, referring to fig. 8, and the video playing device further includes:
a photographing unit 706 configured to perform a jump to a photographing interface in response to a trigger operation to a photographing portal;
the shooting unit 706 is further configured to perform shooting of the video through the shooting interface, adding the same video tag as that of the target video to the video.
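A sketch of the shooting-entry behaviour, assuming hypothetical `ui` and `uploader` objects (the disclosure only requires that the newly shot video ends up carrying the same video tag as the target video):

```python
def on_shoot_entry_triggered(ui):
    """Jump from the action presentation interface to the shooting interface."""
    ui.open_shooting_interface()


def on_video_captured(captured_video: dict, target_video_tag: str, uploader):
    """Add the target video's tag to the freshly shot video before publishing."""
    tags = set(captured_video.get("tags", []))
    tags.add(target_video_tag)
    captured_video["tags"] = sorted(tags)
    uploader.publish(captured_video)
```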
In another possible implementation, referring to fig. 8, the video playing device further includes:
a closing unit 707 configured to perform synchronous closing of the action presentation interface in response to a closing operation of the video playback interface; or
the closing unit 707 is further configured to perform, in response to a closing operation of the video playback interface, stopping playback of the moving picture in the action presentation interface; or
the closing unit 707 is further configured to perform closing the action presentation interface in response to other video being played in the video playing interface; or
the closing unit 707 is further configured to perform stopping playing the moving picture in the action presentation interface in response to other video being played in the video playing interface.
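The four closing behaviours above amount to two events (the playing interface is closed, or another video starts playing) combined with two reactions (close the presentation interface, or only stop the moving picture). A sketch under the assumption of a simple `presentation` handle with `close()` and `stop_moving_picture()` methods:

```python
CLOSE_INTERFACE = "close_interface"
STOP_PLAYBACK = "stop_playback"


def on_play_interface_event(event: str, strategy: str, presentation):
    """Apply the configured closing behaviour when the video playing
    interface is closed or switches to another video."""
    if event not in ("interface_closed", "other_video_played"):
        return
    if strategy == CLOSE_INTERFACE:
        presentation.close()
    elif strategy == STOP_PLAYBACK:
        presentation.stop_moving_picture()
```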
In another possible implementation manner, the video identifying unit 701 is configured to identify, during the playing of the target video by the video playing interface, a video clip to which an action included in the target video belongs.
The specific manner in which the individual units of the apparatus in the above embodiments perform their operations has been described in detail in the method embodiments and is not repeated here.
Fig. 9 is a block diagram of another video playback device according to an exemplary embodiment. Referring to fig. 9, the apparatus includes:
a receiving unit 901, configured to execute receiving a moving picture sent by a server, where the moving picture is generated according to a video segment after identifying a video segment to which an action in a target video belongs, and the video segment includes a plurality of continuous video frames;
An entry display unit 902 configured to display an action presentation entry in a video playback interface of a target video;
a display unit 903 configured to display the moving picture in response to a trigger operation on the action presentation entry.
According to the device provided by the embodiment of the disclosure, the generated dynamic pictures are displayed to the user in the video playing process, and some actions in the target video are displayed to the user through the dynamic pictures.
The specific manner in which the individual units of the apparatus in the above embodiments perform their operations has been described in detail in the method embodiments and is not repeated here.
Fig. 10 is a block diagram illustrating a structure of a terminal 1000 according to an exemplary embodiment. The terminal 1000 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Terminal 1000 includes: a processor 1001 and a memory 1002.
The processor 1001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor; the main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in an awake state, and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1001 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. Memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1002 is used to store at least one program code for execution by processor 1001 to implement the video playback method provided by the method embodiments of the present disclosure.
In some embodiments, terminal 1000 can optionally further include: a peripheral interface 1003, and at least one peripheral. The processor 1001, the memory 1002, and the peripheral interface 1003 may be connected by a bus or signal line. The various peripheral devices may be connected to the peripheral device interface 1003 via a bus, signal wire, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, a display 1005, a camera assembly 1006, audio circuitry 1007, a positioning assembly 1008, and a power supply 1009.
Peripheral interface 1003 may be used to connect I/O (Input/Output) related at least one peripheral to processor 1001 and memory 1002. In some embodiments, processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1001, memory 1002, and peripheral interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1004 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1004 may also include NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1005 is a touch screen, the display 1005 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 1001 as a control signal for processing. At this time, the display 1005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1005, disposed on the front panel of terminal 1000; in other embodiments, there may be at least two displays 1005, separately disposed on different surfaces of terminal 1000 or in a folded design; in still other embodiments, display 1005 may be a flexible display disposed on a curved surface or a folded surface of terminal 1000. The display 1005 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 1005 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1006 is used to capture images or video. Optionally, camera assembly 1006 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting, virtual reality (VR) shooting, or other fused shooting functions. In some embodiments, camera assembly 1006 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used to collect sound waves of users and the environment, convert the sound waves into electrical signals, and input them to the processor 1001 for processing or to the radio frequency circuit 1004 for voice communication. For stereo acquisition or noise reduction, there may be a plurality of microphones, each located at a different portion of terminal 1000. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert the electrical signal not only into sound waves audible to humans but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 1007 may also include a headphone jack.
The location component 1008 is used to locate the current geographic location of terminal 1000 to enable navigation or LBS (Location Based Service). The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1009 is used to power the various components in terminal 1000. The power source 1009 may be alternating current, direct current, disposable battery or rechargeable battery. When the power source 1009 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1000 can further include one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor 1011, gyroscope sensor 1012, pressure sensor 1013, fingerprint sensor 1014, optical sensor 1015, and proximity sensor 1016.
The acceleration sensor 1011 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1000. For example, the acceleration sensor 1011 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1001 may control the display screen 1005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1011. The acceleration sensor 1011 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1012 may detect the body direction and the rotation angle of the terminal 1000, and the gyro sensor 1012 may collect the 3D motion of the user to the terminal 1000 in cooperation with the acceleration sensor 1011. The processor 1001 may implement the following functions according to the data collected by the gyro sensor 1012: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 1013 may be disposed on a side frame of terminal 1000 and/or on an underlying layer of display 1005. When the pressure sensor 1013 is provided at a side frame of the terminal 1000, a grip signal of the terminal 1000 by a user can be detected, and the processor 1001 performs right-and-left hand recognition or quick operation according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is provided at the lower layer of the display screen 1005, the processor 1001 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect a fingerprint of the user, and the processor 1001 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. Fingerprint sensor 1014 may be disposed on the front, back, or side of terminal 1000. When a physical key or vendor Logo is provided on terminal 1000, fingerprint sensor 1014 may be integrated with the physical key or vendor Logo.
The optical sensor 1015 is used to collect ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the display screen 1005 based on the ambient light intensity collected by the optical sensor 1015. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1005 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1005 is turned down. In another embodiment, the processor 1001 may dynamically adjust the shooting parameters of the camera module 1006 according to the ambient light intensity collected by the optical sensor 1015.
Proximity sensor 1016, also known as a distance sensor, is disposed on the front panel of terminal 1000. Proximity sensor 1016 is used to collect the distance between the user and the front of terminal 1000. In one embodiment, when proximity sensor 1016 detects a gradual decrease in the distance between the user and the front face of terminal 1000, processor 1001 controls display 1005 to switch from the bright screen state to the off screen state; when proximity sensor 1016 detects a gradual increase in the distance between the user and the front of terminal 1000, processor 1001 controls display 1005 to switch from the off-screen state to the on-screen state.
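As an illustration of the optical-sensor and proximity-sensor behaviour described above (the `screen` controller and the lux-to-brightness mapping are assumptions, not part of the disclosure):

```python
def adjust_display(ambient_lux: float, proximity_near: bool, screen):
    """Blank the screen when the user is close to the front panel; otherwise
    scale display brightness with the collected ambient light intensity."""
    if proximity_near:
        screen.turn_off()
        return
    screen.turn_on()
    # clamp a simple lux-to-brightness mapping into [0.2, 1.0]
    level = min(1.0, max(0.2, ambient_lux / 1000.0))
    screen.set_brightness(level)
```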
Those skilled in the art will appreciate that the structure shown in fig. 10 is not limiting and that terminal 1000 can include more or fewer components than shown, or certain components can be combined, or a different arrangement of components can be employed.
Fig. 11 is a block diagram illustrating a structure of a server 1100. The server 1100 may vary considerably in configuration and performance, and may include one or more processors (Central Processing Units, CPU) 1101 and one or more memories 1102, where at least one program code is stored in the memories 1102, and the at least one program code is loaded and executed by the processors 1101 to implement the methods provided in the respective method embodiments described above. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described here.
The server 1100 may be used to perform the steps performed by the server in the video playback method described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided. When instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform the steps performed by the terminal in the video playing method described above. The storage medium may be, for example, a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided. When program code in the computer program product is executed by a processor of the terminal, the terminal is enabled to perform the steps performed by the terminal in the video playing method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (26)

1. A video playing method, characterized in that the video playing method comprises:
determining actions executed by the object and action information corresponding to the actions based on the motion trail of any object in the target video and a motion trail of an object related to the object, wherein the action information comprises professional introduction information of the actions and introduction information of related personnel;
identifying a video segment to which an action contained in the target video belongs, wherein the video segment comprises a plurality of continuous target video frames;
displaying an action display inlet in a video playing interface of the target video;
responding to the triggering operation of the action display portal, displaying an action display interface different from the video playing interface, wherein the action display interface comprises a dynamic picture generated according to the video clip, the action information and a shooting portal;
responding to the triggering operation of the shooting entrance, and jumping to a shooting interface;
And shooting a video through the shooting interface, and adding a video tag which is the same as the video tag of the target video for the video.
2. The video playing method according to claim 1, wherein the identifying the video clip to which the action included in the target video belongs includes:
performing motion trail identification on any object in the target video to obtain a motion trail of the object and a plurality of continuous video frames corresponding to the motion trail;
and extracting video fragments to which the plurality of video frames belong from the target video.
3. The video playing method according to claim 2, wherein the step of performing motion trail identification on any object in the target video to obtain a motion trail of the object and a plurality of continuous video frames corresponding to the motion trail includes:
and carrying out motion trail identification on a target part of any object in the target video to obtain a motion trail of the target part of the object and a plurality of continuous video frames corresponding to the motion trail.
4. The video playback method as recited in claim 1, wherein the video playback method further comprises:
Performing motion trail identification on any object in the target video to obtain a motion trail of the object;
and determining the action executed by the object according to the motion trail.
5. The video playback method as recited in claim 1, wherein the displaying of the moving picture generated from the video clip in response to the triggering operation of the action presentation portal comprises:
responding to the triggering operation of the action display entrance, and displaying the dynamic picture and the video tag; or
responding to the triggering operation of the action presentation portal, and displaying the dynamic picture and at least one similar video, wherein the similar video has the same video tag as the target video.
6. The video playback method as recited in claim 1, wherein the actions include a plurality of actions, the moving pictures include a plurality of moving pictures, each moving picture being generated from the video clip to which the corresponding action belongs, and the displaying the moving picture generated from the video clip in response to a trigger operation of the action presentation portal comprises:
and respectively displaying a plurality of dynamic pictures in response to the triggering operation of the action display entrance.
7. The video playback method as recited in claim 1, wherein the displaying of the moving picture generated from the video clip in response to the triggering operation of the action presentation portal comprises:
and responding to the triggering operation of the action display entrance, and circularly playing the dynamic picture.
8. The video playback method as recited in claim 1, wherein the moving picture includes a plurality of moving pictures, the displaying the moving picture generated from the video clip in response to a trigger operation of the action presentation portal, comprising:
and responding to the triggering operation of the action display entrance, and sequentially playing a plurality of dynamic pictures according to the sequence of the video clips corresponding to each dynamic picture in the target video.
9. The video playback method as recited in claim 1, wherein the moving picture includes a plurality of moving pictures, the displaying the moving picture generated from the video clip in response to a trigger operation of the action presentation portal, comprising:
displaying thumbnails of a plurality of moving pictures in response to a trigger operation of the action presentation portal;
and in response to the triggering operation of any thumbnail in the plurality of dynamic pictures, circularly playing the dynamic picture corresponding to the triggered thumbnail.
10. The video playback method as recited in claim 1, wherein the video playback method further comprises:
responding to the closing operation of the video playing interface, and synchronously closing the action display interface; or
responding to the closing operation of the video playing interface, and stopping playing the dynamic pictures in the action display interface; or
responding to other videos being played in the video playing interface, and closing the action display interface; or
and responding to the playing of other videos in the video playing interface, and stopping playing the dynamic pictures in the action display interface.
11. The video playing method according to claim 1, wherein the identifying the video clip to which the action included in the target video belongs includes:
and identifying the video clip to which the action contained in the target video belongs in the process of playing the target video by the video playing interface.
12. A video playing method, characterized in that the video playing method comprises:
determining actions executed by the object and action information corresponding to the actions based on the motion trail of any object in the target video and a motion trail of an object related to the object, wherein the action information comprises professional introduction information of the actions and introduction information of related personnel;
Receiving a dynamic picture sent by a server, wherein the dynamic picture is generated according to a video segment after identifying the video segment to which an action in the target video belongs, and the video segment comprises a plurality of continuous video frames;
displaying an action display inlet in a video playing interface of the target video;
responding to the triggering operation of the action display portal, displaying an action display interface different from the video playing interface, wherein the action display interface comprises a dynamic picture generated according to the video clip, the action information and a shooting portal;
responding to the triggering operation of the shooting entrance, and jumping to a shooting interface;
and shooting a video through the shooting interface, and adding a video tag which is the same as the video tag of the target video for the video.
13. A video playback device, the video playback device comprising:
a video recognition unit configured to determine an action executed by an object and action information corresponding to the action based on a motion trail of any object in a target video and a motion trail of an object related to the object, wherein the action information comprises professional introduction information of the action and introduction information of related personnel;
The video identification unit is further configured to identify a video segment to which an action included in the target video belongs, wherein the video segment comprises a plurality of continuous video frames;
an entry display unit configured to display an action presentation entry in a video playback interface of the target video;
a display unit configured to display, in response to a trigger operation on the action presentation portal, an action presentation interface different from the video playing interface, wherein the action presentation interface comprises a dynamic picture generated according to the video clip, the action information, and a shooting portal;
the display unit is further configured to execute a jump to a shooting interface in response to a trigger operation to the shooting entrance;
the display unit is further configured to shoot a video through the shooting interface and add, to the video, a video tag identical to the video tag of the target video.
14. The video playback device of claim 13, wherein the video recognition unit comprises:
the track recognition subunit is configured to perform motion track recognition on any object in the target video to obtain a motion track of the object and a plurality of continuous video frames corresponding to the motion track;
And a segment extraction subunit configured to perform extraction of video segments to which the plurality of video frames belong from the target video.
15. The video playback device of claim 14, wherein the track recognition subunit is configured to perform:
and carrying out motion trail identification on a target part of any object in the target video to obtain a motion trail of the target part of the object and a plurality of continuous video frames corresponding to the motion trail.
16. The video playback device of claim 13, wherein the video playback device further comprises:
the action determining unit is configured to perform motion trail identification on any object in the target video to obtain a motion trail of the object;
the action determining unit is further configured to perform an action performed by the object according to the motion trajectory.
17. The video playback device of claim 13, wherein the display unit is configured to perform:
responding to the triggering operation of the action display entrance, and displaying the dynamic picture and the video tag; or
responding to the triggering operation of the action presentation portal, and displaying the dynamic picture and at least one similar video, wherein the similar video has the same video tag as the target video.
18. The video playback device according to claim 13, wherein the actions include a plurality of actions, the moving pictures include a plurality of moving pictures, each moving picture being generated from the video clip to which the corresponding action belongs, and the display unit is configured to display the plurality of moving pictures respectively in response to a trigger operation on the action presentation portal.
19. The video playback device according to claim 13, wherein the display unit is configured to perform cyclic playback of the moving picture in response to a trigger operation of the action presentation portal.
20. The video playback device of claim 13, wherein the moving pictures include a plurality of moving pictures, and wherein the display unit is configured to perform a trigger operation for the action presentation portal in response to which a plurality of moving pictures are sequentially played in the order of the video clip corresponding to each moving picture in the target video.
21. The video playback device of claim 13, wherein the moving pictures comprise a plurality of moving pictures, and the display unit is configured to perform:
displaying thumbnails of a plurality of moving pictures in response to a trigger operation of the action presentation portal;
And in response to the triggering operation of any thumbnail in the plurality of dynamic pictures, circularly playing the dynamic picture corresponding to the triggered thumbnail.
22. The video playback device of claim 13, wherein the video playback device further comprises:
a closing unit configured to perform synchronous closing of the action presentation interface in response to a closing operation of the video play interface; or
the closing unit is further configured to perform stopping playing the dynamic picture in the action display interface in response to a closing operation of the video playing interface; or
the closing unit is further configured to execute closing the action display interface in response to playing other videos in the video playing interface; or
and the closing unit is further configured to execute stopping playing the dynamic picture in the action display interface in response to the playing of other videos in the video playing interface.
23. The video playback device according to claim 13, wherein the video identification unit is configured to identify a video clip to which an action included in the target video belongs in the process of playing the target video by the video playback interface.
24. A video playback device, the video playback device comprising:
a video recognition unit configured to determine an action executed by an object and action information corresponding to the action based on a motion trail of any object in a target video and a motion trail of an object related to the object, wherein the action information comprises professional introduction information of the action and introduction information of related personnel;
the video identification unit is further configured to receive a dynamic picture sent by the server, wherein the dynamic picture is generated according to a video segment after identifying the video segment to which the action in the target video belongs, and the video segment comprises a plurality of continuous video frames;
an entry display unit configured to display an action presentation entry in a video playback interface of the target video;
a display unit configured to display, in response to a trigger operation on the action presentation portal, an action presentation interface different from the video playing interface, wherein the action presentation interface comprises a dynamic picture generated according to the video clip, the action information, and a shooting portal;
The display unit is further configured to execute a jump to a shooting interface in response to a trigger operation to the shooting entrance;
the display unit is further configured to perform shooting of a video through the shooting interface, and add a video tag identical to a video tag of the target video to the video.
25. A terminal, the terminal comprising:
one or more processors;
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the video playback method of any one of claims 1 to 11 or to perform the video playback method of claim 12.
26. A non-transitory computer readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of a terminal, enable the terminal to perform the video playing method of any one of claims 1 to 11 or to perform the video playing method of claim 12.
CN202011289879.8A 2020-11-17 2020-11-17 Video playing method, device, terminal and storage medium Active CN112511889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011289879.8A CN112511889B (en) 2020-11-17 2020-11-17 Video playing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011289879.8A CN112511889B (en) 2020-11-17 2020-11-17 Video playing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN112511889A CN112511889A (en) 2021-03-16
CN112511889B true CN112511889B (en) 2023-07-07

Family

ID=74956638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011289879.8A Active CN112511889B (en) 2020-11-17 2020-11-17 Video playing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112511889B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113645496B (en) * 2021-08-12 2024-04-09 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN114363688B (en) * 2022-01-10 2023-10-31 抖音视界有限公司 Video processing method and device and non-volatile computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110121093A (en) * 2018-02-06 2019-08-13 优酷网络技术(北京)有限公司 The searching method and device of target object in video
CN109862388A (en) * 2019-04-02 2019-06-07 网宿科技股份有限公司 Generation method, device, server and the storage medium of the live video collection of choice specimens
CN110324717B (en) * 2019-07-17 2021-11-02 咪咕文化科技有限公司 Video playing method and device and computer readable storage medium
CN110582017B (en) * 2019-09-10 2022-04-19 腾讯科技(深圳)有限公司 Video playing method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN112511889A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN110572722B (en) Video clipping method, device, equipment and readable storage medium
CN108769561B (en) Video recording method and device
CN110267067B (en) Live broadcast room recommendation method, device, equipment and storage medium
CN111147878B (en) Stream pushing method and device in live broadcast and computer storage medium
CN109874312B (en) Method and device for playing audio data
CN110572711B (en) Video cover generation method and device, computer equipment and storage medium
CN109729372B (en) Live broadcast room switching method, device, terminal, server and storage medium
CN108419113B (en) Subtitle display method and device
WO2019114514A1 (en) Method and apparatus for displaying pitch information in live broadcast room, and storage medium
CN110163066B (en) Multimedia data recommendation method, device and storage medium
CN113411680B (en) Multimedia resource playing method, device, terminal and storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN111711838B (en) Video switching method, device, terminal, server and storage medium
CN113490010B (en) Interaction method, device and equipment based on live video and storage medium
CN110290392B (en) Live broadcast information display method, device, equipment and storage medium
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN110750734A (en) Weather display method and device, computer equipment and computer-readable storage medium
CN112511889B (en) Video playing method, device, terminal and storage medium
CN114116053A (en) Resource display method and device, computer equipment and medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN111818367A (en) Audio file playing method, device, terminal, server and storage medium
CN114302160B (en) Information display method, device, computer equipment and medium
CN113556481B (en) Video special effect generation method and device, electronic equipment and storage medium
CN113204672B (en) Resource display method, device, computer equipment and medium
CN112866584A (en) Video synthesis method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant