CN117061814A - Video playing method, device, equipment, storage medium and program product

Info

Publication number
CN117061814A
Authority
CN
China
Prior art keywords
video
playing
clip
video content
content
Prior art date
Legal status
Pending
Application number
CN202210486102.3A
Other languages
Chinese (zh)
Inventor
袁佳平
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210486102.3A
Publication of CN117061814A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4782 Web browsing, e.g. WebTV

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiment of the application discloses a video playing method, apparatus, device, storage medium and program product, belonging to the technical field of interface interaction. The method comprises: acquiring first video content through a video element of the hypertext markup language (HTML), the first video content comprising at least two video clips; playing the at least two video clips in a video playing interface; receiving, during playing of the first video content, a marking operation on at least one target video clip; and in response to receiving a video composition operation, playing, in the video playing interface, second video content composed of the at least one target video clip. The scheme expands the ways in which video can be produced, reduces the difficulty of video production, and improves video production efficiency.

Description

Video playing method, device, equipment, storage medium and program product
Technical Field
The present application relates to the field of interface interaction technologies, and in particular, to a video playing method, apparatus, device, storage medium, and program product.
Background
With the continuous development of network technology, network video transmission capabilities have also improved. Accordingly, how to make it easier for users to create their own videos has become a problem to be solved in network video applications.
In the related art, a user typically makes or edits a video through video production software. For example, a user may acquire one or more video clips by downloading them, capturing them with an image acquisition device, or producing them with animation software, and then clip and splice the video clips with video editing software to obtain a self-made video.
However, in the above scheme, the steps a user must take to make a video are complex, and professional video editing software is required, so the difficulty of video production is high and production efficiency suffers.
Disclosure of Invention
The embodiment of the application provides a video playing method, apparatus, device, storage medium and program product, which can expand the ways in which video is produced, reduce the difficulty of video production and improve video production efficiency. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a video playing method, where the method includes:
acquiring first video content through a video element of the hypertext markup language (HTML), the first video content comprising at least two video clips;
playing the at least two video clips in the first video content in a video playing interface;
receiving, in the process of playing the first video content, a marking operation on at least one target video clip of the at least two video clips; and
in response to receiving a video composition operation, playing, in the video playing interface, second video content composed of the at least one target video clip.
In another aspect, an embodiment of the present application provides a video playing device, including:
a video content acquisition module, configured to acquire first video content through a video element of the hypertext markup language (HTML), the first video content comprising at least two video clips;
a first playing module, configured to play the at least two video clips in the first video content in a video playing interface;
a marking module, configured to receive, in the process of playing the first video content, a marking operation on at least one target video clip of the at least two video clips; and
a second playing module, configured to play, in the video playing interface, second video content composed of the at least one target video clip in response to receiving a video composition operation.
In one possible implementation manner, the video playing interface comprises a clip jump control; the first playing module is configured to:
play a first video clip in the video playing interface in a loop, the first video clip being any one of the at least two video clips; and
in response to receiving a triggering operation on the clip jump control, play a second video clip in the video playing interface in a loop, the second video clip being another video clip, other than the first video clip, of the at least two video clips.
In one possible implementation, the first video content is a single video composed of the at least two video clips connected end to end, and the at least two video clips respectively correspond to a start time point and an end time point in the first video content;
the first playing module is configured to:
periodically acquire a first playing time point of the first video content in response to playing the first video clip in the video playing interface; and
in response to the first playing time point being not earlier than the end time point of the first video clip in the first video content, jump to the start time point of the first video clip in the first video content to continue playing.
In one possible implementation, the second playing module is configured to:
generate a first video composition file in response to receiving the video composition operation, the first video composition file being used to indicate the at least one target video clip; and
in response to receiving a playing operation on the second video content, play the second video content in the video playing interface based on the first video content and the first video composition file.
In one possible implementation, the at least one target video clip comprises two or more target video clips, and the first video composition file is further used to indicate the order in which the respective target video clips were marked;
the second playing module is configured to, in response to receiving the playing operation on the second video content, sequentially jump to and play each target video clip in the first video content in the marked order, based on the start time point and the end time point of each target video clip in the first video content.
In one possible implementation, the second playing module is configured to:
read a start time point and an end time point of a third video clip in the first video content based on the first video composition file, the third video clip being any one of the target video clips;
play the first video content starting from the start time point of the third video clip in the first video content;
periodically acquire a second playing time point of the first video content; and
in response to the second playing time point being not earlier than the end time point of the third video clip in the first video content, read a start time point and an end time point of the next target video clip in the first video content based on the first video composition file, in the marked order.
In one possible implementation, the second playing module is configured to,
in response to the second playing time point being not earlier than the end time point of the third video clip in the first video content, and the third video clip being a target video clip other than the last marked one, read the start time point and end time point of the target video clip next to the third video clip in the first video content, based on the first video composition file and in the marked order.
In one possible implementation, the second playing module is configured to,
in response to the second playing time point being not earlier than the end time point of the third video clip in the first video content, and the third video clip being the last marked video clip among the target video clips, read the start time point and end time point of the first marked target video clip in the first video content, based on the first video composition file and in the marked order.
In one possible implementation, the apparatus further includes:
a sharing control display module, configured to display a sharing control in the video playing interface in response to receiving the video composition operation; and
a sharing module, configured to, in response to receiving a triggering operation on the sharing control, share the first video composition file with a first target terminal, so that the first target terminal plays the second video content based on the first video content and the first video composition file.
In one possible implementation, the apparatus further includes:
a third playing module, configured to, in response to receiving a second video composition file shared by a second target terminal, play third video content in the video playing interface based on the first video content and the second video composition file;
wherein the second video composition file is used to indicate the video clips that were marked in the second target terminal.
In one possible implementation, the third playing module is configured to:
display, in response to receiving the second video composition file shared by the second target terminal, a play control corresponding to the second video composition file; and
in response to receiving a triggering operation on the play control, play the third video content in the video playing interface based on the first video content and the second video composition file.
In one possible implementation manner, the video playing interface comprises a clip mark control;
the marking module is configured to receive a triggering operation on the clip mark control in the process of playing a fourth video clip in the first video content, the fourth video clip being any one of the at least one target video clip.
In one possible implementation, the apparatus further includes:
a state modification module, configured to, in response to completion of marking the fourth video clip, modify the display state of the clip mark control to a marked state during playing of the fourth video clip.
In one possible implementation, the second playing module is configured to:
when the at least two video clips in the first video content are played in the video playing interface, display a video composition control in the video playing interface in response to the currently played video clip being the last of the at least two video clips; and
play the second video content in the video playing interface in response to receiving the video composition operation performed on the video composition control.
In another aspect, an embodiment of the present application provides a computer device, the computer device including a processor and a memory, the memory storing at least one computer instruction that is loaded and executed by the processor to implement the video playing method described in the above aspect.
In another aspect, embodiments of the present application provide a computer readable storage medium having stored therein at least one computer instruction that is loaded and executed by a processor to implement the video playing method as described in the above aspect.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the video playing method provided in the various alternative implementations of the above aspects.
The technical scheme provided by the embodiments of the application has at least the following beneficial effects:
when the terminal plays video content comprising at least two video clips through the video element in HTML, it can receive the user's marking operation on target video clips among the at least two video clips, and when a video composition operation is subsequently received, it can combine the marked target video clips into new video content and play it. The scheme allows a user watching video content on the Web side to directly mark video clips in the content so as to compose new video content, thereby expanding the ways video can be produced, reducing the difficulty of video production and improving video production efficiency.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present application;
fig. 2 is a flowchart of a video playing method according to an exemplary embodiment of the present application;
FIG. 3 is a block diagram of a video composition flow provided by an exemplary embodiment of the present application;
fig. 4 is a flowchart illustrating a video playing method according to an exemplary embodiment of the present application;
FIG. 5 is a schematic diagram of a playback interface according to the embodiment shown in FIG. 4;
FIG. 6 is a schematic diagram of video frame skipping in accordance with the embodiment of FIG. 4;
FIG. 7 is a schematic diagram of another playback interface related to the embodiment shown in FIG. 4;
FIG. 8 is a schematic diagram of video composition provided by an exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of scenic spot grouping related to the embodiment shown in FIG. 8;
FIG. 10 is a schematic diagram of video numbering of scenic spots related to the embodiment shown in FIG. 8;
FIG. 11 is a schematic diagram showing a playing sequence of the composite video according to the embodiment shown in FIG. 8;
fig. 12 is a block diagram of a video playback apparatus according to an exemplary embodiment of the present application;
Fig. 13 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the appended claims.
It should be understood that references herein to "a number" mean one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, A and B both exist, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: terminal 110, terminal 120, and server 130.
The terminal 110 may be a terminal of a video content provider, for example, the terminal 110 may be a personal computer (Personal Computer, PC), a portable computer, a personal workstation, a smart phone, a tablet computer, or the like.
The terminal 110 may have installed and running in it an application for producing video, such as a video editing application. In the embodiment of the present application, the application for producing video may compose a plurality of video clips into one complete video, or may divide/label a plurality of video clips within a complete video.
Optionally, the terminal 110 may upload the produced video content to the server 130.
The terminal 120 may be a terminal of a video consumer, for example, the terminal 120 may be a smart phone, a tablet computer, an electronic book reader, a personal computer, a portable computer, a personal workstation, a vehicle display terminal, a smart television set-top box, and the like.
The terminal 120 may have installed and running therein an application for video playback that may receive video content transmitted from the server 130 and play the video content through a video playback interface. For example, the application program for video playing may be a video player application program, an instant messaging application program, a social platform application program, a game application program, or the like.
The server 130 is provided with a video management service for receiving the video content created and uploaded by the terminal 110, storing the video content, and transmitting the video content to the terminal 120 for playing.
Only one video content provider terminal 110 is shown in fig. 1, as well as one video consumer terminal 120, but in different embodiments there may be multiple video content provider terminals and multiple video consumer terminals that may access the server 130.
Terminal 110 and terminal 120 are connected to server 130 via a wireless network or a wired network.
Server 130 includes at least one of a server, a server cluster of servers, a cloud computing platform, and a virtualization center.
In one illustrative example, server 130 includes a memory 131, a processor 132, a user account database 133, a video management module 134 and a user-oriented Input/Output Interface (I/O Interface) 135. The processor 132 is configured to load instructions stored in the server 130 and to process data in the user account database 133 and the video management module 134; the user account database 133 is used to store data of the user accounts used by the terminals 110 and 120 and other terminals, such as the avatar, nickname and level of a user account and the area where it is located; the video management module 134 is configured to provide a video storage and delivery service, receiving the video content uploaded by the video content provider and delivering it to the terminals of video consumers for playing; and the user-oriented I/O interface 135 is used to establish communication and exchange data with the terminal 110 and/or the terminal 120 via a wireless or wired network.
In one possible solution, when a user needs to make a video by means of video composition, it is generally necessary to prepare a plurality of videos to be composed in advance, and then compose the plurality of videos into one complete long video by an application program having a video composition function.
However, the application program with the video composition function is usually deployed on a server, so the terminal needs to upload the multiple videos to be composed to the server, have the server perform the composition, and receive the composed long video in return. On one hand, the user's operation process in this scheme is cumbersome, which affects video composition efficiency; on the other hand, the video content has to be transmitted in both directions between the terminal and the server, which wastes network bandwidth.
In the embodiment of the present application, when the user plays the video content sent by the server 130 using the terminal 120, a new personalized video content may be synthesized at the terminal side based on the played video content, so as to achieve the effect of expanding the video production mode.
Fig. 2 is a flowchart illustrating a video playing method according to an exemplary embodiment of the present application. The video playing method may be performed by a computer device, which may be a terminal of a user as a video consumer; for example, the terminal may be the terminal 120 in the system shown in fig. 1 described above. As shown in fig. 2, the video playing method includes:
Step 201, acquiring first video content through a video element of HTML, the first video content comprising at least two video clips.
In the embodiment of the present application, the terminal may play video through the Web, and specifically, the terminal may acquire the first video content through a video element (i.e., a <video> element, also referred to as a <video> tag) of the hypertext markup language (Hyper Text Markup Language, HTML).
HTML5 specifies, among other things, a standard method of including video via the video element, which can be used to play video on the Web side. The video element may be regarded as a container for video playback, to which various playback attributes can be assigned, such as the address of the video file to be played, whether to play in a loop, whether to preload, and the height and width of the video playing interface. When the Web side plays a video, attributes such as the address of the video file are assigned to the video element, and the video element pulls the video data according to that address.
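For illustration only, a minimal Web-side sketch of configuring such a video element from script; the file URL and all identifiers here are hypothetical placeholders, not part of the claimed scheme:

```js
// Minimal sketch: configure an HTML <video> element with the playback
// attributes described above. The URL is a hypothetical placeholder.
const player = document.createElement('video');
player.src = 'https://example.com/first-video-content.mp4'; // address of the video file
player.preload = 'auto'; // whether to preload
player.loop = false;     // per-clip looping is implemented separately (see below)
player.width = 640;      // width of the video playing interface
player.height = 360;     // height of the video playing interface
document.body.appendChild(player);
// play() returns a Promise; browsers may require a user gesture before playback starts.
player.play();
```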
Step 202, playing at least two video clips in the first video content in the video playing interface.
After the video element obtains the video data of the first video content, the video data can be parsed and the first video content played in the video playing interface.
In the embodiment of the application, the terminal can display the video playing interface in the screen through the video element, and play the first video content provided by the server through the video playing interface.
Step 203, receiving, in the process of playing the first video content, a marking operation on at least one target video clip of the at least two video clips.
In the process of playing the first video content, if the terminal receives the marking operation of the user, the currently played video clip can be marked as the target video clip.
In an embodiment of the application, the first video content is composed of at least two video clips, and in the process of playing the first video content the terminal can recognize a marking operation performed by the user on an individual video clip and mark the corresponding video clip independently.
For example, in one possible implementation manner, the first video content may be a complete video formed by splicing at least two video clips end to end, the video clips sharing the same playing time axis. For example, if the first video content contains two video clips, the first lasting 5 s (seconds) and the second lasting 4 s, then on the playing time axis of the first video content the video frames played from 0 s to 5 s belong to the first video clip, and the video frames played from 5 s to 9 s belong to the second video clip. In this case, the first video content may correspond to a profile recording the start time point and end time point of each video clip in the first video content, and the terminal can identify each video clip in the first video content according to the profile. When the terminal receives the user's marking operation while playing the first video content, it can determine the corresponding video clip from the current playing time point on the playing time axis (the current playing time point lies between the start time point and the end time point of that video clip) and mark that video clip as a target video clip, for example by recording the start time point and end time point of the currently played video clip, or, if each video clip has a clip identifier, by recording the clip identifier of the currently played video clip.
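As a minimal illustrative sketch under the assumptions of this implementation (clip identifiers, durations and function names are hypothetical), such a profile and the mapping from the current playing time point to a clip might look as follows:

```js
// Hypothetical profile: start and end time point (in seconds) of each video
// clip on the shared playing time axis. Durations match the example above:
// clip 1 lasts 5 s, clip 2 lasts 4 s.
const profile = [
  { id: 'clip-1', start: 0, end: 5 },
  { id: 'clip-2', start: 5, end: 9 },
];

// Resolve which clip is currently playing from the current playing time
// point: the clip whose start <= t < end.
function clipAt(t) {
  return profile.find((c) => t >= c.start && t < c.end) ?? null;
}

// Marked target clips are recorded by clip identifier, in marked order.
const marked = [];
```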
Alternatively, in another possible implementation manner, the first video content includes at least two independent video clips whose playing time axes are independent of one another, that is, the playing time point of each video clip is counted from 0, and each video clip is stored in an independent video file with its own playing time axis. Optionally, each video clip corresponds to its own clip identifier (such as a number), and the terminal can identify each video clip in the first video content according to the clip identifier; for example, when the terminal receives the user's marking operation in the process of playing the first video content, it can determine the corresponding video clip according to the clip identifier of the currently played video clip and mark that clip as a target video clip.
Optionally, the terminal may further identify the playing sequence of each video clip according to the clip identifier of the video clip.
Step 204, in response to receiving a video composition operation, playing, in the video playing interface, second video content composed of the at least one target video clip.
In the embodiment of the application, the terminal can also receive the user's video composition operation while playing the first video content. At this time, the terminal can play the second video content based on the video clips marked by the user; from the user's perspective, the terminal combines the video clips the user marked in the first video content into new video content and plays it.
Through the scheme, while video content with a plurality of video clips is played through the video playing interface, the user can select the video clips needed for composition by marking them, thereby combining them into new video content.
In summary, according to the scheme shown in the embodiment of the present application, when playing video content comprising at least two video clips through the video element in HTML, the terminal can receive the user's marking operation on target video clips among the at least two video clips, and when a video composition operation is subsequently received, it can combine the marked target video clips into new video content and play it. The scheme allows a user watching video content on the Web side to directly mark video clips in the content so as to compose new video content, thereby expanding the ways video can be produced, reducing the difficulty of video production and improving video production efficiency.
The embodiment of the present application shown in fig. 2 can be applied to making video directly in a video playing interface, allowing a user to compose personalized video content conveniently and quickly.
Based on the solution provided by the embodiment shown in fig. 2, please refer to fig. 3, which shows a frame diagram of a video composition process provided by an exemplary embodiment of the present application. The video playing system shown in fig. 3 includes a server 31 and a terminal 32, where the terminal 32 corresponds to user A. The server 31 stores first video content 31a, which includes three video clips, namely a video clip 31a1, a video clip 31a2 and a video clip 31a3.
S1, when the terminal 32 runs a Web application having a video playing function, it requests video content from the server 31 through the video element, and the server 31 transmits the first video content 31a to the terminal 32.
S2, the terminal 32 plays the first video content 31a in the video playing interface through the video element.
S3, the terminal 32 receives, through the video playing interface, user A's marking operation on the currently played video clip, and marks the video clip on which the marking operation is received.
S4, the terminal 32 receives user A's video composition operation through the video playing interface, composes the marked video clips into second video content 33 and plays it.
In the video playing process shown in fig. 3, the video playing interface of the Web side can receive an operation marking a single video clip in the video content as well as an operation composing the marked video clips; after receiving the marking operation and the video composition operation, the terminal can compose the video clips locally through the Web.
Fig. 4 shows a flowchart of a video playing method according to an exemplary embodiment of the present application. The video playing method may be performed by a computer device, which may be a terminal of a user as a video consumer; for example, the terminal may be the terminal 120 in the system shown in fig. 1 described above. As shown in fig. 4, the video playing method includes:
Step 401, acquiring first video content through a video element of the hypertext markup language (HTML), the first video content comprising at least two video clips.
Step 402, playing the at least two video clips in the first video content in a video playing interface.
In the embodiment of the application, the video playing interface is a Web-based playing interface.
In an embodiment of the present application, the video content provider may make the first video content including at least two video clips through a video production application.
For example, in one possible implementation, a video content provider may acquire or make at least two video clips in advance and then combine them into the first video content by means of a video editing application.
In one possible implementation, when the first video content is a single video composed of at least two video clips connected end to end, the at least two video clips respectively correspond to a start time point and an end time point in the first video content; for example, the first video content also corresponds to video content attribute information (which may be stored in a separate file associated with the first video content, or in the attribute information of the first video content itself), the video content attribute information including the start time point and end time point of each of the at least two video clips in the first video content.
In another possible implementation manner, when the first video content is composed of at least two mutually independent video clips, the video content attribute information corresponding to the first video content may include information indicating the play order of the at least two video clips. For example, it may include a number sequence formed by arranging the video clip numbers of the at least two video clips in play order from first to last. Alternatively, the video content attribute information may be a profile associated with the at least two video clips.
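As an illustrative sketch under these assumptions (the structure and field names are hypothetical, not prescribed by the scheme), such attribute information might take a form like:

```js
// Hypothetical video content attribute information for first video content
// composed of mutually independent clips: each clip is a separate file with
// its own playing time axis, and a number sequence gives the playing order.
const videoContentAttributes = {
  contentId: 'first-video-content',
  clips: [
    { number: 1, url: 'clip-1.mp4' },
    { number: 2, url: 'clip-2.mp4' },
    { number: 3, url: 'clip-3.mp4' },
  ],
  playOrder: [1, 2, 3], // clip numbers arranged from first played to last
};
```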
In the embodiment of the application, when the terminal plays the first video content in the video playing interface, it can automatically play the at least two video clips of the first video content one after another in the playing order. For example, when the first video content is composed of at least two mutually independent video clips, after one video clip finishes playing in the video playing interface, the terminal can automatically start playing the next video clip according to the number sequence in the video content attribute information. For another example, when the first video content is a single video, the terminal plays each video frame in the single video frame by frame through the video playing interface.
In another possible implementation manner, the video playing interface comprises a clip jump control, and playing the at least two video clips in the first video content in the video playing interface includes:
playing a first video clip in the video playing interface in a loop, the first video clip being any one of the at least two video clips; and
in response to receiving a triggering operation on the clip jump control, playing a second video clip in the video playing interface in a loop, the second video clip being another video clip, other than the first video clip, of the at least two video clips.
Because a video clip may be too short for the user to decide, while it plays, whether to mark it, in another exemplary scheme of the embodiment of the present application the terminal may automatically play each video clip in a loop, to make later marking easier. Meanwhile, a clip jump control is displayed in the video playing interface, through which the user can make the terminal jump to loop another video clip.
The video clip jumped to may be the previous or the next video clip relative to the currently playing one. For example, the clip jump controls displayed in the video playing interface may include a control triggering playback of the next video clip and/or a control triggering playback of the previous video clip, and the terminal jumps to the previous/next video clip for loop playing according to the control the user clicks.
Alternatively, the video clip jumped to may be one the user selects from a list of video clips. For example, a control list formed by sub-controls corresponding to the video clips can be displayed in the video playing interface, and the terminal jumps to the corresponding video clip for loop playing according to the sub-control the user clicks in the list. The control list may be shown/hidden via another list show/hide control, or may be fixedly displayed in the video playing interface.
For example, please refer to fig. 5, which illustrates a playing interface diagram according to an embodiment of the present application. As shown in fig. 5, a video clip is played in the video playing interface 51, and two clip jump controls, namely clip jump control 52a and clip jump control 52b, are displayed below the video playing interface 51, corresponding respectively to jumping to the previous video clip and to the next video clip. Accordingly, when receiving the user's triggering operation on the clip jump control 52a, the terminal jumps to the previous video clip for loop playing; when receiving the user's triggering operation on the clip jump control 52b, the terminal jumps to the next video clip for loop playing.
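A minimal sketch of wiring such controls (the element ids are hypothetical; profile and player come from the earlier sketches):

```js
let current = 0; // index of the clip currently looping (see profile above)

function jumpToClip(index) {
  // wrap around so "previous" from the first clip goes to the last one
  current = (index + profile.length) % profile.length;
  player.currentTime = profile[current].start; // frame skip to the clip's start
  player.play();
}

// Hypothetical button ids for the previous/next clip jump controls.
document.getElementById('prev-clip').onclick = () => jumpToClip(current - 1);
document.getElementById('next-clip').onclick = () => jumpToClip(current + 1);
```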
In one possible implementation manner, when the first video content is a single video formed by connecting at least two video clips end to end, the step of playing the first video clip in the video playing interface in a loop may include:
periodically acquiring a first playing time point of the first video content in response to playing the first video clip in the video playing interface; and
in response to the first playing time point being not earlier than the end time point of the first video clip in the first video content, jumping to the start time point of the first video clip in the first video content to continue playing.
In the embodiment of the application, when the first video content is a single video, the terminal, when playing the first video content through the video playing interface, can realize loop playing of a single video clip by frame skipping, according to the start time point and end time point of each video clip in the first video content.
In the embodiment of the present application, frame skipping, also referred to as video frame skipping, refers to changing the position of the video playing head (in the embodiment of the present application, the video playing head corresponds to the currently playing video frame), that is, changing the video playing time so that playback continues from a different point. For example, please refer to fig. 6, which illustrates a schematic diagram of video frame skipping according to an embodiment of the present application. As shown in part (a) of fig. 6, when the video has played to the 1st second, the position of the playing head is modified to the 3rd second; playback then continues from the 3rd second, so the video frames between the 1st and 3rd seconds are skipped.
When the first video content is a single video, the terminal can realize loop playing of a single video clip by frame skipping, that is, when the first video content is played to the end time point of a video clip, the terminal jumps back to the start time point of that video clip and continues playing. For example, as shown in part (b) of fig. 6, each time the video plays to the 3rd second, the position of the playing head is modified to the 1st second, so the video keeps looping between the 1st and 3rd seconds. Taking playing video content through the Web as an example, the position of the video playing head can be monitored in real time (for example, 24-60 times per second) at the Web front end, and when the playing head is found to reach the end position of the current video clip, its position is changed back to the start position of the current video clip.
For example, when the first video content is a single video, the terminal starts playing each video frame of the first video content frame by frame from the start time point of video clip A through the video playing interface, reads the playing time point at a preset frequency (for example, 20 times per second), and compares the read playing time point with the end time point of video clip A after each read; if the end time point is reached or exceeded, the terminal jumps to the start time point of video clip A and continues playing frame by frame, thereby realizing loop playing of a single video clip.
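A minimal sketch of this monitoring loop; requestAnimationFrame typically fires 24-60 times per second, matching the monitoring frequency mentioned above (current, profile and player come from the earlier sketches):

```js
function loopCurrentClip() {
  const { start, end } = profile[current];
  // When the playing head is not earlier than the clip's end time point,
  // jump back to its start time point.
  if (player.currentTime >= end) {
    player.currentTime = start;
  }
  requestAnimationFrame(loopCurrentClip); // re-check on the next frame
}
requestAnimationFrame(loopCurrentClip);
```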
Step 403, receiving, in the process of playing the first video content, a marking operation on at least one target video clip of the at least two video clips.
In one possible implementation, the video playing interface includes a clip mark control;
receiving, in the process of playing the first video content, a marking operation on at least one target video clip of the at least two video clips includes:
receiving a triggering operation on the clip mark control in the process of playing a fourth video clip in the first video content. Accordingly, the terminal may mark the fourth video clip as a target video clip; that is, the fourth video clip is any one of the at least one target video clip.
In the embodiment of the application, the user can mark the currently played video clip through the clip mark control displayed in the video playing interface. For example, as shown in fig. 5, when a video clip is played in the video playing interface 51, a clip mark control 53 is displayed below the video playing interface 51; by clicking the clip mark control 53, the user makes the terminal mark the currently played video clip, and the marked video clip is a target video clip.
Optionally, in the process of playing the fourth video clip, in response to the fourth video clip being in a marked state and a triggering operation on the clip mark control being received, the marking of the fourth video clip is cancelled.
In one possible implementation, in response to completion of marking the fourth video clip, the display state of the clip mark control is modified to a marked state during playing of the fourth video clip.
In the embodiment of the application, after the user marks the currently played video clip through the clip mark control, the terminal can modify the display state of the clip mark control to remind the user that the video clip has been marked. For example, as shown in fig. 5, after the user clicks the clip mark control 53, the terminal completes marking the currently played video clip and gives the clip mark control 53 a background of a special color, or fills the clip mark control 53 with a specified material, so as to modify its display state to a marked state. Optionally, when a clicking operation on the clip mark control 53 is received again, the terminal restores the display state of the clip mark control to its state before the currently played video clip was marked.
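A minimal sketch of such a toggle (the control id and CSS class are hypothetical; marked, profile and current come from the earlier sketches):

```js
const markBtn = document.getElementById('mark-clip'); // hypothetical control id
markBtn.onclick = () => {
  const clip = profile[current];
  const i = marked.indexOf(clip.id);
  if (i === -1) {
    marked.push(clip.id);            // mark: recorded in marked order
    markBtn.classList.add('marked'); // display state changes to marked
  } else {
    marked.splice(i, 1);             // triggered again: cancel the mark
    markBtn.classList.remove('marked');
  }
};
```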
Optionally, the terminal may prompt the marking status of the currently played video clip in other ways. For example, in response to completing the marking of the first video clip, the terminal displays a marked pattern in the video playing interface, the marked pattern indicating that the first video clip is in a marked state. The marked pattern may be, for example, a star pattern in the upper right corner of the video playing interface or another graphic (such as a check mark); the embodiment of the present application does not limit the form of the marked pattern.
In addition, the user may mark the first video clip in other manners. For example, in response to receiving a specific operation on the video playing interface, the terminal marks the first video clip currently played in the video playing interface; the user may, for instance, mark the first video clip by long-pressing a blank area of the video playing interface or by swiping up in the video playing interface.
Step 404, in response to receiving the video composition operation, generating a first video composition file, the first video composition file being used to indicate the at least one target video clip.
In one possible implementation, when the terminal plays the at least two video clips of the first video content in the video playing interface, it displays a video composition control in the video playing interface in response to the currently played video clip being the last of the at least two video clips. In response to receiving a video composition operation performed on the video composition control, the second video content may be played in the video playing interface; that is, upon receiving the video composition operation, the terminal composes the target video clips among the at least two video clips into the second video content, i.e., generates the first video composition file described above.
In the embodiment of the application, when the terminal plays the first video content through the video playing interface and the last video clip of the first video content is playing, a control for triggering composition of the second video content can be displayed in the video playing interface; when the user has marked at least one target video clip through marking operations, clicking the video composition control triggers the terminal to generate the first video composition file.
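A minimal sketch of generating such a composition file (field names and the storage choice are hypothetical; marked and profile come from the earlier sketches):

```js
// Build the first video composition file: a record of the marked target
// clips, in marked order, with their start/end time points in the first
// video content. No new video file is generated.
function buildCompositionFile() {
  return {
    source: 'first-video-content', // hypothetical identifier of the first video content
    segments: marked.map((id) => {
      const { start, end } = profile.find((c) => c.id === id);
      return { id, start, end };
    }),
  };
}

// Hypothetical composition control: persist the file locally; it could also
// be shared with another terminal as plain JSON.
document.getElementById('compose').onclick = () => {
  localStorage.setItem('composition', JSON.stringify(buildCompositionFile()));
};
```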
Optionally, when the terminal plays the first video content in the video playing interface, in response to the currently played video clip being the last video clip of the at least two video clips, and at least one marked video clip (i.e., the target video clip) exists in the first video content, the terminal displays the video composition control in the video playing interface.
Optionally, if the currently played video clip is the last video clip in the at least two video clips and the target video clip does not exist in the first video content, the terminal may not display the video composition control, or display the video composition control in a non-triggerable state in the video playing interface.
For example, please refer to fig. 7, which illustrates another playing interface diagram according to an embodiment of the present application. As shown in fig. 7, the last video clip of the video content is played in the video playing interface 71, while three controls are displayed below it: a clip jump control 72, a clip mark control 73 and a video composition control 74. After the user clicks the clip jump control 72, the terminal jumps to another video clip for playing; the user clicks the clip mark control 73 to mark the last video clip; and when the user clicks the video composition control 74, the terminal composes the second video content.
In addition to the video composition operation triggered by the video composition control, in the embodiment of the present application the terminal may also support other video composition operations, such as long-pressing the video playing interface or swiping up in the video playing interface. The embodiment of the application does not limit the triggering manner of the video composition operation.
Step 405, in response to receiving a playing operation on the second video content, playing the second video content in the video playing interface based on the first video content and the first video composition file.
For example, after composing the second video content (that is, generating the first video composition file), the terminal may display a preview control, and upon receiving the user's triggering operation on the preview control, may play the second video content in the video playing interface based on the first video content and the first video composition file.
In the embodiment of the application, composing the second video content does not require generating a complete video corresponding to it; only a file indicating which video clips of the first video content are contained in the second video content needs to be generated. The composed second video content is realized by the combination of the first video composition file and the first video content: the generated first video composition file indicates which video clips of the first video content the second video content contains, and the first video content provides the video data needed when the second video content is played. On one hand, this avoids generating new video files, saving storage and computing resources; on the other hand, it can simulate a video splicing effect in video playing scenes that do not support video splicing (such as playing scenes based on the Web front end).
In one possible implementation, when the first video content is a single video and the at least one target video clip comprises two or more target video clips, the first video composition file is further used to indicate the order in which the target video clips were marked;
in this case, playing the second video content in the video playing interface based on the first video content and the first video composition file, in response to receiving the playing operation on the second video content, includes:
in response to receiving the playing operation on the second video content, sequentially jumping to and playing each target video clip in the first video content in the marked order, based on the start time point and the end time point of each target video clip in the first video content.
In the embodiment of the application, when the user marks two or more target video clips during playing of the first video content, those target video clips can be played in the second video content in the marked order, so that the user can customize the playing order of multiple video clips in the second video content.
In one possible implementation manner, sequentially jumping to and playing each target video clip in the first video content in the marked order, based on the start time point and the end time point of each target video clip in the first video content, includes:
reading a start time point and an end time point of a third video clip in the first video content based on the first video composition file, the third video clip being any one of the target video clips;
playing the first video content starting from the start time point of the third video clip in the first video content;
periodically acquiring a second playing time point of the first video content; and
in response to the second playing time point being not earlier than the end time point of the third video clip in the first video content, reading a start time point and an end time point of the next target video clip in the first video content based on the first video composition file, in the marked order.
In an embodiment of the present application, when there are at least two target video clips, they have an order in which they were marked. For example, during playing of the first video content, the user may first mark the 3rd video clip, then jump back to the 2nd video clip and mark it; accordingly, when generating the first video composition file, the terminal may indicate in it the order in which the 2nd and 3rd video clips were marked, for example by placing the clip identifier of the 2nd video clip after that of the 3rd video clip, or by placing the start and end time points of the 2nd video clip after those of the 3rd video clip. When playing the second video content, the terminal plays it based on the first video content in the marked order indicated by the first video composition file: for example, the terminal first plays the 3rd video clip in the first video content, and when the 3rd video clip finishes, jumps to the start time point of the 2nd video clip and starts playing the 2nd video clip.
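A minimal sketch of such jump playback, under the same assumptions as the earlier sketches (player is the video element from above); it also illustrates the loop-as-a-whole behavior described below, wrapping back to the first marked clip after the last one:

```js
// Play the second video content by jumping through the composition file's
// target clips in marked order; after the last one, loop back to the first.
function playComposition(file) {
  let i = 0;
  player.currentTime = file.segments[i].start;
  player.play();
  const tick = () => {
    // periodically acquire the second playing time point
    if (player.currentTime >= file.segments[i].end) {
      i = (i + 1) % file.segments.length; // next marked clip, or wrap to the first
      player.currentTime = file.segments[i].start;
    }
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}

// e.g. playComposition(JSON.parse(localStorage.getItem('composition')));
```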
In one possible implementation, reading the start time point and end time point of the next target video clip in the first video content based on the first video composition file, in the marked order, in response to the second playing time point being not earlier than the end time point of the third video clip in the first video content, includes:
in response to the second playing time point being not earlier than the end time point of the third video clip in the first video content, and the third video clip being a target video clip other than the last marked one, reading the start time point and end time point of the target video clip next to the third video clip in the first video content, based on the first video composition file and in the marked order.
In the embodiment of the application, during playback of the second video content and before the last target video clip is played, each time a target video clip finishes playing, the terminal can jump to the start time point of the next target video clip in the first video content and continue playing, following the playing order of the target video clips.
In one possible implementation, responsive to the second play time point not being earlier than the end time point of the third video clip in the first video content, reading a start time point and an end time point of a next target video clip in the first video content based on the first video composition file in the marked order, comprising:
in response to the second play time point not being earlier than the end time point of the third video clip in the first video content, and the third video clip being the last marked video clip among the target video clips, reading the start time point and the end time point of the first target video clip in the first video content based on the first video composition file in the marked order.
In the embodiment of the application, during playback of the second video content, after the last target video clip finishes playing, the terminal can jump back to the start time point of the first target video clip and continue playing; that is, when playing the second video content, the terminal loops the at least two target video clips as a whole.
In another possible implementation, the terminal ends playing of the second video content in response to the second playing time point not being earlier than an ending time point of the third video clip in the first video content, and the third video clip being a last marked video clip in the respective target video clips.
The above description takes the first video content being a single video as an example. Optionally, when the first video content includes at least two independent video clips, the terminal can directly read and play the corresponding target video clip according to the clip identifier of the target video clip.
In another possible implementation, when there are at least two target video clips, their playing order in the second video content may also follow their playing order in the first video content.
Alternatively, in another possible implementation manner, when there are at least two target video clips, their playing order in the second video content may be customized by the user after the terminal receives the user's video composition operation. For example, upon receiving the video composition operation, the terminal may display an order-setting interface on an upper layer of the video playing interface. The order-setting interface contains icons of the at least two target video clips, whose arrangement the user can adjust. When the terminal receives a trigger operation on a confirm control in the order-setting interface, the arrangement order of the icons in the interface is set as the playing order of the at least two target video clips in the second video content.
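If such an order-setting interface were implemented at the Web end, the confirm control might simply re-sort the marked clips to match the user-arranged icon order. A small hypothetical sketch (names assumed):

```typescript
type Clip = { id: string; start: number; end: number };

// Re-sort the marked clips so playback follows the user-arranged icon order.
function confirmCustomOrder(iconOrder: string[], marked: Clip[]): Clip[] {
  return iconOrder
    .map(id => marked.find(c => c.id === id))
    .filter((c): c is Clip => c !== undefined);
}

// e.g. the user drags clip-2's icon ahead of clip-3's icon:
const customOrder = confirmCustomOrder(
  ["clip-2", "clip-3"],
  [{ id: "clip-3", start: 120, end: 180 }, { id: "clip-2", start: 60, end: 120 }],
);
```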
Step 406, displaying the sharing control in the video playing interface.
In the embodiment of the application, after the terminal synthesizes the second video content, the terminal can share the second video content with other users (such as friends).
Step 407, in response to receiving the triggering operation of the sharing control, sharing the first video composition file to the first target terminal, so that the first target terminal plays the second video content based on the first video content and the first video composition file.
For example, when the terminal receives a click operation on the sharing control, it may display a sharing target interface containing options for sharable targets, such as a list of sharable groups or sharable friends. After receiving the user's selection of a sharable target, the terminal sends the first video composition file to the terminal of that target (for example, when the sharable target is a friend, to the friend's terminal; when the sharable target is a group, to the terminal of each user in the group). After receiving the first video composition file, the target's terminal may display a play option corresponding to the file and, upon a trigger operation on that play option, play the second video content based on the first video content and the first video composition file.
In one possible implementation manner, in response to receiving the second video composition file shared by the second target terminal, playing third video content in the video playing interface based on the first video content and the second video composition file; wherein the second video composition file is used to indicate each video clip marked in the second target terminal.
In one possible implementation manner, in response to receiving the second video composition file shared by the second target terminal, playing the third video content in the video playing interface based on the first video content and the second video composition file, including:
in response to receiving the second video composition file shared by the second target terminal, displaying a play control corresponding to the second video composition file;
and in response to receiving a trigger operation on the play control, playing the third video content in the video playing interface based on the first video content and the second video composition file.
In the embodiment of the present application, the terminal may play the third video content synthesized and shared by other users based on the first video content, and the process is similar to the manner in which the first target terminal plays the shared second video content, which is not described herein again.
In summary, according to the scheme shown in the embodiment of the present application, when playing video content including at least two video clips through a video element in HTML, the terminal can receive a user's marking operation on target video clips among the at least two video clips and, upon subsequently receiving a video composition operation, combine the marked target video clips into new video content and play it. The scheme allows a user watching video content on the Web end to mark video clips directly in the video content and synthesize new video content, thereby expanding the ways video can be produced, reducing the difficulty of video production, and improving video production efficiency.
The scheme disclosed by the embodiment of the application can be applied to any scene in which a user produces and shares videos through a Web terminal. For example, take a scenic spot introduction video for a virtual scene in a game application: the virtual scene includes a plurality of scenic spots, each scenic spot may have one or more scenic spot videos, and each scenic spot video corresponds to one of the video clips in the above embodiments. The game application may provide the user's terminal with a scenic spot introduction video comprising a plurality of scenic spot videos, e.g., stitched together from those videos. While the scenic spot introduction video plays, the user can mark one or more scenic spot videos to synthesize a customized scenic spot introduction video, which can then be previewed or shared with friends.
Referring to fig. 8, a schematic diagram of video composition according to an exemplary embodiment of the present application is shown. As shown in FIG. 8, the process of user-defined composition of the attraction introduction video within the game may be as follows:
Step 81, preparing the scenic spot videos.
During development of the game scene, a developer can produce scenic spot videos with video production software. For example, a certain game project contains 24 scenic spots in total, so 24 scenic spot videos need to be produced, yielding 24 video files.
Step 82, merge into an integrated video.
For example, the developer groups the 24 scenic spot videos from step 81 according to project characteristics. Referring to fig. 9, which shows a scenic spot grouping schematic diagram according to an embodiment of the present application, the developer may divide the 24 scenic spot videos into 3 groups according to the three virtual cities in the game.
In the embodiment of the application, the purpose of grouping is to reduce the size of each prefabricated video: because the user views the scenic spots of only one city at a time, scenic spot videos that do not belong to that city need not be loaded, saving the user's network bandwidth. Alternatively, all 24 scenic spot videos can be combined into one complete video.
The developer merges the 3 groups in the video production software to obtain 3 integrated videos, each containing all scenic spot video content for its city. The merging process can also record the number of each scenic spot video and its start position, end position and other data within the respective integrated video; these data are used for subsequent video frame-skipping operations.
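One possible shape for the data recorded at merge time, assumed here for illustration rather than taken from the disclosure, is a per-city manifest mapping each scenic spot video number to its start and end positions in the integrated video:

```typescript
interface SpotEntry {
  number: string; // scenic spot video number, e.g. "A2"
  start: number;  // start position within the integrated video, in seconds
  end: number;    // end position within the integrated video, in seconds
}

// One manifest per integrated (per-city) video; times are illustrative.
const cityAManifest: SpotEntry[] = [
  { number: "A1", start: 0, end: 30 },
  { number: "A2", start: 30, end: 65 },
  { number: "A3", start: 65, end: 95 },
  // ...one entry for each scenic spot video merged into this city's video
];
```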
Step 83, loading and playing the prefabricated integrated video.
After the game product goes online, the user's terminal can load the corresponding integrated video from the 3 integrated videos according to the user's selection.
After loading is completed, the terminal plays the integrated video from the beginning, and the user can browse each scenic spot video in the integrated video.
Step 84, circularly playing the scenic spot videos.
Each scenic spot video is played in a loop so that the user has enough time to view the scenic spot content. The loop playback also uses video frame skipping: when playback reaches the end position of the scenic spot, the video play head is returned to the scenic spot's start position, so that, from the user's perspective, each looping scenic spot appears identical to an independent scenic spot video.
When the user clicks the "last station" or "next station" button in the video playing interface, the terminal may stop playing the current scenic spot video and play the last/next scenic spot video. For example, when the user clicks "next station", the terminal controls the playing head of the video to instantaneously move to the starting position of the next scenic spot, so that the terminal starts playing the content of the next scenic spot video, and the content is also circularly played in the scenic spot video. When the user clicks the last station, the terminal adjusts the position of the playing head to the starting position of the video of the last scenic spot, and the other is the same.
Step 85, recording the scenic spot videos selected by the user.
When the user encounters a favorite scenic spot video, the user may perform a card-punching operation (for example, clicking the "punch card immediately" button); upon card punching, the game application in the terminal records the number of the current scenic spot video. For example, referring to fig. 10, which shows a schematic diagram of scenic spot video numbers according to an embodiment of the present application, the user punches cards on scenic spot videos A2, A4, A8 and A6 in sequence, and the game application records the numbers of these 4 scenic spot videos for subsequent composite playback.
Step 86, judging whether to generate a synthesized video; if yes, go to step 87, otherwise, return to step 84.
When the integrated video has played to the last scenic spot video, the terminal may display a "generate my Vlog" button. After the user clicks the button, the game application obtains the start position and end position of each card-punched scenic spot video from the scenic spot video numbers recorded in step 85. Otherwise, playback of all the scenic spot videos in the integrated video restarts from the beginning.
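Resolving the recorded numbers into start and end positions could then be a simple lookup against the merge manifest; the names below are assumptions:

```typescript
const punchedNumbers = ["A2", "A4", "A8", "A6"]; // recorded in card-punching order

// Look up each punched number's start/end positions in the integrated video.
const punchedPlaylist = punchedNumbers.map(
  n => cityAManifest.find(e => e.number === n)!,
);
```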
Step 87, moving the video play head to the start position of the card-punched scenic spot video.
Step 88, playing the current card-punched scenic spot video.
Step 89, the play head reaches the end position of the current card-punched scenic spot video.
Step 810, judging whether all card-punched scenic spot videos have been played; if yes, proceeding to step 812, otherwise proceeding to step 811.
Step 811, acquiring the start position of the next card-punched scenic spot video, and returning to step 87.
Step 812, the playback is ended.
After obtaining these data, the game application adjusts the video play head to the start position of the first scenic spot video, A2, starts playing, and monitors the play head position in real time. When the play head reaches the end position of A2, it adjusts the play head to the start position of scenic spot video A4, and so on, until all card-punched scenic spot videos have been played in the marked order.
When the scenic spot videos card-punched by the user are combined, no new video file is generated; the originally prefabricated integrated video is still used. In this process, however, the user sees only the scenic spot videos they punched, while the scenic spot videos that were not punched are skipped. Fig. 11 shows a schematic diagram of the playing sequence of a composite video according to an embodiment of the present application.
When all card-punched scenic spot videos have been played once, the user's synthesized video can either end playback or loop from the beginning. The video merging is completed entirely at the Web end, with the video synthesized according to the user's card-punching order.
The scheme provided by the embodiment of the application can also share the user's spliced video with friends. For example, when the user wants to share the video synthesized from their card punching, the game application can carry the order data of the card-punched scenic spot videos (i.e., the first video composition file) in the sharing address. When a friend opens the sharing address, the game application parses the scenic spot video order data from the address according to an agreed format. After the friend's Web end loads the corresponding integrated video, it plays the video in the manner of steps 87 to 811, and the friend's Web end then sees the composite video shared by the other user.
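A hedged sketch of carrying the card-punching order in the sharing address: the scenic spot numbers ride along as a query parameter, so no video file is uploaded or transferred. The URL, parameter name and format below are assumptions:

```typescript
// Sharer side: embed the punched scenic spot numbers in the share URL.
function buildShareUrl(base: string, numbers: string[]): string {
  const url = new URL(base);
  url.searchParams.set("clips", numbers.join(","));
  return url.toString();
}

// Friend side: recover the order in the agreed format, then load the
// corresponding integrated video and play it per steps 87 to 811.
function parseShareUrl(href: string): string[] {
  const clips = new URL(href).searchParams.get("clips");
  return clips ? clips.split(",") : [];
}

const shareUrl = buildShareUrl("https://game.example.com/vlog", ["A2", "A4", "A8", "A6"]);
const order = parseShareUrl(shareUrl); // ["A2", "A4", "A8", "A6"]
```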
With this method, sharing a video with friends does not require actually transmitting the user's composite video to the server, which can greatly reduce the server's storage and network bandwidth costs.
The above process can be summarized as:
1) Multiple independent videos are combined into one.
2) Loading and playing the combined video, looping each independent video interval until the user chooses to leave the current scenic spot.
3) When the user punches a card on a favorite scenic spot, the program records the number of the current scenic spot.
4) Generating the user's spliced video, which is still the video loaded in step 2), so no reloading is needed; from this point the video plays only the scenic spot intervals the user punched, skipping the intervals that were not punched.
5) Playing all selected video intervals sequentially in the user's card-punching order and then ending, thereby simulating at the Web end the function and effect of combining multiple independent videos into a user-assembled whole.
Through this scheme, a comparable video-merging effect can be achieved in environments and on devices that do not support video merging, such as the Web end, without uploading any video to a server; the user can smoothly experience the process of selecting and merging videos, achieving a personalized merging effect while keeping the entire interactive experience fluent.
Fig. 12 is a block diagram of a video playback apparatus according to an exemplary embodiment of the present application. The video playback apparatus may be applied in a computer device to perform all or part of the steps of the method as shown in fig. 2 or fig. 4. As shown in fig. 12, the video playing device includes:
a video content acquisition module 1201, configured to acquire a first video content through a video element of a hypertext markup language HTML; the first video content comprises at least two video clips;
A first playing module 1202, configured to play at least two video clips in a first video content in a video playing interface;
a marking module 1203 configured to receive a marking operation for at least one target video clip of at least two video clips during the process of playing the first video content;
and a second playing module 1204, configured to play, in response to receiving a video composition operation, second video content composed of at least one of the target video clips in the video playing interface.
In one possible implementation manner, the video playing interface comprises a fragment jump control; the first playing module 1202 is configured to,
circularly playing a first video clip in the video playing interface; the first video clip is any one of at least two of the video clips;
responding to the receiving of the triggering operation of the fragment jump control, and circularly playing a second video fragment in the video playing interface; the second video clip is another video clip other than the first video clip of at least two of the video clips.
In one possible implementation, the first video content is a single video composed of at least two video segments connected end to end; at least two video clips correspond to a start time point and an end time point in the first video content respectively;
The first playing module 1202 is configured to,
periodically acquiring a first playing time point of the first video content in response to playing the first video clip in the video playing interface;
and in response to the first playing time point not being earlier than the ending time point of the first video segment in the first video content, jumping to the starting time point of the first video segment in the first video content for playing.
In one possible implementation, the second playing module 1204 is configured to,
generating a first video composition file in response to receiving a video composition operation, the first video composition file being used to indicate at least one of the target video clips;
and in response to receiving the playing operation of the second video content, playing the second video content in the video playing interface based on the first video content and the first video composite file.
In one possible implementation, the at least one target video clip includes two or more target video clips, and the first video composition file is further used to indicate the marked order of the target video clips;
The second playing module 1204 is configured to, in response to receiving a playing operation of the second video content, sequentially skip-play each of the target video clips in the first video content according to the marked order based on a start time point and an end time point of each of the target video clips in the first video content.
In one possible implementation, the second playing module is configured to,
reading a start time point and an end time point of a third video clip in the first video content based on the first video composition file; the third video clip is any one of the target video clips;
playing the first video content starting from a starting point in time of the third video clip in the first video content;
periodically acquiring a second playing time point of the first video content;
and in response to the second playing time point not being earlier than the ending time point of the third video clip in the first video content, reading a starting time point and an ending time point of the next target video clip in the first video content based on the first video composition file in the marked order.
In one possible implementation, the second playing module 1204 is configured to,
in response to the second play time point not being earlier than the end time point of the third video clip in the first video content, and the third video clip being a target video clip other than the last marked video clip, read a start time point and an end time point of the target video clip next after the third video clip in the first video content based on the first video composition file in the marked order.
In one possible implementation, the second playing module is configured to,
and in response to the second playing time point not being earlier than the ending time point of the third video clip in the first video content, and the third video clip being the last marked video clip among the target video clips, reading the starting time point and the ending time point of the first target video clip in the first video content based on the first video composition file in the marked order.
In one possible implementation, the apparatus further includes:
the sharing control display module is used for responding to the received video synthesis operation and displaying a sharing control in the video playing interface;
and the sharing module is used for responding to the receiving of the triggering operation of the sharing control, and sharing the first video composite file to a first target terminal so that the first target terminal plays the second video content based on the first video content and the first video composite file.
In one possible implementation, the apparatus further includes:
the third playing module is used for responding to the received second video synthesized file shared by the second target terminal and playing third video content in the video playing interface based on the first video content and the second video synthesized file;
wherein the second video composition file is used to indicate the video clips each marked in the second target terminal.
In one possible implementation, the third playing module is configured to,
in response to receiving a second video composition file shared by a second target terminal, displaying a play control corresponding to the second video composition file;
And responding to the receiving of the triggering operation of the playing control, and playing third video content in the video playing interface based on the first video content and the second video composite file.
In one possible implementation manner, the video playing interface comprises a fragment mark control;
the marking module 1203 is configured to receive a triggering operation of the segment marking control during a process of playing a fourth video segment in the first video content; the fourth video clip is any one of the at least one target video clip.
In one possible implementation, the apparatus further includes:
and the state modification module is used for modifying the display state of the segment marking control into a marked state in the process of playing the fourth video segment in response to the completion of marking the fourth video segment.
In one possible implementation, the second playing module 1204 is configured to,
when at least two video clips in the first video content are played in the video playing interface, responding to the fact that the currently played video clip is the last video clip in the at least two video clips, and displaying a video composition control in the video playing interface;
And playing the second video content in the video playing interface in response to receiving the video composition operation performed on the video composition control.
In summary, according to the scheme shown in the embodiment of the present application, when playing video content including at least two video clips through a video element in HTML, the terminal can receive a user's marking operation on target video clips among the at least two video clips and, upon subsequently receiving a video composition operation, combine the marked target video clips into new video content and play it. The scheme allows a user watching video content on the Web end to mark video clips directly in the video content and synthesize new video content, thereby expanding the ways video can be produced, reducing the difficulty of video production, and improving video production efficiency.
Fig. 13 shows a block diagram of a computer device 1300 provided by an exemplary embodiment of the application. The computer device 1300 may be a terminal of a user, such as: smart phones, tablet computers, personal computers, and the like.
In general, the computer device 1300 includes: a processor 1301, and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. Processor 1301 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. Memory 1302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one computer instruction for execution by processor 1301 to implement all or part of the steps performed by a terminal in the video playback method provided by the method embodiments of the present application.
In some embodiments, the computer device 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. The processor 1301, the memory 1302, and the peripheral interface 1303 may be connected by a bus or signal lines. The respective peripheral devices may be connected to the peripheral device interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, a display screen 1305, a camera assembly 1306, audio circuitry 1307, and a power supply 1309.
In some embodiments, computer device 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyroscope sensor 1312, pressure sensor 1313, optical sensor 1315, and proximity sensor 1316.
Those skilled in the art will appreciate that the architecture shown in fig. 13 is not limiting as to the computer device 1300, and may include more or fewer components than shown, or may combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as a memory, is also provided, comprising at least one computer instruction executable by a processor to perform all or part of the steps performed by the terminal in the method shown in any of the embodiments of fig. 2 or fig. 4 above. For example, the non-transitory computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of the computer device, and executed by the processor, to cause the computer device to perform all or part of the steps performed by the terminal in the method of any of the embodiments of fig. 2 or fig. 4 described above.
It should be noted that the information (including but not limited to user account information and user personal information), data (including but not limited to data for analysis, stored data, and displayed data) and signals involved in the present application are all authorized by the user or fully authorized by all parties, and the collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions. For example, information such as user accounts involved in the application is acquired with full authorization.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (18)

1. A video playing method, the method comprising:
acquiring first video content through video elements of hypertext markup language (HTML); the first video content comprises at least two video clips;
playing at least two video clips in the first video content in a video playing interface;
receiving a marking operation of at least one target video clip in at least two video clips in the process of playing the first video content;
and playing second video content synthesized by at least one target video clip in the video playing interface in response to receiving the video synthesizing operation.
2. The method of claim 1, wherein the video playback interface includes a clip jump control; the playing at least two video clips in the first video content in a video playing interface includes:
circularly playing a first video clip in the video playing interface; the first video clip is any one of at least two of the video clips;
responding to the receiving of the triggering operation of the fragment jump control, and circularly playing a second video fragment in the video playing interface; the second video clip is another video clip other than the first video clip of at least two of the video clips.
3. The method of claim 2, wherein the first video content is a single video composed of at least two of the video segments end-to-end; at least two video clips correspond to a start time point and an end time point in the first video content respectively;
the circularly playing the first video clip in the video playing interface comprises the following steps:
periodically acquiring a first playing time point of the first video content in response to playing the first video clip in the video playing interface;
and in response to the first playing time point not being earlier than the ending time point of the first video segment in the first video content, jumping to the starting time point of the first video segment in the first video content for playing.
4. The method of claim 3, wherein playing the second video content synthesized from the at least one target video clip in the video playback interface in response to receiving a video synthesis operation comprises:
generating a first video composition file in response to receiving a video composition operation, the first video composition file being used to indicate at least one of the target video clips;
And in response to receiving the playing operation of the second video content, playing the second video content in the video playing interface based on the first video content and the first video composite file.
5. The method of claim 4, wherein the at least one target video clip comprises two or more target video clips, and the first video composition file is further used to indicate a marked order of the target video clips;
and wherein the playing the second video content in the video playing interface based on the first video content and the first video composition file in response to receiving the playing operation of the second video content comprises:
and in response to receiving the playing operation of the second video content, sequentially jumping and playing the target video clips in the first video content according to the marked sequence based on the starting time point and the ending time point of the target video clips in the first video content.
6. The method of claim 5, wherein sequentially jumping playback of each of the target video clips in the first video content in the marked order based on a start time point and an end time point of each of the target video clips in the first video content, comprising:
Reading a start time point and an end time point of a third video clip in the first video content based on the first video composition file; the third video clip is any one of the target video clips;
playing the first video content starting from a starting point in time of the third video clip in the first video content;
periodically acquiring a second playing time point of the first video content;
and in response to the second playing time point not being earlier than the ending time point of the third video clip in the first video content, reading a starting time point and an ending time point of the next target video clip in the first video content based on the first video composition file in the marked order.
7. The method of claim 6, wherein the reading, in response to the second play time point not being earlier than the end time point of the third video clip in the first video content, a start time point and an end time point of the next target video clip in the first video content based on the first video composition file in the marked order comprises:
in response to the second play time point not being earlier than the end time point of the third video clip in the first video content, and the third video clip being a target video clip other than the last marked video clip, reading a start time point and an end time point of the target video clip next after the third video clip in the first video content based on the first video composition file in the marked order.
8. The method of claim 6, wherein the reading, in response to the second play time point not being earlier than the end time point of the third video clip in the first video content, a start time point and an end time point of the next target video clip in the first video content based on the first video composition file in the marked order comprises:
and in response to the second playing time point not being earlier than the ending time point of the third video clip in the first video content, and the third video clip being the last marked video clip among the target video clips, reading the starting time point and the ending time point of the first target video clip in the first video content based on the first video composition file in the marked order.
9. The method according to claim 4, wherein the method further comprises:
responding to the received video composition operation, and displaying a sharing control in the video playing interface;
and in response to receiving the triggering operation of the sharing control, sharing the first video composite file to a first target terminal, so that the first target terminal plays the second video content based on the first video content and the first video composite file.
10. The method according to claim 4, wherein the method further comprises:
responding to the received second video synthesized file shared by the second target terminal, and playing third video content in the video playing interface based on the first video content and the second video synthesized file;
wherein the second video composition file is used to indicate the video clips each marked in the second target terminal.
11. The method of claim 10, wherein playing third video content in the video playback interface based on the first video content and the second video composition file in response to receiving a second video composition file shared by a second target terminal, comprises:
in response to receiving the second video composition file shared by the second target terminal, displaying a play control corresponding to the second video composition file;
and responding to the receiving of the triggering operation of the playing control, and playing third video content in the video playing interface based on the first video content and the second video composite file.
12. The method of claim 2, wherein the video playback interface includes a clip markup control;
the receiving, during the playing of the first video content, a marking operation for at least one target video clip of at least two video clips includes:
receiving triggering operation of the fragment marking control in the process of playing a fourth video fragment in the first video content; the fourth video clip is any one of the at least one target video clip.
13. The method according to claim 12, wherein the method further comprises:
and in response to the completion of marking the fourth video clip, modifying the display state of the clip marking control into a marked state in the process of playing the fourth video clip.
14. The method of claim 1, wherein playing the second video content synthesized from the at least one target video clip in the video playback interface in response to receiving a video synthesis operation comprises:
when at least two video clips in the first video content are played in the video playing interface, responding to the fact that the currently played video clip is the last video clip in the at least two video clips, and displaying a video composition control in the video playing interface;
and playing the second video content in the video playing interface in response to receiving the video composition operation performed on the video composition control.
15. A video playback device, the device comprising:
the video content acquisition module is used for acquiring first video content through video elements of hypertext markup language (HTML); the first video content comprises at least two video clips;
the first playing module is used for playing at least two video clips in the first video content in the video playing interface;
a marking module, configured to receive a marking operation on at least one target video clip of at least two video clips during the process of playing the first video content;
And the second playing module is used for playing second video content synthesized by at least one target video clip in the video playing interface in response to receiving the video synthesis operation.
16. A computer device, comprising a processor and a memory, the memory storing at least one computer instruction, the at least one computer instruction being loaded and executed by the processor to implement the video playback method of any one of claims 1 to 14.
17. A computer readable storage medium having stored therein at least one computer instruction that is loaded and executed by a processor to implement the video playback method of any one of claims 1 to 14.
18. A computer program product, characterized in that the computer program product comprises computer instructions that are read and executed by a processor of a computer device, so that the computer device performs the video playback method of any one of claims 1 to 14.
CN202210486102.3A 2022-05-06 2022-05-06 Video playing method, device, equipment, storage medium and program product Pending CN117061814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210486102.3A CN117061814A (en) 2022-05-06 2022-05-06 Video playing method, device, equipment, storage medium and program product


Publications (1)

Publication Number Publication Date
CN117061814A true CN117061814A (en) 2023-11-14



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination