CN119583911A - Method and device for processing video file

Publication number: CN119583911A
Authority: CN (China)
Prior art keywords: video frame, event, video, display interface, display
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202411586782.1A
Other languages: Chinese (zh)
Inventors: 刘俏, 李天舒
Current assignee: Beijing Dajia Internet Information Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202411586782.1A
Publication of CN119583911A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the technical field of internet applications and provides a method, an apparatus, an electronic device, and a computer program product for processing video files. The method comprises: monitoring a play instruction for a video file; in response to detecting the play instruction, playing the video file on a display interface; during playback of the video file, monitoring a first event for the video file, the first event being used to trigger the display interface to display at least one video frame, the at least one video frame representing a highlight in the video file; and, in response to detecting the first event, displaying a video frame display page on the display interface, the video frame display page being used to display the at least one video frame. In this way, the user can quickly obtain the highlights in the video file, and video consumption efficiency is improved.

Description

Method and device for processing video file
Technical Field
The present application relates to the field of internet application technologies, and in particular, to a method and apparatus for processing video files.
Background
Video consumption refers to the process by which users spend time and/or money on video content and services to meet audiovisual needs such as learning, entertainment, and daily life. As video consumption platforms diversify and video content grows richer, how to improve video consumption efficiency has become a problem to be solved.
Disclosure of Invention
The application provides a method and an apparatus for processing video files, which can improve video consumption efficiency.
In a first aspect, a method for processing a video file is provided. The method comprises: monitoring a play instruction for the video file; in response to detecting the play instruction, playing the video file on a display interface; during playback of the video file, monitoring a first event for the video file, the first event being used to trigger the display interface to display at least one video frame, the at least one video frame representing a highlight in the video file; and, in response to detecting the first event, displaying a video frame display page on the display interface, the video frame display page being used to display the at least one video frame.
Triggering the display interface, via the first event, to display at least one video frame representing highlights in the video file lets the user quickly obtain the highlights in the video file, which improves video consumption efficiency.
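By way of illustration only, a minimal Kotlin sketch of this flow is given below. All identifiers are hypothetical and do not come from the application; the sketch merely models the claimed sequence of monitored events.

```kotlin
// Hypothetical sketch of the claimed flow; names are illustrative only.
data class VideoFrameInfo(val timestampMs: Long, val thumbnailUrl: String)

sealed interface MonitoredEvent {
    object PlayInstruction : MonitoredEvent
    object FirstEvent : MonitoredEvent // e.g. two-finger pinch, interface element, voice command
}

class VideoFileProcessor(
    private val playOnDisplayInterface: (videoId: String) -> Unit,
    private val showVideoFramePage: (frames: List<VideoFrameInfo>) -> Unit,
    private val highlightFrames: (videoId: String) -> List<VideoFrameInfo>,
) {
    fun onEvent(videoId: String, event: MonitoredEvent) = when (event) {
        // Play instruction: play the video file on the display interface.
        MonitoredEvent.PlayInstruction -> playOnDisplayInterface(videoId)
        // First event: display the page of frames that represent highlights.
        MonitoredEvent.FirstEvent -> showVideoFramePage(highlightFrames(videoId))
    }
}
```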
In one implementation, the first event includes an event triggered by a two-finger pinch gesture, an event triggered by a first interface element, or an event triggered by a voice command.
Displaying at least one video frame of the video file on the display interface via an event triggered by a two-finger pinch gesture, a first interface element, or a voice command is, on the one hand, simple and fast, and on the other hand matches users' operating habits.
In one implementation, the method further includes displaying the at least one video frame on the display interface in response to detecting a pause operation for the video file during playback.
When a user pauses playback of the video file, the user may want to browse the highlights; displaying at least one video frame representing the highlights on the display interface for the user to select from therefore improves the user experience.
In one implementation, the position of the at least one video frame on the video playback progress bar is near the position on the progress bar corresponding to the pause time.
A user pausing playback at a certain moment may indicate that there is a highlight the user wants to browse near that moment; showing the user at least one video frame of highlights located near the pause time therefore better matches the user's expectations.
In one implementation, the method further includes: while the at least one video frame is displayed on the display interface, in response to detecting a second event, displaying on the display interface a comment page for the video file, a target video frame of the at least one video frame being inserted into the comment page; and, in response to a publishing operation on the comment page, publishing comment content that uses the target video frame as a comment image.
Through the second event, the target video frame is quickly inserted into the comment page as a comment image, and the comment page with the inserted target video frame can be quickly provided to the user. This enriches the user's comment operations, improves comment efficiency, and improves the user's interactive experience.
In one implementation, the second event includes an event triggered by a click operation on the target video frame, an event triggered by a sliding operation across the at least one video frame, or an event triggered by a second interface element.
Quickly inserting the target video frame into the comment page as a comment image via a click on the target video frame, a sliding operation across the at least one video frame, or an operation on the second interface element is, on the one hand, simple and fast, and on the other hand matches the user's understanding of, and habits with, operation gestures.
In one implementation, the method further includes: while the at least one video frame is displayed on the display interface, in response to detecting a third event, displaying on the display interface a co-shooting (duet) page for a co-shooting operation; in response to detecting a recording operation for a co-shot video, recording the co-shot video with the target video frame as the co-shooting background; and, in response to detecting a publishing operation for the co-shot video, publishing the co-shot video.
Through the third event, a co-shooting page that uses the target video frame as the background is displayed, so a page for the co-shooting operation can be quickly provided to the user. This gives the user a creative starting point based on the target video frame, so the user can co-shoot against it, which improves the user's interactive experience and simplifies the user's operations.
In one implementation, the third event includes an event triggered by a click operation on the target video frame, an event triggered by a sliding operation across the at least one video frame, or an event triggered by a third interface element.
Providing the user with a co-shooting page that uses the target video frame as the background via a click on the target video frame, a sliding operation across the at least one video frame, or an operation on the third interface element is, on the one hand, simple and fast, and on the other hand matches the user's understanding of, and habits with, operation gestures.
In one implementation, the method further includes playing, on the display interface, a highlight corresponding to a target video frame of the at least one video frame in response to detecting a fourth event while the at least one video frame is displayed on the display interface.
Through the fourth event, the highlight corresponding to the target video frame is played, so the user can not only quickly obtain each highlight in the video file but also jump to and play the highlights of interest, further improving video consumption efficiency.
In one implementation, the fourth event includes an event triggered by a click operation for the target video frame and/or an event triggered by a sliding operation between the at least one video frame.
Playing the highlight corresponding to the target video frame via a click on the target video frame and/or a sliding operation across the at least one video frame is, on the one hand, simple and fast, and on the other hand matches the user's understanding of, and habits with, operation gestures.
In one implementation, the method further includes displaying an edit page on the display interface in response to detecting a fifth event while the at least one video frame is displayed on the display interface, the edit page being used to edit a target video frame of the at least one video frame.
Through the fifth event, the editing page is called, so that a user can edit the target video frame in at least one video frame in the editing page, the diversified requirements of the user are met, and the participation degree and the creation experience of the user in the video consumption process are improved.
In one implementation, the method further includes displaying an interface element for triggering the fifth event while displaying at least one video frame in the video file on the display interface.
Through the interface element for triggering the fifth event, the editing of the target video frame is realized, the editing function of the video frame is more intuitively displayed, and a user can simply and quickly enter an editing page.
In one implementation, the method further includes, while the at least one video frame is displayed on the display interface, in response to detecting a sixth event, triggering at least one of downloading, praising, commenting, sharing, and collecting for a target video frame of the at least one video frame.
Through the sixth event, operations such as downloading, praising, commenting, sharing, and collecting the target video frame are realized, which meets the diversified needs of users and improves their participation and interactive experience during video consumption.
In one implementation, the method further includes displaying an interface element for triggering the sixth event while the at least one video frame is displayed on the display interface.
Through the interface element for triggering the sixth event, operations such as downloading, praising, commenting, sharing, and collecting the target video frame are realized; the interactive functions for video frames are displayed more intuitively, and the user can perform these interactive operations simply and quickly.
In one implementation, when the at least one video frame is displayed on the display interface, the at least one video frame is arranged on the display interface according to the sequence of appearance of the at least one video frame in the video file.
Arranging the at least one video frame on the display interface in its order of appearance in the video file helps preserve the continuity of the video content, so the user can judge the order of the frames more intuitively. From a technical point of view, it is also a relatively simple and direct implementation: no complex algorithm or logic is needed to arrange the frames, making it easier to realize on various playback platforms and devices.
In one implementation, the method further includes switching between the at least one video frame in response to monitoring a seventh event while the at least one video frame is displayed on the display interface.
Through the seventh event, the switching between at least one video frame is realized, and the user can easily view the last video frame or the next video frame without leaving the current interface, so that the browsing continuity and smoothness are maintained.
In one implementation, the seventh event includes an event triggered by a single-finger swipe gesture in a preset direction or an event triggered by a fourth interface element.
Realizing the switching between the at least one video frame via an event triggered by a single-finger swipe gesture in a preset direction, or via an event triggered by the fourth interface element, lets the user browse the video frames simply and efficiently, improves the efficiency of browsing highlights in the video file, and increases the interactivity between the user and the video frame content, so the user can actively participate in the browsing process and the interactive experience is improved.
In one implementation, the method further includes displaying a fifth interface element for indicating a location of the at least one video frame in a video playback progress bar when the at least one video frame is displayed on the display interface.
And when at least one video frame is displayed on the display interface, displaying a fifth interface element for indicating the position of the at least one video frame in the video playing progress bar, so that the user can know the playing progress of the video frame on the display interface, and the interactive experience of the user is improved.
In one implementation, the method further includes displaying a sixth interface element when the at least one video frame is displayed on the display interface, where the sixth interface element is used to indicate interaction information corresponding to the at least one video frame.
And when at least one video frame is displayed on the display interface, displaying a sixth interface element to indicate interaction information corresponding to the at least one video frame, so that the user can know the popularity of the video frame on the display interface, and the interaction experience of the user is improved.
In one implementation, the interaction information includes at least one of: a high-praise time and the number of praises corresponding to the high-praise time, a high-comment time and the number of comments corresponding to the high-comment time, a high-share time and the number of shares corresponding to the high-share time, and a high-review time and the number of reviews corresponding to the high-review time.
Reflecting the popularity of a video frame by its high-praise time, high-comment time, high-share time, or high-review time, or by the corresponding counts, is simpler and more intuitive.
In one implementation, the method further includes highlighting a target video frame of the at least one video frame on the display interface in response to monitoring an eighth event while the at least one video frame is displayed on the display interface.
By highlighting the target video frame in the at least one video frame on the display interface through the eighth event, more visual feedback can be provided for the user, the user can be helped to confirm that the user has accurately selected the target video frame of interest, and the accuracy and efficiency of the operation are improved.
In one implementation, the eighth event includes the target video frame being located in a middle position of the display interface or sliding to the target video frame based on a sliding operation between the at least one video frame.
Representing the selection of the target video frame by the target video frame being located at the middle position of the display interface, or by sliding to the target video frame via a sliding operation across the at least one video frame, and highlighting it accordingly is, on the one hand, simple and fast, and on the other hand matches the user's understanding of, and habits with, operation gestures.
In one implementation, the highlights in the video file are determined based on user interaction data, the interaction data including at least one of the number of praises obtained, the number of comments obtained, the number of shares, and the number of reviews.
At least one video frame representing the highlights in the video file is extracted and surfaced based on the interaction data, so the highlights presented to the user can better match the user's expectations.
In one implementation, the method further comprises playing a guide animation on the display interface in response to detecting, during playback of the video file, a pause operation and/or an operation of dragging the video playback progress bar, the guide animation being used to prompt the content and/or effect of the first event.
By playing the guide animation, the content and/or effect of the first event can be prompted to the user in time, so the user learns that, instead of dragging the video playback progress bar, the video frames representing the highlights can be obtained quickly through the first event.
In a second aspect, an apparatus for processing a video file is provided. The apparatus comprises an event monitoring unit configured to monitor a play instruction for the video file, and an interface presentation unit configured to play the video file on a display interface in response to detecting the play instruction. The event monitoring unit is further configured to monitor, during playback of the video file, a first event for the video file, the first event being used to trigger the display interface to display at least one video frame representing a highlight in the video file; the interface presentation unit is further configured to display, in response to detecting the first event, a video frame display page on the display interface, the video frame display page being used to display the at least one video frame.
In a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method as described in the first aspect or any implementation of the first aspect.
In a fourth aspect, there is provided an electronic device comprising one or more processors and a memory associated with the one or more processors, the memory for storing program instructions which, when read for execution by the one or more processors, perform a method as described in the first aspect or any implementation of the first aspect.
In a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method as described in the first aspect or any implementation of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments will be briefly described. It is to be understood that the drawings in the following description are only some embodiments of the present application and that other drawings may be derived from these drawings by one of ordinary skill in the art.
FIG. 1 is a diagram of a system architecture to which embodiments of the present application are applicable.
Fig. 2 is a flowchart of a method for processing a video file according to an embodiment of the present application.
Fig. 3 is a schematic diagram of each stage of a display interface according to an embodiment of the present application.
Fig. 4 is a schematic diagram of stages of a display interface according to another embodiment of the application.
Fig. 5 is a schematic diagram of stages of a display interface according to another embodiment of the application.
Fig. 6 is a schematic diagram of a comment page provided by an embodiment of the present application.
Fig. 7 is a schematic block diagram of an apparatus for processing video files according to an embodiment of the present application.
Fig. 8 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which are derived by a person skilled in the art based on the embodiments of the application, fall within the scope of protection of the application.
In order to facilitate understanding of the embodiments of the present application, a system architecture to which the embodiments of the present application may be applied will first be described by way of example. By way of example, fig. 1 shows a system architecture that may be applied to an embodiment of the present application, as shown in fig. 1, which may include a terminal device 110 and a server 120. Terminal device 110 may be connected to server 120 via a wireless network or a wired network. It should be understood that the number of terminal devices 110 and servers 120 in fig. 1 is merely illustrative. The system architecture on which embodiments of the present application are based may have any number of terminal devices 110 and servers 120, as desired for implementation.
Terminal devices 110 include, but are not limited to, smart mobile terminals, wearable devices, personal computers (PCs), smart home devices, and the like. Smart mobile devices may include, for example, cell phones, tablet computers, notebook computers, personal digital assistants (PDAs), in-vehicle internet terminals, etc. Wearable devices may include smart watches, smart glasses, smart bracelets, virtual reality (VR) devices, augmented reality (AR) devices, mixed reality devices (i.e., devices that support both virtual reality and augmented reality), and so on. Smart home devices may include smart televisions, smart refrigerators with display screens, and the like.
A client 112 may be provided within the terminal device 110. The client 112 involved in the embodiment of the present application may be an Application (APP) running on a terminal device, an applet, or a Web program running through a browser, etc. The client 112 may interact with the server 120 over a network to request and retrieve a media asset from the server 120 to play the media asset on a display interface of the client 112.
The server 120 may be a platform that provides media asset playback services. The server 120 may be a single server, a server cluster formed by multiple servers, or a cloud server. A cloud server, also called a cloud computing server or cloud host, is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability found in traditional physical hosts and virtual private server (VPS) services.
The server 120 may send the media asset to the client 112 in a streaming manner, with the client 112 playing the media asset while downloading the media stream. Alternatively, the server 120 may use progressive delivery, in which the media resource is sent to the client 112 and the client 112 plays it after downloading the entire resource. The server 120 may also transmit media assets to the client 112 in other manners; for example, the server 120 may provide media assets to the client 112 via a content delivery network (CDN).
In some implementations, the client 112 may request the video file from the server, and the server transmits the video file to the client 112, where the client 112 performs the playing processing of the video file by using the method provided by the embodiment of the present application.
In existing playback modes, if a user wants to find a highlight in a video file to watch, the user generally drags the video playback progress bar. On the one hand, searching for highlights by dragging the progress bar is inefficient: the user may need to drag the progress bar repeatedly to locate a highlight in the video file. On the other hand, limited by the positioning precision of the progress bar, highlights are easily missed when dragging. Both reduce video consumption efficiency and harm the user experience.
In view of the foregoing, an embodiment of the present application provides a method for processing a video file, in which highlights are presented to the user by displaying, in response to detecting a first event, at least one video frame of the video file representing the highlights on the display interface. Because the first event triggers the extraction and surfacing of the at least one video frame representing the highlights, the highlights can be shown to the user without the user repeatedly dragging the video playback progress bar, improving video consumption efficiency.
The highlights described above generally refer to portions of video content that are attractive, entertaining, educational, or otherwise of significant value. Such segments tend to excite the viewer's emotional resonance, provide a unique visual experience, or convey important information. For example, a highlight in a narrative video is typically a climactic part of the story, such as an action scene, an emotional outburst, or the instant key information is revealed. In a comedy or emotion-oriented video, a segment that makes the viewer laugh or feel moved tends to be regarded as a highlight. Some videos are known for exquisite pictures or unique, striking visual effects; in such videos the highlights are typically the parts with the highest visual impact. In sports, dance, magic, or other skill-oriented videos, highlights may be the instants that exhibit high skill or break through limits. In educational or science videos, highlights may be the parts that profoundly explain complex concepts or vividly present experimental results. And in some videos, the highlights are not planned in advance but are unexpected moments that bring surprise to the viewer. In some scenarios, highlights in a video file may also be referred to as high-energy clips, highlight moments, high-interaction frames, premium clips, and the like.
Video consumption refers, for example, to users watching various types of video files, such as movies, television series, short videos, documentaries, and variety shows, through terminal devices such as televisions, computers, and mobile phones, to meet needs such as entertainment, learning, and social interaction. Such behavior involves not only viewing content but also evaluating, sharing, and interacting with the content of video files. Video consumption efficiency refers, broadly, to the amount of information, entertainment experience, or learning effect the user obtains per unit time while watching video files; it reflects how well the content of the video files satisfies the user's needs and the user's input-output ratio while watching.
It will be appreciated that in the embodiment of the present application, the at least one video frame triggered to be displayed by the first event may be, in addition to a video frame for representing a highlight, a video frame for representing other types of video segments, for example, a video frame for a specific person segment or scene segment, a video frame meeting certain conditions, or a video frame having certain common characteristics, etc. In the following, the technical solution of the embodiment of the present application will be described by taking only the example that the at least one video frame is used to represent a highlight in a video file.
Hereinafter, a method for processing a video file according to an embodiment of the present application will be described in detail with reference to fig. 2 to 4.
By way of example, fig. 2 shows a schematic flow chart of a method of processing a video file according to an embodiment of the application. The method 200 shown in fig. 2 may be performed, for example, by a terminal device, such as the terminal device 110 shown in fig. 1. As a more specific example, the method 200 shown in fig. 2 may be performed by a client installed on a terminal device (e.g., may be an Application (APP), applet, or Web program running through a browser) that supports the playing of media assets. The steps of fig. 2 are described in detail below.
Referring to fig. 2, in steps 210 to 220, a play instruction for a video file is monitored, and, in response to detecting the play instruction, the video file is played on a display interface. The video file may be a long video or a short video; the distinction is mainly embodied in the duration, content, and production mode of the video. A long video is usually longer than several tens of minutes, with complete and in-depth content of relatively high quality; it usually requires professional shooting, editing, and sound teams working together, has a long production cycle, and demands more resources. Movies, television series, and documentaries, for example, are long videos. A short video usually lasts from tens of seconds to a few minutes; its content is usually short and refined, highlighting a single bright spot or conveying information quickly. Production is relatively simple, often shot with a mobile phone or other portable device and easy to edit afterwards; short videos focus on fast production and fast dissemination and emphasize timeliness. Funny clips, life-record videos, advertisement videos, and short dramas, for example, are short videos.
In addition, the video file may be a landscape (horizontal-screen) video or a portrait (vertical-screen) video. A landscape video is one whose playback frame is wider than it is tall; it better matches the natural field of view of the human eye and is more suitable for viewing on wide-screen devices (e.g., televisions, computers). A portrait video is one whose playback frame is taller than it is wide; it is more suitable for viewing on narrow-screen devices (e.g., smartphones, tablet computers) and is more popular in fields such as social media and short video. The technical solution of the embodiment of the application is described below taking a portrait video as an example.
With continued reference to fig. 2, during playback of the video file, a first event for the video file is monitored, the first event being used to trigger the display interface to display at least one video frame of the video file representing the highlights in the video file; in response to detecting the first event, a video frame display page for displaying the at least one video frame is displayed on the display interface. The video frame display page may be, for example, all or a partial area of the display interface.
Triggering the display interface, through the first event, to display at least one video frame representing the highlights in the video file is equivalent to extracting and surfacing those frames and presenting them to the user together. The user can thus quickly obtain the highlights in the video file without dragging the progress bar to search for them, which improves the efficiency of adjusting the video progress, makes highlights less likely to be missed, improves video consumption efficiency, and improves the user experience.
The highlights in the video file may be associated with, or determined based on, users' interaction data. The interaction data includes at least one of the number of praises obtained, the number of comments obtained, the number of shares, and the number of reviews. If, at the playing moment of a certain video frame, users give more praises or comments, or share or review that frame more often, the frame likely contains an especially exciting picture, a surprising instant, or a striking visual effect, making it more popular with users; such a frame can therefore be considered to include a highlight moment. Extracting and surfacing at least one video frame representing the highlights based on the interaction data makes the highlights presented to the user more consistent with the user's expectations.
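By way of example only, the following Kotlin sketch shows one plausible way to derive highlight moments from such interaction data; the application does not prescribe a formula, and all names and the scoring rule are illustrative assumptions.

```kotlin
// Illustrative only: score each candidate moment by its interaction counts.
data class InteractionStats(
    val timestampMs: Long,
    val praises: Int,   // number of praises (likes) obtained
    val comments: Int,  // number of comments obtained
    val shares: Int,    // number of times shared
    val reviews: Int,   // number of times reviewed
)

// Pick the timestamps of the top-N most-interacted moments, returned in chronological order.
fun selectHighlightMoments(stats: List<InteractionStats>, topN: Int = 3): List<Long> =
    stats.sortedByDescending { it.praises + it.comments + it.shares + it.reviews }
        .take(topN)
        .map { it.timestampMs }
        .sorted()
```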
The first event includes, for example, an event triggered by a first gesture (e.g., a two-finger pinch gesture), an event triggered by a first interface element, or an event triggered by a voice command (e.g., the voice command "enter highlights"). The two-finger pinch gesture may be a two-finger zoom-in gesture or a two-finger zoom-out gesture. Displaying at least one video frame of the video file on the display interface via an event triggered by the two-finger pinch gesture, the first interface element, or the voice command is, on the one hand, simple and fast, and on the other hand matches users' operating habits. Preferably, since the embodiment of the present application focuses on presenting video frames representing highlights, when the first event is an event triggered by a two-finger pinch gesture (e.g., a two-finger zoom-out gesture) on the video file, the gesture matches the process of "aggregating" the video frames and gives the user a sense of gathering them together, so the user experience is better.
As an example, referring to fig. 3 (a) and fig. 4 (a), if the user performs a two-finger zoom-out gesture on the display interface, a video frame display page as shown in fig. 3 (b) and fig. 4 (b) may be entered, on which at least one video frame representing highlights in the video file is displayed, including, for example, video frame A, video frame B, and video frame C. Thus, when the user wants to obtain the highlights in the video file, the user can quickly call them up for browsing with a two-finger zoom-out gesture on the display interface, without dragging the video playback progress bar.
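A minimal sketch of how such a gesture might be recognized is given below, assuming only that the raw positions of two pointers are available; the class, its callbacks, and the threshold are hypothetical and not part of the application.

```kotlin
import kotlin.math.hypot

// Hypothetical two-finger zoom-out detector based on the distance between two pointers.
class PinchDetector(
    private val onZoomOut: () -> Unit,      // fires the first event
    private val shrinkRatio: Float = 0.8f,  // illustrative threshold
) {
    private var startDistance = 0f

    // Called when two pointers touch down.
    fun onPointersDown(x1: Float, y1: Float, x2: Float, y2: Float) {
        startDistance = hypot(x2 - x1, y2 - y1)
    }

    // Called as the pointers move; a zoom-out gesture ends with the fingers much closer.
    fun onPointersMove(x1: Float, y1: Float, x2: Float, y2: Float) {
        if (startDistance <= 0f) return
        val distance = hypot(x2 - x1, y2 - y1)
        if (distance / startDistance < shrinkRatio) {
            onZoomOut()        // trigger display of the video frame display page
            startDistance = 0f // consume the gesture
        }
    }
}
```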
In some implementations, the at least one video frame may be displayed on the display interface in response to detecting a pause operation for the video file during playback. For example, if the user pauses playback of the video file, the user may want to browse the highlights, so at least one video frame representing the highlights may be displayed on the display interface for the user to select from, improving the user experience. As an example, the two-finger pinch gesture shown in fig. 3 (a) and fig. 4 (a) may be replaced by an operation that triggers pausing of the video file, such as a click on the video file or a click on a pause-related page element on the display interface; upon detecting the pause operation on the display interface, the video frame display page shown in fig. 3 (b) and fig. 4 (b) is entered, presenting at least one video frame representing the highlights.
The at least one video frame displayed in response to detecting the pause operation has a position on the video playback progress bar near the position corresponding to the pause time. A user pausing playback at a certain moment may indicate that there is a highlight the user wants to browse, so showing the user at least one video frame of highlights located near the pause time better matches the user's expectations. Here, "near the pause time" may mean, for example, that the time difference between the position of the at least one video frame on the progress bar and the position corresponding to the pause time is smaller than a preset value, or that the at least one video frame is the at least one frame in the video file closest to the pause position on the progress bar, including frames before and/or after the pause time.
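A sketch of this selection rule follows, with both branches from the paragraph above; the 30-second window stands in for the unspecified "preset value" and is purely an assumption.

```kotlin
import kotlin.math.abs

// Sketch: choose highlight moments near the pause time; maxDeltaMs is illustrative.
fun framesNearPause(
    highlightTimesMs: List<Long>,
    pauseTimeMs: Long,
    maxDeltaMs: Long = 30_000,
): List<Long> {
    val near = highlightTimesMs.filter { abs(it - pauseTimeMs) <= maxDeltaMs }
    if (near.isNotEmpty()) return near.sorted()
    // Fallback: the single closest moment, which may lie before or after the pause time.
    return listOfNotNull(highlightTimesMs.minByOrNull { abs(it - pauseTimeMs) })
}
```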
The embodiment of the application does not limit the arrangement mode of at least one video frame for representing the highlight on the display interface. As an example, at least one video frame is illustrated in a left-to-right arrangement in fig. 3 and 4. In addition, other arrangements may be used for display. For example, for a horizontal screen video, at least one video frame can be displayed in an arrangement mode from top to bottom, and for example, more video frames can be displayed to a user at the same time on a display interface in an array arrangement mode.
In some implementations, when at least one video frame is displayed on the display interface, the at least one video frame is arranged on the display interface in the order in which it appears in the video file. This is advantageous in maintaining consistency of video content, enabling a user to more intuitively determine the order of at least one video frame. In addition, from the technical point of view, arranging at least one video frame on the display interface based on the sequence of the at least one video frame in the video file is a relatively simple and direct implementation manner, does not need complex algorithm or logic to process the arrangement problem of the video frames, and is easier to realize on various playing platforms and devices.
With continued reference to fig. 3 (B) and fig. 4 (B), among the video frame a, the video frame B, and the video frame C displayed on the display interface, the playing time corresponding to the video frame a precedes the playing time corresponding to the video frame B, and the playing time corresponding to the video frame B precedes the playing time corresponding to the video frame C, that is, the order of appearance in the video file is the video frame a, the video frame B, and the video frame C in turn, so that the video frame a, the video frame B, and the video frame C are sequentially arranged on the display interface from left to right according to the order of appearance in the video file.
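The chronological arrangement, and the relative positions marked on the progress bar, can be computed very simply, in line with the "simple and direct implementation" noted above. The following sketch is illustrative only; its types and names are hypothetical.

```kotlin
// Sketch: order frames by appearance and derive their relative progress-bar positions.
data class DisplayFrame(val timestampMs: Long, val label: String)

// Left-to-right display order is simply chronological order (A, then B, then C).
fun arrangeLeftToRight(frames: List<DisplayFrame>): List<DisplayFrame> =
    frames.sortedBy { it.timestampMs }

// Relative positions in 0.0..1.0 for marking each frame on the progress bar.
fun progressBarPositions(frames: List<DisplayFrame>, durationMs: Long): List<Float> =
    arrangeLeftToRight(frames).map { it.timestampMs.toFloat() / durationMs }
```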
In some implementations, when at least one video frame of the video file representing the highlight is displayed on the display interface, other interface elements associated with the at least one video frame, such as a fifth interface element and/or a sixth interface element, may also be displayed. The fifth interface element is for indicating a position of at least one video frame in the video playback progress bar. The sixth interface element is used for indicating interaction information corresponding to at least one video frame. In this way, the user can browse at least one video frame for representing the highlight and acquire other information related to the video frame, such as the position of the video frame in the video playing progress bar and/or the popularity of the video frame, so that a better interactive experience is brought to the user.
The interaction information includes, for example, at least one of: a high-praise time and the number of praises corresponding to it, a high-comment time and the number of comments corresponding to it, a high-share time and the number of shares corresponding to it, and a high-review time and the number of reviews corresponding to it. As important indices for measuring the quality and influence of video content, praises, comments, shares, and reviews all reflect the popularity and user acceptance of the content; the playing moments corresponding to the more popular highlights in a video file often receive more praises, comments, and shares, or trigger more reviews. Reflecting the popularity of a video frame by its high-praise time, high-comment time, high-share time, or high-review time, or by the corresponding counts, is therefore simpler and more intuitive. The high-praise time, high-comment time, high-share time, and high-review time may also be expressed in other identical or similar terms, such as high-praise frame, high-comment frame, high-share frame, and high-review frame.
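By way of illustration, the interaction information carried by a sixth interface element could be modeled as follows; the enum, data class, and label texts are hypothetical sketches in the spirit of the examples in the figures.

```kotlin
// Sketch of the interaction information a sixth interface element might carry.
enum class InteractionKind { HIGH_PRAISE, HIGH_COMMENT, HIGH_SHARE, HIGH_REVIEW }

data class InteractionInfo(val kind: InteractionKind, val count: Int) {
    // Example label text, in the spirit of "432 users praised this moment".
    fun label(): String = when (kind) {
        InteractionKind.HIGH_PRAISE -> "$count users praised this moment"
        InteractionKind.HIGH_COMMENT -> "$count comments at this moment"
        InteractionKind.HIGH_SHARE -> "$count shares of this moment"
        InteractionKind.HIGH_REVIEW -> "$count reviews of this moment"
    }
}
```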
With continued reference to fig. 3 (b) and fig. 4 (b), when the display interface shown there is entered via the two-finger zoom-out gesture, at least one video frame representing highlights in the video file is displayed on the display interface, including, for example, video frame A, video frame B, and video frame C; a video playback progress bar is also displayed, with the relative positions corresponding to video frames A, B, and C marked on it. In addition, the display interfaces shown in fig. 3 (b) and fig. 4 (b) display the interaction information corresponding to video frames A, B, and C: the interaction information for video frames A and C is a "high-comment time", and that for video frame B is a "high-praise time". As an example, fig. 3 (b) and fig. 4 (b) also show the number of praises for video frame B, located in the middle of the display interface (e.g., "432 users praised this moment").
Here, as an example, on the display interfaces shown in fig. 3 (b) and fig. 4 (b), the sixth interface element indicating interaction information is located below video frames A, B, and C, and the fifth interface element indicating video frame playback progress is located below the sixth interface element. The bottommost navigation components of the display interface (e.g., "home page", "select", "message", "me") remain and need not be hidden when the first event (e.g., the two-finger pinch gesture) is triggered. Of course, to adapt to the screen sizes of different terminal devices, the positions of the at least one video frame, the fifth interface element, and the sixth interface element in the display interface may be adjusted accordingly.
In addition, in some cases, while the video file is playing on the display interface, some users may not know that the aggregated presentation of highlights can be triggered by the first event, or how to perform the first event. In view of this, in some implementations, a guide animation may be presented on the display interface in response to a preset trigger condition being met while the video file is playing; the guide animation is used to prompt the content and effect of the first event. The embodiment of the present application does not limit the specific form of the guide animation, as long as it can prompt the content of the first event (e.g., what the first event is) and the effect of the first event (e.g., what function or effect the first event can achieve).
The preset triggering conditions may include, but are not limited to, the following:
the first trigger condition is a pause operation for the video file and/or an operation for dragging the video playing progress bar.
When the user pauses playback or drags the video playback progress bar, the user may want to browse the highlights. Playing the guide animation on the display interface at this time promptly prompts the content and/or effect of the first event, so the user learns that, instead of dragging the progress bar, the video frames representing the highlights can be obtained quickly through the first event.
And the second triggering condition is that the currently played media file is the first video played after the client is cold started.
A cold start of the client means that, when the client has been completely closed or the terminal device has no process corresponding to the client, reopening the client triggers the cold start process: the client's process is re-created, necessary resources are loaded, and operations such as page rendering, initializing other functional modules, and reading configuration files are performed. If the client starts playing a video immediately after a cold start, the guide animation can be played on the display interface to promptly inform the user of the content and effect of the first event.
And the third trigger condition is that the time interval from the last playing of the guide animation meets the preset interval requirement.
If the guide animation has been played before, the user may nevertheless forget how to trigger the first event after a long time. In the embodiment of the present application, the guide animation may therefore be played again once the time since it was last played meets a preset interval requirement, for example more than 7 days.
And the fourth triggering condition is that the occurrence frequency of the first event is smaller than or equal to a preset frequency threshold value.
If the user hardly ever triggers the first event, for example if it has occurred fewer than 2 times, the user likely does not know how to perform the first event or what it does; the guide animation can then be played on the display interface to promptly inform the user of its content and effect.
It should be noted that the above trigger conditions may be combined into new trigger conditions. For example, a trigger condition may be that the user drags the video playback progress bar, the number of occurrences of the first event is less than or equal to a preset threshold, and the time since the guide animation was last played meets the preset interval requirement. As another example, a trigger condition may be that the currently played media file is the first landscape video played on the display interface after a cold start of the client, the number of occurrences of the first event is less than or equal to a preset threshold, and the time since the guide animation was last played meets the preset interval requirement.
Furthermore, limits may be placed on the total number of plays, the play duration, the conditions for cancelling play, the conditions for prohibiting play, and the like of the guide animation for each user, so that frequent or prolonged play of the guide animation does not degrade the user's normal viewing experience. For example, the total number of plays may be capped at 4, i.e., the guide animation is no longer played after it has been shown 4 times. For example, the guide animation may be displayed on the display interface for a first preset duration (e.g., 7 seconds) and, if no first event on the display interface is detected, its display is cancelled. For example, if the user has triggered the first event more than 3 times, the user can be considered to know the content and effect of the first event, and further play of the guide animation is prohibited.
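A sketch of this gating logic is given below. The thresholds mirror the examples in the text; for simplicity the trigger conditions are checked independently here, whereas in practice they may be combined as described above, and all names are hypothetical.

```kotlin
// Sketch: gate the guide animation on the trigger conditions and limits above.
data class GuideAnimationState(
    val pausedOrDraggedProgressBar: Boolean, // trigger condition 1
    val firstVideoAfterColdStart: Boolean,   // trigger condition 2
    val daysSinceLastGuide: Int,             // trigger condition 3
    val firstEventCount: Int,                // trigger condition 4
    val guideShownCount: Int,
)

fun shouldPlayGuideAnimation(s: GuideAnimationState): Boolean {
    if (s.guideShownCount >= 4) return false // cap on total plays
    if (s.firstEventCount > 3) return false  // user already knows the first event
    return s.pausedOrDraggedProgressBar ||
        s.firstVideoAfterColdStart ||
        s.daysSinceLastGuide > 7 ||
        s.firstEventCount < 2
}
```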
In some implementations, switching between the at least one video frame may be performed in response to detecting a seventh event while the at least one video frame is displayed on the display interface. In this way, the user can easily view the previous or next video frame without leaving the current interface, maintaining the continuity and smoothness of browsing.
The seventh event includes, for example, an event triggered by a single-finger swipe gesture in a preset direction, or an event triggered by a fourth interface element. The preset direction may be the arrangement direction of the at least one video frame; for example, if the at least one video frame is arranged horizontally on the display interface, the single-finger swipe gesture may include a leftward swipe and a rightward swipe. The fourth interface element may be an arrow indicating the preset direction; for example, it may include an arrow pointing left and an arrow pointing right. Realizing the switching between the at least one video frame in this way lets the user browse the video frames simply and efficiently, improves the efficiency of browsing highlights in the video file, and increases the interactivity between the user and the video frame content, so the user can actively participate in the browsing process and the interactive experience is improved.
With continued reference to fig. 3 (b) and fig. 4 (b), if the user performs a single-finger swipe gesture to the left on the display interface, the display interface shown in fig. 3 (c) and fig. 4 (c) is entered, and the position where video frame B was originally displayed now shows video frame C, which follows video frame B. Similarly, if the user performs a single-finger swipe gesture to the right, the position where video frame B was originally displayed will show video frame A, which precedes video frame B.
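The stepping behavior can be sketched as follows, assuming a simple centered-index model; the class and its callbacks are hypothetical.

```kotlin
// Sketch: the seventh event — a horizontal single-finger swipe steps through the frames.
class FrameBrowser(private val frameCount: Int) {
    var centerIndex = 0
        private set

    fun onSwipeLeft() {  // reveal the next frame, e.g. B -> C
        if (centerIndex < frameCount - 1) centerIndex++
    }

    fun onSwipeRight() { // reveal the previous frame, e.g. B -> A
        if (centerIndex > 0) centerIndex--
    }
}
```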
In some implementations, a target video frame of the at least one video frame may be highlighted on the display interface in response to detecting an eighth event while the at least one video frame is displayed. The target video frame may be, for example, a frame the user is more interested in: the user may switch among the at least one video frame via the aforementioned seventh event (e.g., an event triggered by the single-finger swipe gesture) and, during the switching, select the target video frame of interest via the eighth event, whereupon the target video frame is highlighted on the display interface. Highlighting the target video frame provides more visual feedback, helps the user confirm that the frame of interest has been accurately selected, and improves the accuracy and efficiency of the operation.
The eighth event includes, for example, the target video frame being located at an intermediate position of the display interface or sliding to the target video frame based on a sliding operation between at least one video frame. For example, the eighth event may be sliding the target video frame to an intermediate position of the display interface through a sliding operation. The selection of the target video frame is represented in the mode, so that the target video frame is highlighted, on one hand, the selection is simple and quick, and on the other hand, the cognition and the operation habit of a user on operation gestures are more met.
With continued reference to fig. 3 (C) and fig. 4 (C), assuming that the target video frame is video frame C, the user may switch the video frame displayed on the display interface from video frame B to video frame C through a single-finger swipe gesture, or, slide video frame C to an intermediate position of the display interface through a single-finger swipe gesture. At this time, the video frame C is highlighted in the display interface, for example, a border is displayed around the video frame C to identify the selected video frame C, and no borders are displayed around the video frame B and the video frame D. Meanwhile, the video frame C may be highlighted with higher display brightness than the video frame B and the video frame D, or the display brightness of the video frame B and the video frame D may be reduced.
Instead of highlighting video frame C by adding a border and increasing the brightness as shown in fig. 3 (c) and fig. 4 (c), video frame C may be highlighted in other ways. For example, video frame C may be displayed at a larger picture size than video frames B and D; other visual guide elements or text labels outside the border may point to video frame C; or mask layers may be added to video frames B and D to make them blurred, semi-transparent, or completely hidden, thereby highlighting video frame C, and so on.
Note that, if the seventh event for triggering video frame switching is a single-finger swipe gesture, the eighth event for triggering video frame highlighting may also be regarded as triggered by the release of that gesture. For example, when the target video frame has been switched to the middle position of the display interface by the single-finger swipe gesture, the gesture stops and the finger leaves the display interface; when the finger leaves the display interface, the target video frame is triggered to be highlighted.
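Combining the two events, a release handler might look like the hypothetical sketch below, where the highlight and dim callbacks stand in for the visual treatments described above (border, brightness, mask layer).

```kotlin
// Sketch: the eighth event — releasing the swipe highlights whichever frame is centered.
class FrameHighlighter(
    private val highlightAt: (index: Int) -> Unit, // e.g. draw a border, raise brightness
    private val dimAt: (index: Int) -> Unit,       // e.g. lower brightness or add a mask
) {
    // Called when the single finger leaves the display interface.
    fun onSwipeReleased(frameCount: Int, centerIndex: Int) {
        for (i in 0 until frameCount) {
            if (i == centerIndex) highlightAt(i) else dimAt(i)
        }
    }
}
```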
It will be appreciated that the display interfaces shown in fig. 3 (a) and fig. 4 (a) display the play page of the video file, while the display interfaces shown in fig. 3 (b) and (c) and fig. 4 (b) and (c) display the screening page of video frames. In response to the first event on the play page, the client may cancel the display of the play page, display the screening page, and, through a seventh event on the screening page (e.g., an event triggered by a single-finger swipe gesture), switch browsing among the at least one video frame, thereby screening out and highlighting the target video frame.
In other implementations, in response to monitoring the first event, the client may also present the play page of the video file and the screening page of video frames to the user simultaneously. As an example, referring to (a) in fig. 5, in response to a first event (e.g., an event triggered by a two-finger pinch gesture) on the play page, the display interface shown in (b) in fig. 5 is entered, in which the play page is shrunk and placed in the upper part of the display interface while a screening page of video frames pops up in the lower part. The screening page includes, for example, the at least one video frame for representing highlights, a fifth interface element for indicating the play progress of the video frames, and a sixth interface element for indicating interaction information of the video frames.
The video content played in the play page of the video file is linked with the video frame switching operation in the screening page; for example, when a certain video frame is switched to the middle position of the screening page, the content played in the play page correspondingly switches to the highlight corresponding to that video frame. As an example, referring to (b) and (c) in fig. 5, after the user switches from video frame B to video frame C through a single-finger swipe gesture, the content played in the play page correspondingly switches from the highlight corresponding to video frame B to the highlight corresponding to video frame C.
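As a non-authoritative sketch of the linkage just described, the following plain-Kotlin example models the play page seeking to the start of the highlight that corresponds to whichever frame the screening page has centered; the Highlight and LinkedPlayer names, and the millisecond timestamps, are illustrative assumptions:

```kotlin
// Illustrative sketch: when the screening page centers a different video
// frame, the play page seeks to the start of the corresponding highlight.
data class Highlight(val label: String, val startMs: Long)

class LinkedPlayer(private val highlights: List<Highlight>) {
    var positionMs: Long = 0
        private set

    // Called by the screening page whenever a new frame reaches the center.
    fun onCenteredFrameChanged(index: Int) {
        positionMs = highlights[index].startMs
        println("play page seeks to highlight ${highlights[index].label} at ${positionMs}ms")
    }
}

fun main() {
    val player = LinkedPlayer(listOf(Highlight("B", 42_000), Highlight("C", 97_500)))
    player.onCenteredFrameChanged(0) // frame B centered on the screening page
    player.onCenteredFrameChanged(1) // single-finger swipe centers frame C
}
```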
In the embodiment of the present application, after the user selects the target video frame from the at least one video frame representing highlights in the manner described above, other operations may also be performed based on the target video frame. That is, the method may further include, in response to monitoring a trigger event for a particular interactive operation, performing the particular interactive operation based on the target video frame. For example, the particular interactive operation may include skip play, editing, commenting, co-shooting, or another interactive operation for the target video frame. These operations are described below in turn.
First operation: skip play based on the target video frame.
In some implementations, a highlight corresponding to a target video frame of the at least one video frame may be played on the display interface in response to a fourth event being monitored while the at least one video frame is displayed on the display interface. That is, when the at least one video frame is displayed on the display interface, in response to monitoring the fourth event, playback jumps to the highlight corresponding to the target video frame. In this way, while the user can quickly obtain each highlight in the video file, skip play can also be performed based on the highlight the user is interested in, further improving video consumption efficiency.
In the embodiment of the application, playing a video file refers to converting the content of the video file into images, sounds, or a combination of the two that humans can perceive directly, and skip play refers to jumping to a specific time point in the video and starting playback there. For example, when skip play is performed based on the target video frame, the specific time point may be the play time corresponding to the target video frame, the starting play time of the highlight corresponding to the target video frame, some predetermined time associated with the target video frame, or the like.
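The choice among these candidate time points can be modeled explicitly. The following minimal sketch, with entirely hypothetical names (TargetFrame, SeekStrategy, resolveSeekMs), enumerates the three strategies named above; the disclosure does not mandate any particular one:

```kotlin
// Illustrative sketch: three candidate strategies, per the description, for
// choosing the jump-play position associated with a target video frame.
data class TargetFrame(
    val frameTimeMs: Long,         // play time of the frame itself
    val highlightStartMs: Long,    // start of the corresponding highlight
    val associatedTimeMs: Long? = null // some predetermined associated time
)

enum class SeekStrategy { FRAME_TIME, HIGHLIGHT_START, ASSOCIATED_TIME }

fun resolveSeekMs(f: TargetFrame, s: SeekStrategy): Long = when (s) {
    SeekStrategy.FRAME_TIME      -> f.frameTimeMs
    SeekStrategy.HIGHLIGHT_START -> f.highlightStartMs
    SeekStrategy.ASSOCIATED_TIME -> f.associatedTimeMs ?: f.highlightStartMs
}

fun main() {
    val f = TargetFrame(frameTimeMs = 95_000, highlightStartMs = 92_000)
    println(resolveSeekMs(f, SeekStrategy.HIGHLIGHT_START)) // 92000
}
```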
The fourth event includes, for example, an event triggered by a click operation on the target video frame and/or an event triggered by a sliding operation between the at least one video frame. Playing the highlight corresponding to the target video frame by clicking the target video frame among the at least one video frame and/or sliding between the at least one video frame on the display interface is, on the one hand, simple and quick to operate and, on the other hand, matches users' understanding of, and habits with, operation gestures.
With continued reference to fig. 3, if the user wishes to view the highlight corresponding to video frame C, the user may switch video frame C to the middle position of the display interface through the single-finger swipe gesture shown in (b) of fig. 3, and then click video frame C as shown in (c) of fig. 3, thereby entering the display interface shown in (d) of fig. 3, in which the highlight corresponding to video frame C starts playing.
Second operation: editing of the target video frame.
In some implementations, an editing page can be displayed on the display interface in response to monitoring a fifth event while the at least one video frame is displayed on the display interface. The editing page is used for editing a target video frame of the at least one video frame. Invoking the editing page through the fifth event allows the user to edit the target video frame within the editing page, which meets users' diversified needs and improves their participation and creation experience during video consumption.
Here, the editing may include secondary creation (hereinafter abbreviated as re-creation), i.e., a creation process of reprocessing, editing, synthesizing, or adapting the video frame without changing its basic content. This process aims to create a video work with new meaning, a new viewing angle, or a new form of expression by rearranging, combining, modifying, or enhancing video frames.
The fifth event may be an event triggered by a seventh interface element; for example, when the at least one video frame of the video file is displayed on the display interface, a seventh interface element for triggering the fifth event is displayed. Realizing editing of the target video frame through an interface element that triggers the fifth event presents the editing function more intuitively and lets the user enter the editing page simply and quickly.
With continued reference to fig. 4 (c), the seventh interface element (e.g., a control labeled for re-creation) may be located within the interaction area on the right side of the display interface. If the user clicks the seventh interface element, the editing page shown in fig. 4 (d) is entered, and the user may re-create a target video frame of the at least one video frame using the relevant interface elements within the editing page.
As an example, as shown in (d) of fig. 4, the editing page may include page elements such as flip, countdown, challenge, status, and beautify in the right-side field, page elements such as photograph, text, and magic at the bottom, and page elements such as music at the top, to implement the corresponding editing functions. The flip function flips the target video frame horizontally or vertically. The countdown function adds a time-decreasing visual effect to the video, creating a tense or expectant atmosphere that attracts viewers' attention. The challenge function lets users participate in specific challenge tasks, such as imitating an action, completing a task, or creating video content around a specific theme. The status function lets users show satisfaction with, enjoyment of, or a positive attitude toward the video content, or attract the attention and interest of other users. The beautify function visually optimizes and retouches the video to improve its look and quality, including adjusting color parameters of the video frame such as brightness, contrast, and saturation, applying filter effects, and performing advanced processing such as face beautification and background blurring. The magic function adds special visual effects (e.g., fanciful, interesting, or unique effects) to the video frame or changes how the video is presented. The text function adds text descriptions to the video frame. The photograph function adds extra pictures or photos to the video file. In addition, the music element at the top of the editing page allows suitable background music to be added to the newly created video file.
After performing all or some of the above operations on the target video frame within the editing page, a newly created video file may be formed and published on the platform, for example as shown in (e) of fig. 4.
Third operation: interactive operations on the target video frame.
In some implementations, an interactive operation may be triggered for a target video frame of the at least one video frame in response to monitoring a sixth event while the at least one video frame is displayed on the display interface. The interactive operation may include, for example, at least one of downloading, praising (liking), commenting, sharing, and collecting. Realizing these operations on the target video frame through the sixth event meets users' diversified needs and improves their participation and interaction experience during video consumption.
The sixth event may be an event triggered by an eighth interface element; for example, when the at least one video frame is displayed on the display interface, an eighth interface element for triggering the sixth event is displayed. Realizing operations such as downloading, praising, commenting, sharing, and collecting the target video frame through an interface element that triggers the sixth event presents the interaction functions more intuitively and lets the user perform the interactive operations simply and quickly.
With continued reference to fig. 4 (c), the eighth interface element may be located in the interaction area on the right side of the display interface and include, for example, interface elements corresponding to the downloading, praising, commenting, and sharing functions. If the user clicks the corresponding interface element, the corresponding downloading, praising, commenting, or sharing function is executed.
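A minimal sketch of how such an interaction area might dispatch to the individual operations is given below; the enum values follow the operations named in the text, while the dispatch mechanism itself is an assumption, not the disclosed implementation:

```kotlin
// Illustrative sketch: mapping clicks in the right-side interaction area to
// operations on the target video frame. All identifiers are hypothetical.
enum class InteractiveOp { DOWNLOAD, PRAISE, COMMENT, SHARE, COLLECT }

fun onEighthInterfaceElementClicked(op: InteractiveOp, targetFrameId: String) {
    when (op) {
        InteractiveOp.DOWNLOAD -> println("downloading frame $targetFrameId")
        InteractiveOp.PRAISE   -> println("praising (liking) frame $targetFrameId")
        InteractiveOp.COMMENT  -> println("opening comment page for $targetFrameId")
        InteractiveOp.SHARE    -> println("sharing frame $targetFrameId")
        InteractiveOp.COLLECT  -> println("adding frame $targetFrameId to favorites")
    }
}

fun main() = onEighthInterfaceElementClicked(InteractiveOp.PRAISE, "frameC")
```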
Fourth operation: using the target video frame as a comment drawing.
In some implementations, when the at least one video frame is displayed on the display interface, a comment page for the video file may be displayed on the display interface in response to monitoring a second event, with a target video frame of the at least one video frame inserted in the comment page. Then, in response to monitoring a posting operation on the comment page, comment content with the target video frame as a comment drawing is published. That is, through the second event, the user can comment directly with the target video frame of the at least one video frame, which is presented as a comment drawing in the comment page. Quickly inserting the target video frame into the comment page through the second event provides the user with a pre-populated comment page, which enriches comment operations, improves comment efficiency, and improves the user's interaction experience.
The second event may include, for example, an event triggered by a click operation on the target video frame, an event triggered by a sliding operation between the at least one video frame, or an event triggered by a second interface element. Quickly inserting the target video frame into the comment page as a comment drawing through these operations is, on the one hand, simple and quick and, on the other hand, matches users' understanding of, and habits with, operation gestures.
As an example, if the user wants to comment with video frame C of the at least one video frame representing highlights as a comment drawing, the user can switch video frame C to the middle position of the display interface through a single-finger swipe gesture and click video frame C (see, for example, the operations shown in (b) and (c) of fig. 3), thereby entering the comment page shown in fig. 6. Alternatively, a second interface element may be displayed while the at least one video frame is displayed on the display interface (for example, the text "comment" displayed in the interaction area on the right side of the display interface), and the comment page shown in fig. 6 is entered if the user clicks the second interface element (see, for example, the operations shown in (b) and (c) of fig. 4). As shown in fig. 6, when the comment page is displayed in response to the above operations, video frame C is automatically inserted into it; the user does not need to click a corresponding interface element of the comment page and pick a picture, as in the conventional way of inserting a comment picture. The user can then enter the desired text in the text input area of the comment page to further enrich the comment content, and publish the comment, thereby posting a comment carrying video frame C.
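A possible client-side model of this pre-population behavior is sketched below, assuming hypothetical names (CommentDraft, openCommentPage, publish); the point it illustrates is only that the target frame is attached before the user types anything:

```kotlin
// Illustrative sketch: building the comment page state with the target video
// frame pre-inserted as the comment drawing, so the user skips manual picking.
data class CommentDraft(val attachedFrameId: String?, val text: String)

fun openCommentPage(targetFrameId: String): CommentDraft =
    CommentDraft(attachedFrameId = targetFrameId, text = "") // frame auto-inserted

fun publish(draft: CommentDraft, userText: String): CommentDraft =
    draft.copy(text = userText) // posting keeps the pre-attached frame

fun main() {
    val draft = openCommentPage("frameC")
    println(publish(draft, "great moment!"))
    // CommentDraft(attachedFrameId=frameC, text=great moment!)
}
```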
Fifth operation: co-shooting based on the target video frame.
Co-shooting (also known as a duet) is a function that enables users to interact with other people's video content and jointly create interesting videos; it enriches users' creation experience and promotes interaction and communication on social media platforms.
In some implementations, a co-shooting page for the co-shooting operation may be displayed on the display interface in response to monitoring a third event while the at least one video frame is displayed on the display interface, the co-shooting page taking a target video frame of the at least one video frame as the co-shooting background. Then, optionally, in response to monitoring a recording operation for the co-shot video, the co-shot video is recorded with the target video frame as the co-shooting background, and in response to monitoring a publishing operation for the co-shot video, the co-shot video is published. That is, through the third event, the user may co-shoot with a target video frame of the at least one video frame. For example, the interface of the co-shooting operation may include two areas, one displaying the target video frame and the other being the camera (shot) area, and the user may adjust the ratio between the target-frame area and the shot area by dragging the dividing line between them. The two areas may be laid out, for example, as a left-right split screen or an up-down split screen. Displaying, through the third event, a co-shooting page with the target video frame as its background quickly provides the user with a co-shooting page and thus a creation basis built on the target video frame, so that the user can co-shoot based on the target video frame, which improves the user's interaction experience and simplifies the user's operations.
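A minimal sketch of the adjustable split between the two areas is shown below; the clamping bounds are assumptions, since the text only states that the ratio can be adjusted by dragging the dividing line:

```kotlin
// Illustrative sketch: adjusting the split between the target-frame area and
// the camera (shot) area as the user drags the dividing line. The 0.2..0.8
// bounds are assumptions; the disclosure only says the ratio is adjustable.
fun splitRatioAfterDrag(currentRatio: Float, dragDeltaFraction: Float): Float =
    (currentRatio + dragDeltaFraction).coerceIn(0.2f, 0.8f)

fun main() {
    var ratio = 0.5f                          // left-right or up-down split
    ratio = splitRatioAfterDrag(ratio, 0.15f) // user drags the dividing line
    println("target-frame area now occupies ${(ratio * 100).toInt()}% of the page")
}
```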
The third event may include, for example, an event triggered by a click operation on the target video frame, an event triggered by a sliding operation between the at least one video frame, or an event triggered by a third interface element. Providing the user with a co-shooting page that takes the target video frame as its background through these operations is, on the one hand, simple and quick and, on the other hand, matches users' understanding of, and habits with, operation gestures.
The above describes how the user, after selecting a target video frame from the at least one video frame representing highlights, can perform related operations based on it, such as skip play, editing, commenting, co-shooting, or other interactive operations. In some cases, however, after the user triggers the display of the at least one video frame through the first event, the user may fail to find a target video frame of interest and instead wish to exit the video frame screening process and continue watching the video file. The user may then trigger a ninth event; in response to monitoring the ninth event, the client cancels the display of the at least one video frame and switches back to continue playing the video file on the display interface.
For example, the ninth event may include, but is not limited to, events triggered by the following gestures:
First gesture: a click operation on an area where no video frame is displayed.
For example, if the user clicks any area other than the video frames, the display of the at least one video frame is canceled and the display interface switches back to continue playing the video file.
Second gesture: a two-finger zoom gesture opposite to the two-finger zoom gesture employed by the first event.
For example, if the first event employs a two-finger zoom-out (pinch) gesture to trigger the display of the at least one video frame on the display interface, a two-finger zoom-in gesture may be employed to exit the display of the at least one video frame and switch back to continue playing the video file.
Third gesture: a gesture triggering a ninth interface element on the display interface.
The display interface may present a ninth interface element while displaying the at least one video frame, and the user exits the display of the at least one video frame and switches back to continue playing the video file by clicking the ninth interface element (e.g., a control labeled "exit" or "x").
In addition, in order to facilitate the user's attention to the ninth interface element, the ninth interface element may be highlighted on the display interface, for example, by displaying a corresponding text, enlarging the display, using a specific color, or the like.
Further, to reduce the impact on the user's viewing experience as much as possible, the highlighting of the ninth interface element may be canceled when its duration reaches a second preset duration, that is, when the time for which the client has kept the element highlighted reaches the second preset duration.
In addition, the number of times highlighting is shown, the conditions for disabling it, and the like may be limited. For example, the highlighting may no longer be shown after it has appeared 3 times; as another example, highlighting of the ninth interface element may be disabled once the user has triggered that element more than 3 times.
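These limits could be tracked with a small policy object, as in the following illustrative sketch; the thresholds follow the text's example of 3 times, while the names and the concrete duration value are assumptions:

```kotlin
// Illustrative sketch: limiting how often the ninth interface element is
// highlighted. Counters and thresholds follow the text's example of 3 times;
// the concrete duration and all identifiers are assumptions.
class ExitHintPolicy(
    private val maxShows: Int = 3,
    private val maxUserTriggers: Int = 3,
    private val highlightDurationMs: Long = 3_000 // "second preset duration"
) {
    private var shows = 0
    private var userTriggers = 0

    fun shouldHighlight(): Boolean =
        shows < maxShows && userTriggers <= maxUserTriggers

    fun onHighlightShown() { shows++ }         // auto-cancel after the duration
    fun onUserTriggered() { userTriggers++ }   // user tapped "exit" / "x"
    fun highlightTimeoutMs(): Long = highlightDurationMs
}
```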
Fourth trigger: an interface jump.
The interface jump may be a jump to another interface of the client, such as returning to the desktop, exiting the client, or switching to display another application.
For example, after the user triggers a return to the desktop, the display of the at least one video frame is exited and the display interface switches back to continue playing the video file. When the user enters the display interface again, playback continues from the last play position; if the at least one video frame representing highlights needs to be displayed again, the first event must be triggered again.
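A sketch of remembering the last play position across such an interface jump might look as follows; the store and its keying by video identifier are assumptions for illustration only:

```kotlin
// Illustrative sketch: remembering the last play position across an interface
// jump (e.g. returning to the desktop) so playback resumes where it stopped.
object PlaybackResumeStore {
    private val lastPositions = mutableMapOf<String, Long>() // videoId -> ms

    fun onInterfaceJump(videoId: String, positionMs: Long) {
        lastPositions[videoId] = positionMs // frame page is dismissed here too
    }

    fun resumePositionFor(videoId: String): Long = lastPositions[videoId] ?: 0L
}

fun main() {
    PlaybackResumeStore.onInterfaceJump("video1", 63_000)
    println(PlaybackResumeStore.resumePositionFor("video1")) // 63000
}
```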
It should be noted that, the foregoing describes a plurality of embodiments, and that, without conflict, these embodiments may be implemented alone or in combination with each other.
The limitations of "first", "second", "third", etc. in the embodiments of the present application are not limited in size, order, number, etc. and are merely used for distinguishing between them by name. For example, "first event" and "second event" are used to distinguish the two events by name. For example, "first interface element," "second interface element," "third interface element," etc. are used to distinguish between different interface elements in terms of name.
It should also be noted that, in order to be able to detect the occurrence of each event, the client may register in advance, for the event to be monitored, a callback function or method that includes logic code that needs to be executed in response to the event. Once the event is triggered, the client automatically invokes the corresponding callback function or method. For example, for a first event, a callback function is registered in advance, which triggers the client to display at least one video frame in the video file for representing the highlight in the display interface. The response to the event in the embodiments of the present application may be implemented in this manner, which will not be described in detail later.
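As a rough, non-authoritative illustration of this pre-registered callback mechanism, the following plain-Kotlin sketch registers a handler for a named event and invokes it when the event fires; the event names and all identifiers are hypothetical:

```kotlin
// Illustrative sketch of the pre-registered callback mechanism described
// above: the client registers a handler per event type, and dispatch invokes
// it when the event is triggered. Names are assumptions, not the disclosure.
typealias Callback = () -> Unit

object EventBus {
    private val handlers = mutableMapOf<String, MutableList<Callback>>()

    fun register(event: String, cb: Callback) {
        handlers.getOrPut(event) { mutableListOf() }.add(cb)
    }

    fun fire(event: String) = handlers[event].orEmpty().forEach { it() }
}

fun main() {
    // e.g. the first event triggers display of the highlight video frames
    EventBus.register("firstEvent") { println("show video frame display page") }
    EventBus.fire("firstEvent")
}
```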
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the scope of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used in the present application merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" covers the three cases where A exists alone, A and B both exist, and B exists alone. In the present application, the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The term "if" as used herein may be interpreted as "at..once" or "when..once" or "in response to" depending on the context. Similarly, the phrase "if determined" or "if detected (stated condition or event)" may be interpreted as "when determined" or "in response to determination" or "when detected (stated condition or event)" or "in response to detection (stated condition or event), depending on the context.
It should be further noted that, in the embodiments of the present application, the interface element generally refers to a visual element on the display interface, and may include a text, an icon, an image, a video, a component, and so on. By component is meant an object that results from simple packaging of data and methods, which may also be referred to as a control, including, for example, buttons, input boxes, drop-down menus, dialog boxes, navigation bars, cards, interactive icons, and the like.
The method embodiments of the present application are described above in detail with reference to fig. 1 to 6, and the apparatus embodiments of the present application are described below in detail with reference to fig. 7 to 8. It is to be understood that the description of the method embodiments corresponds to the description of the device embodiments, and that parts not described in detail can therefore be seen in the preceding method embodiments.
Fig. 7 is a schematic structural diagram of an apparatus for processing video files according to an embodiment of the present application. The apparatus shown in fig. 7 may be provided in the client 112 shown in fig. 1, for example. As shown in fig. 7, the apparatus 700 includes an interface presentation unit 710 and an event monitoring unit 720. Wherein the event monitoring unit 720 is configured to monitor a play instruction of a video file, the interface presentation unit 710 is configured to play the video file on a display interface in response to monitoring the play instruction, the event monitoring unit 720 is further configured to monitor a first event for the video file during play of the video file, the first event being used to trigger the display interface to display at least one video frame, the at least one video frame being used to represent a highlight in the video file, and the interface presentation unit 710 is further configured to display a video frame display page on the display interface in response to monitoring the first event, wherein the video frame display page is used to display the at least one video frame.
In some implementations, the first event includes a two-finger pinch gesture triggered event, a first interface element triggered event, or a voice command triggered event.
In some implementations, the interface presentation unit 710 is further configured to, when the at least one video frame is displayed on the display interface, display a comment page for the video file on the display interface in response to detecting a second event, where a target video frame of the at least one video frame is inserted in the comment page, and issue comment content with the target video frame as a comment drawing in response to detecting a posting operation on the comment page.
In some implementations, the second event includes an event triggered by a click operation for the target video frame, an event triggered by a sliding operation between the at least one video frame, or an event triggered by a second interface element.
In some implementations, the interface presentation unit 710 is further configured to, when the at least one video frame is displayed on the display interface, display a co-shooting page for a co-shooting operation on the display interface in response to monitoring a third event, the co-shooting page taking a target video frame of the at least one video frame as the co-shooting background, record the co-shot video with the target video frame as the co-shooting background in response to monitoring a recording operation for the co-shot video, and publish the co-shot video in response to monitoring a publishing operation for the co-shot video.
In some implementations, the third event includes an event triggered by a click operation for the target video frame, an event triggered by a slide operation between the at least one video frame, or an event triggered by a third interface element.
In some implementations, the interface presentation unit 710 is further configured to, when the at least one video frame is displayed on the display interface, play, in response to detecting the fourth event, a highlight corresponding to a target video frame in the at least one video frame on the display interface.
In some implementations, the fourth event includes an event triggered by a click operation for the target video frame and/or an event triggered by a slide operation between the at least one video frame.
In some implementations, the interface presentation unit 710 is further configured to, when displaying the at least one video frame on the display interface, display an edit page on the display interface in response to detecting a fifth event, the edit page being configured to edit a target video frame of the at least one video frame.
In some implementations, the interface presentation unit 710 is further configured to display an interface element for triggering the fifth event when the at least one video frame is displayed on the display interface.
In some implementations, the interface presentation unit 710 is further configured to trigger, when the at least one video frame is displayed on the display interface, at least one of downloading, praising, commenting, sharing, and collecting for a target video frame of the at least one video frame in response to monitoring a sixth event.
In some implementations, the interface presentation unit 710 is further configured to display an interface element for triggering the sixth event when the at least one video frame is displayed on the display interface.
In some implementations, when the at least one video frame is displayed on the display interface, the at least one video frame is arranged on the display interface according to the sequence of appearance of the at least one video frame in the video file.
In some implementations, the interface presentation unit 710 is further configured to switch between the at least one video frame in response to detecting a seventh event while the at least one video frame is displayed on the display interface.
In some implementations, the seventh event includes an event triggered by a single-finger swipe gesture in a preset direction or an event triggered by a fourth interface element.
In some implementations, the interface presentation unit 710 is further configured to display a fifth interface element when the at least one video frame is displayed on the display interface, the fifth interface element being configured to indicate a position of the at least one video frame in a video playback progress bar.
In some implementations, the interface presentation unit 710 is further configured to display a sixth interface element when the at least one video frame is displayed on the display interface, where the sixth interface element is used to indicate interaction information corresponding to the at least one video frame.
In some implementations, the interaction information includes at least one of a high praise time, a number of praises corresponding to the high praise time, a high comment time, a number of comments corresponding to the high comment time, a high share time, a number of shares corresponding to the high share time, a high review time, and a review count corresponding to the high review time.
In some implementations, the interface presentation unit 710 is further configured to, while the at least one video frame is displayed on the display interface, highlight a target video frame of the at least one video frame on the display interface in response to detecting an eighth event.
In some implementations, the eighth event includes the target video frame being in a middle position of the display interface or sliding to the target video frame based on a sliding operation between the at least one video frame.
In some implementations, the highlight in the video file is associated with interactive data for the user, the interactive data including at least one of a number of praise obtained, a number of comments obtained, a number of times shared, a number of times reviewed.
In some implementations, the interface presentation unit 710 is further configured to, during the playing of the video file, play a guide animation on the display interface in response to monitoring an operation of dragging a video playing progress bar, where the guide animation is used to prompt the content and/or the action of the first event.
The embodiments of the present application are described in a progressive manner; the same or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the system and apparatus embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The system and apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
In addition, the application also provides a computer readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps of the method according to any of the preceding method embodiments.
The application also provides an electronic device comprising one or more processors and a memory associated with the one or more processors for storing program instructions that, when read for execution by the one or more processors, perform the steps of the method of any of the preceding method embodiments.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of the preceding method embodiments.
Fig. 8 illustrates an architecture of an electronic device, which may include, among other things, a processor 810, a video display adapter 811, a disk drive 812, an input/output interface 813, a network interface 814, and a memory 820. The processor 810, video display adapter 811, disk drive 812, input/output interface 813, network interface 814, and memory 820 may be communicatively coupled via a communication bus 830.
The processor 810 may be implemented by a general-purpose CPU, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided by the embodiments of the present application.
The memory 820 may be implemented in the form of read-only memory (ROM), random access memory (RAM), a static storage device, a dynamic storage device, or the like. The memory 820 may store an operating system 821 for controlling the operation of the electronic device 800 and a basic input output system (BIOS) 822 for controlling low-level operation of the electronic device 800. In addition, a web browser 823, a data storage management system 824, a play processing device 825 for media files, and the like may also be stored. The play processing device 825 for media files may be an application program that implements the operations of the foregoing steps in the embodiments of the present application. In general, when implemented in software or firmware, the relevant program code is stored in the memory 820 and executed by the processor 810.
The input/output interface 813 is used to connect with an input/output module to realize information input and output. The input/output modules may be configured as components in the device (not shown in fig. 8) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Network interface 814 is used to connect communication modules (not shown in fig. 8) to enable communication interactions of the device with other devices. The communication module may implement communication through a wired manner (e.g., USB, network cable, etc.), or may implement communication through a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.).
Bus 830 includes a path for transferring information between various components of the device (e.g., processor 810, video display adapter 811, disk drive 812, input/output interface 813, network interface 814, and memory 820).
It is noted that although the above-described devices illustrate only the processor 810, video display adapter 811, disk drive 812, input/output interface 813, network interface 814, memory 820, bus 830, etc., the device may include other components necessary to achieve proper operation in an implementation. Furthermore, it will be appreciated by those skilled in the art that the apparatus may include only the components necessary to implement the present application, and not all of the components shown in the drawings.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be embodied essentially or in part in the form of a computer program product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (e.g., a personal computer, a server, or a network device, etc.) to perform the method described in the various embodiments or portions of the embodiments of the present application.
The foregoing describes the solutions provided by the present application in detail. Specific examples are used herein to illustrate the principles and implementations of the application, and the above description of the embodiments is intended only to help understand the method of the application and its core idea. A person of ordinary skill in the art may make changes to the specific implementations and the application scope based on the idea of the application. In view of the foregoing, this description should not be construed as limiting the application.

Claims (27)

1. A method of processing a video file, the method comprising:
Monitoring a playing instruction of a video file;
responding to the monitoring of the playing instruction, and playing the video file on a display interface;
monitoring a first event aiming at the video file in the playing process of the video file, wherein the first event is used for triggering the display interface to display at least one video frame, and the at least one video frame is used for representing a highlight in the video file;
and in response to the first event being monitored, displaying a video frame display page on the display interface, wherein the video frame display page is used for displaying the at least one video frame.
2. The method of claim 1, wherein the first event comprises a two-finger pinch gesture-triggered event, a first interface element-triggered event, or a voice command-triggered event.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
And in the playing process of the video file, in response to monitoring a pause operation for the video file, displaying the at least one video frame on the display interface.
4. A method according to claim 3, wherein a position of the at least one video frame on a video playing progress bar is near the position on the video playing progress bar corresponding to the moment of pausing.
5. The method according to claim 1 or 2, characterized in that the method further comprises:
Displaying a comment page for the video file on the display interface in response to monitoring a second event while the at least one video frame is displayed on the display interface, the comment page having a target video frame of the at least one video frame inserted therein;
and in response to monitoring the posting operation on the comment page, posting comment content taking the target video frame as a comment drawing.
6. The method of claim 5, wherein the second event comprises an event triggered by a click operation for the target video frame, an event triggered by a sliding operation between the at least one video frame, or an event triggered by a second interface element.
7. The method according to claim 1 or 2, characterized in that the method further comprises:
When the at least one video frame is displayed on the display interface, in response to monitoring a third event, displaying a co-shooting page for a co-shooting operation on the display interface, wherein the co-shooting page takes a target video frame of the at least one video frame as a co-shooting background;
In response to monitoring a recording operation for the co-shot video, recording the co-shot video with the target video frame as the co-shooting background;
and in response to monitoring a publishing operation for the co-shot video, publishing the co-shot video.
8. The method of claim 7, wherein the third event comprises an event triggered by a click operation for the target video frame, an event triggered by a sliding operation between the at least one video frame, or an event triggered by a third interface element.
9. The method according to claim 1 or 2, characterized in that the method further comprises:
And when the at least one video frame is displayed on the display interface, responding to the fourth event, and playing the highlight corresponding to the target video frame in the at least one video frame on the display interface.
10. The method of claim 9, wherein the fourth event comprises an event triggered by a click operation for the target video frame and/or an event triggered by a sliding operation between the at least one video frame.
11. The method according to claim 1 or 2, characterized in that the method further comprises:
and when the at least one video frame is displayed on the display interface, responding to the detection of a fifth event, displaying an editing page on the display interface, wherein the editing page is used for editing a target video frame in the at least one video frame.
12. The method of claim 11, wherein the method further comprises:
And displaying an interface element for triggering the fifth event when the at least one video frame is displayed on the display interface.
13. The method according to claim 1 or 2, characterized in that the method further comprises:
and when the at least one video frame is displayed on the display interface, responding to the detection of a sixth event, and triggering at least one of downloading, praising, commenting, sharing and collecting for a target video frame in the at least one video frame.
14. The method of claim 13, wherein the method further comprises:
And displaying an interface element for triggering the sixth event when the at least one video frame is displayed on the display interface.
15. The method according to claim 1 or 2, wherein the at least one video frame is arranged on the display interface in the order in which the at least one video frame appears in the video file when the at least one video frame is displayed on the display interface.
16. The method according to claim 1 or 2, characterized in that the method further comprises:
and switching between the at least one video frame in response to monitoring a seventh event while the at least one video frame is displayed on the display interface.
17. The method of claim 16, wherein the seventh event comprises an event triggered by a single-finger swipe gesture in a preset direction or an event triggered by a fourth interface element.
18. The method according to claim 1 or 2, characterized in that the method further comprises:
and displaying a fifth interface element when the at least one video frame is displayed on the display interface, wherein the fifth interface element is used for indicating the position of the at least one video frame in a video playing progress bar.
19. The method according to claim 1 or 2, characterized in that the method further comprises:
And displaying a sixth interface element when the at least one video frame is displayed on the display interface, wherein the sixth interface element is used for indicating interaction information corresponding to the at least one video frame.
20. The method of claim 19, wherein the interaction information comprises at least one of a high praise time, a number of praises corresponding to the high praise time, a high comment time, a number of comments corresponding to the high comment time, a high share time, a number of shares corresponding to the high share time, a high review time, and a review count corresponding to the high review time.
21. The method according to claim 1 or 2, characterized in that the method further comprises:
and when the at least one video frame is displayed on the display interface, in response to monitoring an eighth event, highlighting a target video frame in the at least one video frame on the display interface.
22. The method of claim 21, wherein the eighth event comprises the target video frame being in a middle position of the display interface or sliding to the target video frame based on a sliding operation between the at least one video frame.
23. The method of claim 1 or 2, wherein the highlight in the video file is associated with interactive data of the user, the interactive data comprising at least one of a number of praise obtained, a number of comments obtained, a number of times shared, a number of times reviewed.
24. The method according to claim 1 or 2, characterized in that the method further comprises:
and in the playing process of the video file, responding to monitoring a pause operation and/or a drag operation of a video playing progress bar for the video file, playing a guide animation on the display interface, wherein the guide animation is used for prompting the content and/or the action of the first event.
25. An apparatus for processing video files, the apparatus comprising:
the event monitoring unit is configured to monitor a playing instruction of the video file;
the interface display unit is configured to respond to the monitoring of the playing instruction and play the video file on the display interface;
the event monitoring unit is further configured to monitor a first event for the video file during playing of the video file, wherein the first event is used for triggering the display interface to display at least one video frame, and the at least one video frame is used for representing a highlight in the video file;
The interface presentation unit is further configured to display a video frame display page on the display interface in response to the monitoring of the first event, the video frame display page being for displaying the at least one video frame.
26. An electronic device, comprising:
One or more processors, and
A memory associated with the one or more processors for storing program instructions that, when read for execution by the one or more processors, perform the steps of the method of any of claims 1 to 24.
27. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 24.