CN116962793A - Video dotting method, device, terminal, storage medium and program product

Info

Publication number
CN116962793A
Authority
CN
China
Prior art keywords
video, node, dotting, interface, target node
Prior art date
Legal status
Pending
Application number
CN202210400152.5A
Other languages
Chinese (zh)
Inventor
张晨俐 (Zhang Chenli)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210400152.5A
Publication of CN116962793A

Abstract

Embodiments of this application disclose a video dotting method, device, terminal, storage medium, and program product, belonging to the field of human-computer interaction. The method includes: in response to a dotting operation in a video playing interface, adding a target node mark corresponding to a target node to the video progress bar based on the video playing progress; in response to a note adding operation on the target node, generating note content corresponding to the target node; and in response to a first trigger operation on the target node mark, displaying a preview window at the target node mark. The application provides a video dotting mode that lets a user autonomously mark key points while watching a video and supports adding notes to video nodes, which can improve the user's autonomy in video dotting and reduce operation cost to a certain extent.

Description

Video dotting method, device, terminal, storage medium and program product
Technical Field
Embodiments of this application relate to the technical field of human-computer interaction, and in particular to a video dotting method, device, terminal, storage medium, and program product.
Background
Video dotting refers to marking, in the progress bar of a video player, the playing times corresponding to important video content; by clicking a node in the progress bar, the user can quickly jump to that content.
In the related art, a video platform marks highlight segments for popular videos, and the progress bar with nodes is generated through background configuration by the platform's operators. By clicking a node, the user can make the video jump to the progress corresponding to that node, but cannot perform any other operation based on the node.
However, limited by operation cost, video nodes cannot be configured for every video, and operators can only dot segments they expect to be popular based on experience, so the viewing and marking needs of diverse users cannot be met.
Disclosure of Invention
Embodiments of this application provide a video dotting method, device, terminal, storage medium, and program product, which can improve the user's autonomy in video dotting, are convenient to operate, improve user experience, and can reduce operation cost to a certain extent. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a video dotting method, where the method includes:
in response to a dotting operation in a video playing interface, adding a target node mark corresponding to a target node to the video progress bar based on the video playing progress;
in response to a note adding operation on the target node, generating note content corresponding to the target node;
and in response to a first trigger operation on the target node mark, displaying a preview window at the target node mark, the preview window including a video picture thumbnail corresponding to the target node and the note content.
In another aspect, an embodiment of the present application provides a video dotting device, including:
a node adding module, configured to respond to a dotting operation in a video playing interface by adding a target node mark corresponding to a target node to the video progress bar based on the video playing progress;
a note adding module, configured to generate note content corresponding to the target node in response to a note adding operation on the target node;
and a preview module, configured to display, in response to a first trigger operation on the target node mark, a preview window at the target node mark, the preview window including a video picture thumbnail corresponding to the target node and the note content.
In another aspect, an embodiment of the present application provides a terminal, including a processor and a memory; the memory stores a computer program, and the processor executes the computer program to implement the steps of the video dotting method according to the aspect.
In another aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when loaded and executed by a processor, implements the steps of the video dotting method as described in the above aspects.
According to one aspect of the present application there is provided a computer program product comprising a computer program which when executed by a processor performs the steps of the video dotting method provided in various alternative implementations of the above aspects.
The technical scheme provided by the embodiment of the application at least comprises the following beneficial effects:
the embodiments of this application provide a video dotting mode that lets a user autonomously mark key points while watching a video, so that key content can be found quickly on later viewings. The user is also supported in adding notes to the video to record impressions of the video content. Compared with a unified platform-side dotting mode, personalized video dotting and note adding improve the user's autonomy, so that the dotting matches the user's own needs; the method is convenient to operate and improves user experience. And since users dot and take notes by themselves, operation cost can be reduced to a certain extent.
Drawings
FIG. 1 is a schematic diagram of an implementation environment shown in an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a video dotting method shown in an exemplary embodiment of the application;
FIG. 3 is a schematic diagram of a video playback interface according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a preview window shown in an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a video dotting method shown in another exemplary embodiment of the application;
FIG. 6 is a schematic diagram of a video playback interface according to another exemplary embodiment of the present application;
FIG. 7 is a schematic diagram of a node movement trace shown in accordance with an exemplary embodiment of the present application;
FIG. 8 is a schematic diagram of a video playback interface according to another exemplary embodiment of the present application;
FIG. 9 is a schematic diagram of a preview window shown in another exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of a node editing interface shown in accordance with an exemplary embodiment of the present application;
FIG. 11 is a schematic diagram illustrating a node adjustment operation according to an exemplary embodiment of the present application;
FIG. 12 is a schematic diagram of a node sharing interface, according to an exemplary embodiment of the present application;
FIG. 13 is a schematic diagram of comment content shown in an exemplary embodiment of the application;
FIG. 14 is a diagram of a node editing interface, shown in accordance with an exemplary embodiment of the present application;
FIG. 15 is a schematic diagram of a note viewing operation shown in accordance with an exemplary embodiment of the present application;
FIG. 16 is a flowchart of a video dotting method shown in another exemplary embodiment of the application;
FIG. 17 is a schematic diagram of data types shown in an exemplary embodiment of the present application;
FIG. 18 is a data acquisition flow chart illustrating an exemplary embodiment of the present application;
FIG. 19 is a block diagram of a video playback interface according to an exemplary embodiment of the present application;
FIG. 20 is a node rendering flow diagram illustrating an exemplary embodiment of the present application;
FIG. 21 is a flowchart of a video dotting method shown in another exemplary embodiment of the application;
FIG. 22 is a block diagram of a video dotting device shown in one exemplary embodiment of the application;
fig. 23 is a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
References herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by one embodiment of this application. The implementation environment may include a terminal 110 and a background server 120, where a client with a video playing function is installed on the terminal 110. Only one terminal 110 is shown; in a practical application environment, more terminals may interact with the background server.
The terminal 110 is configured to display a video playing interface that includes a dotting control and a video progress bar. On receiving a trigger operation on the dotting control, the terminal 110 creates a target node based on the video playing progress and adds a target node mark at the corresponding position of the video progress bar. In addition, the user may add note content to the target node through the terminal 110. The terminal 110 transmits target node data, including the note content and the time corresponding to the node, to the background server 120. The background server stores the target node data in association with the video identifier and the user account identifier. When the user closes the video and plays it again through the terminal 110, the terminal 110 obtains the historical node data from the background server 120 and displays the node marks corresponding to the historical nodes in the video playing interface, so that the user can review previously added nodes and note content.
Referring to fig. 2, a flowchart of a video dotting method according to an exemplary embodiment of the application is shown. The embodiment is described by taking the method performed by a terminal with a video playing function as an example, and the method includes the following steps:
in step 201, in response to the dotting operation in the video playing interface, adding a target node mark corresponding to the target node in the video progress bar based on the video playing progress.
The terminal plays video content through a video playing interface. When a dotting operation by the user is received, the terminal creates a new video node (i.e., the target node) based on the current video playing progress, and displays a target node mark corresponding to the target node on the interface, so that the user knows which playing progress the node corresponds to. In one possible implementation, the target node mark is displayed at the video playing progress corresponding to the target node; for example, the terminal displays the target node mark overlaid on the video progress bar at the progress-bar position corresponding to the video playing progress, or displays it on one side of the video progress bar pointing to that position. Later, when the user closes the video playing interface and triggers playback of the video again, the terminal automatically displays the target node mark.
For example, when the terminal receives a dotting operation while the video is playing at 00:58, it creates a target node with playing time 00:58 and adds a target node mark at the position corresponding to 00:58 in the video progress bar.
Optionally, the dotting operation may be a trigger operation on a specific control in the video playing interface, or a shortcut operation meeting a preset condition (such as a long-press operation or a double-click operation).
Schematically, as shown in fig. 3, a video progress bar 301 is displayed in the video playing interface, and as the video plays, the terminal continuously updates the playing progress indicated by the progress bar. When a dotting operation is received, the terminal adds a target node mark 302 at the position in the video progress bar 301 corresponding to the current video playing progress.
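As an illustrative, non-limiting sketch of step 201, the following TypeScript maps the current playing time to a horizontal position on the progress bar. The element selectors and class names here are assumptions for illustration, not part of the disclosed implementation.

```typescript
// Minimal sketch: turn the current playing time into a mark on the bar.
const video = document.querySelector<HTMLVideoElement>('#player')!;
const progressBar = document.querySelector<HTMLDivElement>('#progress-bar')!;

function onDotting(): void {
  const time = video.currentTime;                    // e.g. 58 s for 00:58
  const ratio = time / video.duration;               // fraction of the video played
  const mark = document.createElement('div');
  mark.className = 'node-mark';
  mark.dataset.timepoint = String(time);             // remembered for later jumps
  mark.style.left = `${ratio * progressBar.clientWidth}px`;
  progressBar.appendChild(mark);                     // overlay the mark on the bar
}
```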
Step 202, generating note content corresponding to the target node in response to the note adding operation to the target node.
In one possible implementation, video nodes support note adding; by adding a note to a node, a user can record thoughts on the video content at the target node while watching, for quick later review or sharing.
Illustratively, the note content may include text, pictures, animations, video clips, etc. information.
If other nodes exist in the currently playing video (such as nodes added the last time the video was watched), the user can also perform a note adding operation or a note modifying operation on those nodes.
In step 203, in response to the first trigger operation on the target node mark, a preview window is displayed at the target node mark, where the preview window includes a video frame thumbnail and note content corresponding to the target node.
The nodes and notes of a video are the user's record of its key or interesting content. Therefore, to make it convenient for the user to view the video content and note content at a node, in one possible implementation the terminal displays a preview window at the target node mark when it receives the first trigger operation on the target node mark. For example, on receiving a first trigger operation on the target node mark at video playing time 00:58, the terminal displays a thumbnail of the video picture at 00:58 together with the note content in a preview window.
Optionally, the node mark of a video node to which note content has been added differs from that of a video node without note content. For example, a pencil icon is additionally displayed in the node mark of a video node with note content.
Optionally, the preview window is displayed superimposed over the video playing interface. To reduce the impact on the video content, the preview window is smaller than the video playing window and is displayed near the target node mark.
When the terminal screen is a touch screen, the first trigger operation may be a press, click, or drag operation on the target node mark. When the terminal screen is a non-touch screen, the first trigger operation may be hovering an input-device pointer (such as a mouse pointer) over the target node mark, or a click, press, or drag operation on the target node mark through the input device. The terminal continues to display the preview window for a preset period (for example, 1 s) after the first trigger operation ends. Optionally, the first trigger operation differs in type from the play-progress adjustment operation; for example, the first trigger operation is a mouse hover and the play-progress adjustment operation is a mouse click. Alternatively, the two are of the same type; for example, on receiving a click on the target node mark, the terminal displays an operation prompt popup containing a note viewing control for triggering the preview window and a progress adjustment control for triggering a progress adjustment, so that the first trigger operation is a click on the target node mark followed by a click on the note viewing control, and the play-progress adjustment operation is a click on the target node mark followed by a click on the progress adjustment control. Embodiments of this application are not limited in this regard.
Illustratively, as shown in FIG. 4, when a first trigger operation is received for the target node mark 401, the terminal displays a preview window 402. The preview window 402 displays a video frame thumbnail corresponding to the target node and some or all of the note content.
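As an illustrative sketch of the hover behaviour described above, the following keeps the preview window visible for 1 s after the pointer leaves the node mark; the element names and the display mechanism are assumptions.

```typescript
// Sketch: show the preview on hover, hide it 1 s after the pointer leaves.
const HIDE_DELAY_MS = 1000;
let hideTimer: number | undefined;

function attachPreview(mark: HTMLElement, previewWindow: HTMLElement): void {
  mark.addEventListener('mouseenter', () => {
    window.clearTimeout(hideTimer);
    previewWindow.style.display = 'block';           // show thumbnail + note
  });
  mark.addEventListener('mouseleave', () => {
    hideTimer = window.setTimeout(() => {
      previewWindow.style.display = 'none';          // hide 1 s after leaving
    }, HIDE_DELAY_MS);
  });
}
```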
In summary, the embodiments of this application provide a video dotting mode that lets a user autonomously mark key points while watching a video, so that key content can be found quickly on later viewings. The user is also supported in adding notes to the video to record impressions of the video content. Compared with a unified platform-side dotting mode, personalized video dotting and note adding improve the user's autonomy, so that the dotting matches the user's own needs; the method is convenient to operate and improves user experience. And since users dot and take notes by themselves, operation cost can be reduced to a certain extent.
The nodes in the embodiments of this application are video nodes added autonomously by the user. In one possible implementation, the user may perform note adding and modification, node position adjustment, and node sharing operations on self-added nodes.
Referring to fig. 5, a flowchart of a video dotting method according to another exemplary embodiment of the application is shown. The embodiment is described by taking the method performed by a terminal with a video playing function as an example, and the method includes the following steps:
In step 501, in response to the dotting operation in the video playing interface, adding a target node mark corresponding to the target node in the video progress bar based on the video playing progress.
In one possible implementation, a dotting control is included in the video playback interface, and the user adds a node by triggering the dotting control.
The terminal displays dotting prompt information on the periphery of the dotting control. Optionally, the terminal displays the dotting prompt when it detects that the video playing interface is displayed for the first time in the user's lifecycle, or each time the video playing interface is displayed. The terminal starts a timer after displaying the dotting prompt and automatically stops displaying it when the timer expires, or stops displaying it after receiving a trigger operation on the dotting control. Embodiments of this application are not limited in this regard.
As shown in fig. 6, a dotting control 601 is displayed in the video playing interface. When the terminal displays the video playing interface, it first checks the user's lifecycle to determine whether the video playing interface is being entered for the first time. Specifically, the terminal makes this determination based on a localStorage value indicating the floating-layer state. If it determines that the currently logged-in account is entering the video playing interface for the first time, a prompt 602 is displayed through the video playing interface, such as "Mark a point to quickly record key moments".
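A minimal sketch of this first-visit check follows; the localStorage key name and the 5 s timer value are assumptions.

```typescript
// Sketch: a localStorage flag records whether the dotting prompt was shown.
const PROMPT_KEY = 'dotting-prompt-shown';           // key name is an assumption

function maybeShowDottingPrompt(showPrompt: () => void, hidePrompt: () => void): void {
  if (localStorage.getItem(PROMPT_KEY) === null) {   // first entry in this lifecycle
    showPrompt();                                    // e.g. "Mark a point..."
    localStorage.setItem(PROMPT_KEY, '1');           // record the floating-layer state
    window.setTimeout(hidePrompt, 5000);             // timer-controlled dismissal
  }
}
```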
Step 501 specifically includes the following steps 501a to 501b (not shown in the figure):
In step 501a, a mark movement track is determined in response to a trigger operation on the dotting control.
The starting point of the mark movement track is the display position of the dotting control or the progress-bar position indicated by the dotting control, and its end point is the progress-bar position corresponding to the video playing progress.
In one possible implementation, video dotting is accompanied by a dotting animation whose content is the node mark jumping from the dotting control, or from the progress-bar position it indicates, to the current playing position on the video progress bar. The progress-bar position indicated by the dotting control may be the position whose abscissa in the video playing interface equals that of the dotting control, or another preset progress-bar position the control points to.
Optionally, the mark movement track is a straight line, a curve, or a track of another preset shape connecting the dotting control and the progress-bar position. The mark movement track may be invisible, or the terminal may display it.
Schematically, on receiving a trigger operation on the dotting control, the terminal generates a Bezier curve based on the display position of the dotting control and the progress-bar position corresponding to the video playing progress, and takes the computed Bezier curve as the mark movement track. Fig. 7 shows a third-order Bezier curve; by connecting multiple Bezier segments, the terminal can form a more complex mark movement track and achieve a more complex animation effect. The cubic-bezier() function in Cascading Style Sheets (CSS) expresses the Bezier curve without manual computation by the developer. After the node mark movement is determined, the terminal can describe the node mark's state at different display positions with CSS @keyframes, and set a timer to control the floating window that displays the node mark.
In step 501b, the target node mark is controlled to move to the end point along the mark movement track.
Illustratively, as shown in fig. 8, on receiving a trigger operation on the dotting control 801, the terminal determines a mark movement track 802 based on the position of the dotting control 801 and the progress-bar position corresponding to the current video playing progress, and then controls the target node mark 803 to move along the track. After the target node mark 803 reaches the end point, the terminal displays it at that position.
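The disclosure describes the animation with CSS @keyframes and cubic-bezier(); the following is an equivalent sketch using the Web Animations API, with the duration and Bezier control points chosen purely for illustration.

```typescript
// Sketch: fly the node mark from the dotting control to the progress bar.
function animateMark(mark: HTMLElement, from: DOMRect, to: DOMRect): void {
  mark.animate(
    [
      { transform: `translate(${from.x - to.x}px, ${from.y - to.y}px)` },
      { transform: 'translate(0, 0)' },              // end at the progress-bar position
    ],
    { duration: 400, easing: 'cubic-bezier(0.25, 0.1, 0.25, 1)' },
  );
}
```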
Step 502, responding to a second triggering operation for the target node mark, and displaying a preview window, wherein the preview window comprises a video picture thumbnail and a node editing control.
In one possible implementation, the user adds a note to the node through the preview window. When the terminal screen is a touch screen, the second trigger operation may be a press, click, or drag operation on the target node mark; when the terminal screen is a non-touch screen, it may be hovering an input-device pointer (such as a mouse pointer) over the target node mark, or a click, press, or drag operation on the mark through the input device. The terminal continues to display the preview window for a preset period (for example, 1 s) after the second trigger operation ends. Optionally, the first and second trigger operations are of different types, or of the same type. Embodiments of this application are not limited in this regard.
As shown in fig. 9, on receiving the second trigger operation on the target node mark, the terminal displays a preview window 901. The preview window 901 shows a video picture thumbnail and preset text such as "Take notes without pausing"; in addition, a node editing control 902, such as "Add", is displayed in the preview window 901. By triggering the node editing control 902, the user opens a node editing interface and performs editing operations on the target node from there. The editing operations include the note adding operation.
Step 503, responding to the triggering operation of the node editing control, and displaying the node editing interface corresponding to the target node.
Optionally, the node editing interface is displayed on the upper layer of the video playing interface in a superimposed manner, and the covered video playing interface pauses playing video. Step 503 specifically includes the following steps 503a to 503b (not shown in the figure):
in step 503a, in response to the triggering operation of the node editing control, playing of the video content in the video playing interface is paused.
Before the terminal displays the node editing interface, the player in the video playing interface is paused. After the user completes the editing or sharing operation and closes the node editing interface, the terminal resumes playing the video through the video playing interface, which prevents the user from missing video content while editing or sharing. To ensure that after the user closes the node editing interface the terminal can resume from the paused progress, the terminal, in response to the trigger operation on the node editing control, pauses playback of the video content in the video playing interface and records the video playing progress, and later resumes playback based on the recorded progress.
Step 503b, superimposing and displaying the node editing interface above the video playing interface.
Optionally, the display size of the node editing interface is smaller than the display size of the video playing interface, or the display size of the node editing interface is consistent with the display size of the video playing interface.
Schematically, fig. 10 shows a node editing interface 1001. The left side of the node editing interface 1001 loops the video clip around the target node, the right side is used for generating note content, and the lower part contains a sharing control for triggering the sharing operation.
Step 504, determining a preview period based on the playing time corresponding to the target node and a preview duration.
In one possible implementation, the terminal also plays the video clips before and after the target node through the node editing interface so that the user can adjust the node.
The terminal first determines the preview period based on the playing time corresponding to the target node and the preview duration. Illustratively, the terminal takes the 30 s before and the 30 s after that playing time as the preview period. For example, if the playing time corresponding to the target node is 00:58, the preview period is 00:28 to 01:28.
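A minimal sketch of this computation follows; the boundary clamping is an added assumption (a node at 00:10 cannot have a preview period starting at -20 s).

```typescript
// Sketch of step 504: preview period = node time ± half-window, clamped.
function previewPeriod(nodeTime: number, videoDuration: number, halfWindow = 30) {
  const start = Math.max(0, nodeTime - halfWindow);            // e.g. 00:28
  const end = Math.min(videoDuration, nodeTime + halfWindow);  // e.g. 01:28
  return { start, end };
}
```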
Step 505, playing the video content corresponding to the preview period through the video preview area of the node editing interface.
The video preview area is displayed with a preview progress bar corresponding to the preview period, and the preview progress bar comprises a target node mark.
As shown in fig. 10, the node editing interface 1001 includes a video preview area on the left side, a video preview window 1005 and a preview progress bar 1006 are displayed in the video preview area, and a target node mark 1007 is displayed in the preview progress bar 1006.
In another possible implementation, the size of the video preview window is adjustable; for example, the terminal may collapse the note editing area and enlarge the video preview window to play the video full screen.
Step 505 comprises the steps of:
in the video preview area, video content of the preview period is played through a player instance of the multiplexed video playback interface.
Because of the limitation of the player software tool development kit (Software Development Kit, SDK), the terminal needs to upload the complete video identifier (vid) to play the corresponding video, while the preview segment is generated based on the target node of the user, and the corresponding vid does not exist, and the terminal cannot acquire the video content corresponding to the preview period from the background server alone. Therefore, the node editing interface adopts multiplexing the current player instance and customizing the display mode of the progress bar.
However, since the terminal plays the video content corresponding to the preview period through the player instance of the multiplexed video playing interface, if the multiplexing of the player instance is directly performed, the video playing progress of the video playing interface is disordered after the user closes the node editing interface. For example, after the terminal displays the node editing interface, the terminal automatically and circularly plays the video content in the preview period, if the terminal continues to play the video through the player SDK after the user closes the node editing interface, the corresponding playing time at this time is not the video playing time when the user triggers the node editing control, so that the user may need to manually return to the original video playing progress. The solution to this problem is: the terminal records the video playing progress when entering the node editing interface, and notifies the player sdk to jump to the corresponding progress when exiting the node editing interface.
In one possible implementation, the node editing interface automatically loops the video content of the preview period, and the user can pause playback by triggering the pause control in the video preview area. In addition, after the user closes the node editing interface, the terminal returns to the player of the video playing interface and automatically resumes playback. Step 505 may be followed by the following steps one and two (not shown):
Step one, in response to a trigger operation on the play pause control in the video preview area, the terminal pauses the video content in the video preview area.
As shown in fig. 10, a play pause control 1008 is also displayed in the video preview area. Upon receiving a trigger operation to play pause control 1008, the terminal pauses playing video content within the video preview area.
Step two, in response to a closing operation on the node editing interface, the terminal closes the node editing interface and resumes video playback in the video playing interface.
Specifically, in response to the closing operation on the node editing interface, the terminal closes the node editing interface and starts playing the video in the video playing interface from the recorded video playing progress.
The terminal records the video playing progress on entering the node editing interface and notifies the player SDK to jump to that progress on exit, so the playing progress of the two interfaces does not interfere with each other and playback resumes automatically. For example, if the user triggers the node editing control when the video in the video playing interface has played to 00:58, and closes the node editing interface when the preview segment has played to 01:10, the terminal resumes playing the video from 00:58 in the video playing interface.
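The record-and-restore logic can be sketched as follows; the Player interface is a stand-in assumption, as the real player SDK's method names will differ.

```typescript
// Sketch: reuse one player instance for the preview, then restore progress.
interface Player {
  currentTime(): number;
  seek(t: number): void;
  play(): void;
}

let savedProgress = 0;                               // progress on entering the editor

function enterNodeEditor(player: Player, period: { start: number; end: number }): void {
  savedProgress = player.currentTime();              // e.g. 00:58
  player.seek(period.start);                         // start looping the preview segment
  player.play();
}

function exitNodeEditor(player: Player): void {
  player.seek(savedProgress);                        // jump back to 00:58
  player.play();                                     // resume the original playback
}
```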
In step 506, in response to the position adjustment operation for the target node mark in the preview progress bar, the target node is updated based on the node position indicated by the position adjustment operation.
In one possible implementation, the user adjusts the display position of the target node mark by dragging it in the preview progress bar, thereby updating the playing time corresponding to the target node.
Optionally, on receiving a position adjustment operation on the target node mark, the terminal automatically pauses the preview video. The position adjustment operation may be a drag of the target node mark, a click on the preview progress bar, a manual input of the node playing time, or the like. As shown in fig. 11, when the mouse hovers over the target node mark, a prompt bubble such as "drag to adjust the mark position" appears; the bubble disappears after 3 s if the mouse stays, and disappears immediately when the mouse leaves. The user presses the target node mark with the mouse and drags it; meanwhile, the video window pauses and switches to the video picture corresponding to the mark's position. After the drag ends, the video window does not resume playing.
In this way, when the position of the target node is unsuitable, the user does not need to delete it and add it again, but only needs to adjust it through the node editing interface.
In step 507, the note content is generated in response to a text input operation in the text editing area of the node editing interface.
The node editing interface displays a text editing area, where the user can add note content or modify existing note content.
In one possible implementation, the text editing area is two-way bound through v-model to obtain the note content entered by the user.
Schematically, as shown in fig. 10, a text editing area 1002 is displayed on the right side of the node editing interface 1001. When the user has not yet added a node note, the text editing area 1002 shows a prompt for adding one, such as "Write down how you feel at this moment". Below the text editing area 1002 are a tag adding control 1003 and a history tag adding control 1004. The user can manually enter a topic tag by triggering the tag adding control 1003, or quickly insert a previously used topic tag by triggering the history tag adding control 1004, for example "#A good memory is no match for a worn pen". If no history tag exists, the terminal displays the topic tags of recommended topics.
Topic tags can be used by other users to search for notes. For topic presentation, the terminal can judge whether the user is using the topic function for the first time by reading localStorage; if so, the terminal stores the current topic into localStorage. Topics manually entered by the user are matched and replaced with a regular expression such as (#\s.+$).
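A sketch of the note binding follows; the component shape and the topic-matching regular expression are assumptions reconstructed from the description (the original expression is garbled in the source text).

```typescript
// Sketch: v-model gives two-way binding; a regex extracts typed topic tags.
import { defineComponent, ref } from 'vue';

export default defineComponent({
  setup() {
    const noteContent = ref('');                     // two-way bound via v-model
    // Assumed pattern: a "#" followed by non-space characters, e.g. "#topic".
    const topics = () => noteContent.value.match(/#\S+/g) ?? [];
    return { noteContent, topics };
  },
  template: `<textarea v-model="noteContent" placeholder="Write down how you feel"></textarea>`,
});
```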
It should be noted that, in the illustrated embodiment, the terminal performs steps 504 to 506 first and then step 507. In other possible embodiments, the terminal may perform steps 504 to 506 after step 507, or may omit steps 504 to 506 (i.e., perform step 507 and subsequent steps directly after step 503). The application is not limited in this regard.
Step 508, in response to a trigger operation on the sharing control in the node editing interface, displaying the node sharing interface.
In one possible implementation, the node editing operations further include a sharing operation. The node sharing interface includes a node sharing graph and at least one mode selection control, and the node sharing graph includes at least one of the following: a video picture thumbnail, the note content, the time corresponding to the target node, the video title, a graphic code (such as a two-dimensional code), and the account identifier of the currently logged-in account; different mode selection controls correspond to different sharing modes. Specifically, the terminal generates the node sharing graph through an HTML-to-canvas framework (such as html2canvas). The size of the sharing graph is related to the note content: the more note content, the larger (for example, the taller) the sharing graph.
Schematically, fig. 12 shows a node sharing interface, containing a node sharing graph and, below it, five mode selection controls. The node sharing graph includes a video picture thumbnail, the note content (in this example, a passage the user wrote about the teacher's art of conversation), an account identifier (user avatar and nickname), and the time corresponding to the target node, 0 min 58 s. The user shares the information in the node sharing graph by triggering a mode selection control.
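A sketch of the rasterisation step follows, assuming the open-source html2canvas library; because the DOM card's height already reflects the note length, the image size scales with the note content as described above.

```typescript
// Sketch: rasterise the DOM card (thumbnail + note + avatar) into an image.
import html2canvas from 'html2canvas';

async function renderShareImage(card: HTMLElement): Promise<HTMLCanvasElement> {
  // The card grows with the note content, so the canvas height does too.
  return html2canvas(card);
}
```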
In step 509, in response to a trigger operation on a target mode selection control, node information is shared in the target sharing mode corresponding to that control with the target sharing object.
Different mode selection controls correspond to different sharing modes, and the specific shared content may also differ. In one possible implementation, step 509 includes the following steps:
In step 509a, in response to a trigger operation on the comment sharing control, a video comment is published in the comment area of the current video, the video comment including the video picture thumbnail and the note content.
In one possible implementation, the mode selection controls include a comment sharing control. On receiving a trigger operation on the comment sharing control, the terminal shares the video picture thumbnail and the note content to the comment area as a video comment. Fig. 13 illustrates the effect of sharing a node note to the comment area. Other users can jump to the corresponding video playing progress by triggering the link in the node-sharing comment. Specifically, the terminal shares the note content to the comment area quickly by calling an interface provided by the comment area iframe.
Step 509b, in response to the triggering operation of the local save control, saving the node sharing graph to the local folder.
On receiving a trigger operation on the local save control, the terminal calls up the local folder of the video playing client to save the picture. Specifically, the terminal obtains a temporary blob link and saves the node sharing graph to the local folder using a FileReader.
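A sketch of step 509b follows; triggering the save dialog through a temporary blob link on a hidden anchor element is an assumption about how the client hands the image to the local folder.

```typescript
// Sketch: save the rendered canvas through a temporary blob link.
function saveShareImage(canvas: HTMLCanvasElement, fileName = 'node-share.png'): void {
  canvas.toBlob((blob) => {
    if (!blob) return;
    const url = URL.createObjectURL(blob);           // temporary blob link
    const a = document.createElement('a');
    a.href = url;
    a.download = fileName;
    a.click();                                       // opens the local save dialog
    URL.revokeObjectURL(url);                        // release the temporary link
  });
}
```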
In step 509c, in response to a trigger operation on the communication application control, the node sharing graph is sent, through the account logged into the communication application, to the target terminal of the target sharing object; the target terminal can obtain a link to the current video by scanning the graphic code and play the video from the target node in the video playing interface.
The terminal also provides a control for sharing nodes through other applications, namely the communication application control. The node sharing graph shared in this way contains graphic-code information. Specifically, the terminal generates the two-dimensional code through a QR code (Quick Response Code) library.
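A sketch of the graphic-code generation follows, assuming the "qrcode" npm package; the encoded deep link carrying the vid and the node's playing time is a hypothetical URL format.

```typescript
// Sketch: encode a hypothetical deep link so a scan jumps to the node.
import QRCode from 'qrcode';

async function nodeQrDataUrl(vid: string, timepoint: number): Promise<string> {
  const link = `https://example.com/play?vid=${vid}&t=${timepoint}`; // hypothetical
  return QRCode.toDataURL(link);                     // PNG data URL for the share card
}
```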
In another possible implementation, when the touch position is on the communication application control (such as a hovering mouse or a pressing finger), the terminal displays a picture-acquisition graphic code at the control; the user can scan it with another device to obtain the node sharing graph and share it, solving the problem that logging into the communication application on the current device is inconvenient.
In addition, the node sharing interface includes a link copy control. On receiving a trigger operation on the link copy control, the terminal obtains the current video playing link, adds sharing parameters, records the link source, and generates a node sharing link.
In summary, the node editing interface consists of three main areas, as shown in fig. 14: a video preview area, a text editing area, and a control display area (containing the sharing control and the save control). A custom progress bar is overlaid on the video preview area, and the text editing area mainly consists of a multi-line text input control (textarea) plus a topic selection and recommendation area.
In step 510, in response to the first trigger operation on the target node mark, a preview window is displayed at the target node mark, where the preview window includes a video frame thumbnail and note content corresponding to the target node.
Specifically, step 510 includes the steps of:
in response to a first trigger operation for the target node mark, a portion of the note content or the whole note content is displayed in the preview window.
In one possible implementation, the terminal displays the entire note content through the preview window. The display size of the preview window may vary with the amount of note content, growing as the note content increases and shrinking as it decreases. Alternatively, the note format in the preview window, such as character size and picture size, may vary with the amount of note content: when there is more content, the terminal correspondingly reduces the character size of the note text or shrinks the pictures; when there is less, it enlarges them.
In another possible implementation, the note content may be very long, and displaying it in full could make the preview window so large that it affects the video content. The terminal therefore displays the preview window based on parameters such as a preview word count (e.g., 10 words), a picture count, or picture size, i.e., it displays only part of the note content (e.g., its first 10 words). Optionally, the terminal may also adjust the display size of the preview window in this case. Embodiments of this application are not limited in this regard.
The video dotting method provided by the embodiment of the application further comprises the following steps:
in response to a trigger operation on the node editing control, displaying the complete note content through the text editing area of the node editing interface.
As shown in fig. 15, since the preview window cannot show the full details, only part of the note content is shown; the user can view the complete note content by triggering the node editing control in the preview window to enter the node editing interface.
In the embodiments of this application, the node editing interface supports note editing, node position adjustment, and node information sharing, so that the user can personalize added nodes, modify or supplement note content, fine-tune node positions, and share nodes and notes with others. This gives full play to the function of video nodes and helps users mark and share key video content.
In one possible implementation, regarding the implementation of the video dotting technology, compared with the related-art approach in which the player SDK pulls background dotting data, the embodiments of this application decouple the node data from the player SDK, and the node data is obtained separately by the page rendering tool.
Referring to fig. 16, a flowchart of a video dotting method according to another exemplary embodiment of the application is shown. The embodiment is described by taking the method performed by a terminal with a video playing function as an example, and the method includes the following steps:
in step 1601, a video playback interface is displayed based on the historical dotting data.
The historical dotting data indicates the historical nodes added to the current video by the currently logged-in account, and node marks corresponding to the historical nodes are displayed in the video progress bar.
In one possible implementation, after the user adds a video node, the terminal sends the corresponding node information to the background or stores it locally, recording it as the historical dotting data of a historical node. When the terminal receives a video playing operation, before displaying the video playing interface it first obtains the historical dotting data corresponding to the currently logged-in account and the video, and displays the historical nodes in the video playing interface for the user to review.
Schematically, fig. 17 shows the relevant data the terminal sends to the background server after creating a new node: vuid represents the user identifier, timepoint the dotting time, notes the note content, name the note topic, and vid the identifier of the video to which the node belongs. The data in the figure only illustrate what may be used during development and do not represent concrete data class definitions.
In one possible embodiment, step 1601 includes the following steps 1601a to 1601c (not shown in the figures):
in step 1601a, in response to a video playing operation of the current video, the player SDK is controlled to perform player initialization, and the page rendering tool is controlled to obtain historical dotting data.
Video nodes in the related art are created by background operators, and every user watching the video can see them; that is, the node data belongs to background operation data and is deeply coupled with the player SDK. When the video playing interface is displayed, the terminal first obtains interface-related data (such as video frame data, control data, and dotting data) through the player SDK; after player SDK initialization completes, the SDK sends the data to the page rendering tool through an internal interface for rendering. A related-art page can therefore only start obtaining node data after player SDK initialization completes and then render the page, which makes rendering slow.
The dotting data in the embodiments of this application belongs to user data, is stored on the background server in association with the user account, and can be decoupled from the player SDK. In one possible implementation, the client provides another interface through which the page rendering tool directly obtains the historical dotting data from the background server, so that the page rendering tool can render the historical nodes as soon as player SDK initialization completes. Step 1601a specifically includes the following steps three to five:
Step three, the page rendering tool is controlled to obtain the account identifier of the currently logged-in account and the video identifier of the current video.
Step four, based on the account identifier and the video identifier, the page rendering tool is controlled to send a data acquisition request to the background server through the second interface, and the background server feeds back the historical dotting data based on the request.
In one possible implementation, since user-created nodes belong to user data, the background server stores the historical dotting data in association with the user's account identifier and the video identifier, and the terminal obtains the historical dotting data by sending the account identifier and the video identifier. Specifically, the terminal controls the page rendering tool to send the data acquisition request to the background server through the second interface and to receive the historical dotting data.
Step five, the page rendering tool is controlled to receive, through the second interface, the historical dotting data sent by the background server.
Besides the interfaces in the player SDK, the client includes other interfaces for providing authentication and data query services. The terminal first configures the interface service at a unified access layer. On receiving a data acquisition request from the browser, the unified access layer extracts and verifies the cookie (data stored on the user's local terminal) carried in the request. If authentication fails, the unified access layer rejects the browser's request; if it passes, the unified access layer forwards the request to the background server. The background server extracts the corresponding account identifier, video identifier, and so on from the cookie, queries the user's node and note information, and splices and returns it to the client. Fig. 18 shows the process by which the client requests historical dotting data from the background server: step 1801, the client receives a video playing operation; step 1802, authentication is performed based on the cookie; step 1803, if authentication fails, empty data is returned to the client; step 1804, if authentication succeeds, the background server queries the node and note data, obtains the video information, and merges in the user-defined historical dotting data; step 1805, the background server responds to the client with the historical dotting data.
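A sketch of the page rendering tool pulling historical dotting data over the second interface follows; the endpoint path and response shape are assumptions, and the cookie is sent implicitly for the access layer to verify.

```typescript
// Sketch: fetch the user's nodes directly, independent of the player SDK.
interface DotNode { timepoint: number; notes: string; }

async function fetchHistoryNodes(vuid: string, vid: string): Promise<DotNode[]> {
  const resp = await fetch(`/api/dotting?vuid=${vuid}&vid=${vid}`, {
    credentials: 'include',                          // cookie checked by the access layer
  });
  if (!resp.ok) return [];                           // authentication failed: empty data
  return resp.json();                                // historical dotting data
}
```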
In step 1601b, in response to the player initialization being completed, the player SDK is controlled to send video playing data to the page rendering tool through the first interface.
The video playing data includes background data such as video frame data and control data, which the player SDK must send to the page rendering tool after its initialization completes.
In step 1601c, the page rendering tool is controlled to render and display the video playback interface based on the video playback data and the historical dotting data.
After player SDK initialization completes, the SDK sends the video playing data to the page rendering tool and notifies it to start rendering. The page rendering tool then renders the bottom-layer content based on the video playing data and the top-layer content based on the historical dotting data. In one possible implementation, step 1601c specifically includes the following steps six and seven:
and step six, controlling the page rendering tool to render a player container layer based on the video playing data, wherein the player container layer is used for displaying video content.
Step seven, in response to completion of the player container layer rendering, the page rendering tool is controlled to render a dotting mask layer superimposed on the player container layer based on the historical dotting data, the dotting mask layer being used to display the node marks added by the currently logged-in account.
As shown in fig. 19, the dotting mask layer sits above the player container layer. The preview mask layer used for displaying the preview window is on the same layer as the dotting mask layer.
In one possible implementation, when a user has dotted and added many notes to a video, the number of notes may be very large; if the player SDK, upon finishing loading the video, notified the page to render all document object model (DOM) elements at once, the large number of rendering tasks would cause momentary stuttering of the page. To address this, embodiments of this application provide two rendering modes. In the first, when the page obtains all the note data, it records the time of each note, renders only the dotting DOM elements on the progress bar, and renders the note preview DOM element for the corresponding time when the player SDK notifies the page of the playing progress. In the second, the page renders only the dotting DOM elements after obtaining the note data, and renders the corresponding note preview DOM element only when the user hovers over the corresponding node mark. As shown in fig. 20, the rendering flow is: step 2001, request node information; step 2002, record the playing time corresponding to each historical node; step 2003, the player SDK progress notification triggers the page to render the note DOM, or step 2004, the corresponding note DOM is rendered when a first trigger operation on the node mark is received.
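A minimal sketch of the second rendering mode follows; buildPreview is a hypothetical callback that creates the note preview DOM for a mark.

```typescript
// Sketch: render only the dot marks up front; build each note preview DOM
// lazily on first hover, and only once per mark.
const renderedPreviews = new Set<HTMLElement>();

function lazyPreview(mark: HTMLElement, buildPreview: (m: HTMLElement) => void): void {
  mark.addEventListener('mouseenter', () => {
    if (!renderedPreviews.has(mark)) {               // first hover on this mark
      buildPreview(mark);                            // create the note preview DOM
      renderedPreviews.add(mark);
    }
  });
}
```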
In step 1602, in response to a third trigger operation on the target historical node mark, the video is played from the video playing progress corresponding to the target historical node.
In one possible implementation, besides previewing video pictures and note content, the node marks displayed in the video playing interface can be used to jump to the corresponding video playing progress. Through a third trigger operation, the user can quickly jump to the video playing progress corresponding to the target historical node.
Illustratively, when the terminal screen is a touch screen, the third trigger operation may be a press, click, or drag operation on the target node mark; when the terminal screen is a non-touch screen, it may be hovering an input-device pointer (such as a mouse pointer) over the target node mark, or a click, press, or drag operation on the mark through the input device.
Optionally, the third trigger operation is of the same type as the first trigger operation; for example, on receiving a click on the target node mark, the terminal displays an operation prompt popup containing a note viewing control for triggering the preview window and a progress adjustment control for triggering a progress adjustment, so that the first trigger operation is a click on the target node mark followed by a click on the note viewing control, and the third trigger operation is a click on the target node mark followed by a click on the progress adjustment control. Alternatively, the third and first trigger operations are of different types; for example, the first is a long press and the third is a click. That is, the first, second, and third trigger operations may all differ in type, all be the same, or partially overlap. Embodiments of this application are not limited in this regard.
In step 1603, in response to the dotting operation in the video playing interface, adding a target node mark corresponding to the target node in the video progress bar based on the video playing progress.
In step 1604, in response to the note adding operation to the target node, note content corresponding to the target node is generated.
In step 1605, in response to the first trigger operation on the target node mark, a preview window is displayed at the target node mark, where the preview window includes a video frame thumbnail and note content corresponding to the target node.
For the specific implementation of step 1603 to step 1605, reference may be made to step 201 to step 203, which is not repeated in the embodiment of the present application.
In the embodiment of the application, the dotting data is decoupled from the player SDK: the page rendering tool acquires the historical dotting data directly from the background server through the second interface, without waiting for the player SDK to finish initialization, so that the interface rendering efficiency can be improved.
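As a hedged illustration of this decoupling, the following TypeScript sketch starts the dotting-data request and the player initialization in parallel rather than serially. The function names and the endpoint path are assumptions for illustration; the application does not prescribe them:

```typescript
// Sketch: fetch historical dotting data in parallel with player SDK
// initialization instead of serially after it. All names (initPlayerSdk,
// renderVideoInterface, the /api/dotting endpoint) are hypothetical.
async function initPlayerSdk(videoId: string): Promise<{ videoId: string }> {
  // Stand-in for the real player SDK initialization.
  return { videoId };
}

function renderVideoInterface(player: unknown, dottingData: unknown): void {
  // Stand-in for the page rendering tool.
  console.log('render', player, dottingData);
}

async function setUpPlaybackPage(accountId: string, videoId: string): Promise<void> {
  // Second interface: the page requests dotting data from the background
  // server directly, without waiting for the player SDK.
  const dottingPromise = fetch(
    `/api/dotting?account=${encodeURIComponent(accountId)}&video=${encodeURIComponent(videoId)}`
  ).then(res => res.json());

  // First interface: the SDK reports back once the player is initialized.
  const playerPromise = initPlayerSdk(videoId);

  // Neither task blocks the other; rendering starts once both complete.
  const [dottingData, player] = await Promise.all([dottingPromise, playerPromise]);
  renderVideoInterface(player, dottingData);
}
```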
In connection with the various embodiments described above, FIG. 21 provides a complete flow of video dotting:
step 2101, a video play operation is received.
Step 2102, reading the local storage and determining the life cycle.
Step 2103, request historical dotting data from the background.
Step 2104, if the dotting control is displayed for the first time, prompt information is displayed and the floating layer state is recorded to local storage.
Step 2105, a timer is set to control the display duration of the prompt message.
Step 2106, format the historical dotting data by the data processing module and render the video playing interface.
Step 2107, a trigger operation for a dotting control is received.
In step 2108, the target node mark is added through a CSS animation effect, and the node data is synchronized to the background.
At step 2109, a first trigger operation is received for marking a target node.
Step 2110, displaying the preview window, and setting a timer to control the display duration of the preview window.
Step 2111, a trigger operation for the node edit control is received.
In step 2112, the preview segment is clipped through the player SDK, and the node editing interface is rendered through a Vue component.
In step 2113, the player SDK controls the playing of the preview clip and detects a node adjustment operation.
Step 2114, a trigger operation for the save control is received.
Step 2115, a graphic code is generated through QR Code.
Step 2116, two-way data binding is performed through v-model to obtain the note content.
Step 2117, selectable topic labels are displayed.
In step 2118, a topic addition operation is received.
In step 2119, in response to the node sharing operation, a node sharing graph is generated according to the DOM structure (steps 2115 to 2119 are sketched below).
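A minimal sketch of how steps 2115 to 2119 might fit together, assuming the open-source `qrcode` and `html2canvas` libraries as stand-ins for the graphic-code generation and DOM-to-image steps; the application itself does not mandate any particular library, and the class names are hypothetical:

```typescript
import QRCode from 'qrcode';            // assumed stand-in for the graphic-code step
import html2canvas from 'html2canvas';  // assumed stand-in for DOM-to-image rendering

// Fill the share card DOM with the note content and a graphic code that
// deep-links to the node, then rasterize the card into a shareable image.
async function buildNodeShareImage(
  card: HTMLElement,
  noteText: string,
  nodeUrl: string
): Promise<string> {
  // Step 2116 analogue: the note text is written into the card
  // (in a Vue component this binding would come from v-model).
  card.querySelector('.note-content')!.textContent = noteText;

  // Step 2115 analogue: generate the graphic code from the node's link.
  const qrDataUrl = await QRCode.toDataURL(nodeUrl);
  (card.querySelector('.qr-img') as HTMLImageElement).src = qrDataUrl;

  // Step 2119 analogue: render the card's DOM structure into a PNG data URL.
  const canvas = await html2canvas(card);
  return canvas.toDataURL('image/png');
}
```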
Fig. 22 is a block diagram showing a video dotting device according to an exemplary embodiment of the present application, which includes the following structure.
The node adding module 2201 is configured to respond to a dotting operation in the video playing interface, and add a target node mark corresponding to the target node in the video progress bar based on the video playing progress;
a note adding module 2202, configured to generate note content corresponding to the target node in response to a note adding operation to the target node;
the preview module 2203 is configured to respond to a first trigger operation on the target node mark, and display a preview window at the target node mark, where the preview window includes a video frame thumbnail corresponding to the target node and the note content.
Optionally, the note adding module 2202 is further configured to:
responding to a second triggering operation for the target node mark, displaying the preview window, wherein the preview window comprises the video picture thumbnail and a node editing control;
responding to the triggering operation of the node editing control, and displaying a node editing interface corresponding to the target node;
And determining the note content in response to a text input operation of a text editing area in the node editing interface.
Optionally, the note adding module 2202 is further configured to:
in response to the first trigger operation of the target node mark, displaying part of the note content or all of the note content in the preview window;
and responding to the triggering operation of the node editing control, and displaying the complete note content through the text editing area of the node editing interface.
Optionally, the apparatus further includes a node editing module configured to:
determining a preview period based on the playing time corresponding to the target node and a preview duration (see the sketch after this list);
playing video content corresponding to the preview time period through a video preview area of the node editing interface, wherein a preview progress bar corresponding to the preview time period is displayed in the video preview area, and the preview progress bar contains the target node mark;
and in response to a position adjustment operation on the target node mark in the preview progress bar, updating the target node based on the node position indicated by the position adjustment operation.
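One possible way to derive such a preview period, as a minimal sketch; the symmetric window around the node and the clamping to the video bounds are assumptions, since the embodiment only specifies that the period is based on the node's playing time and a preview duration:

```typescript
// Derive a preview period [startSec, endSec] around the target node.
// The symmetric window and the clamping to the video bounds are assumptions.
function previewPeriod(
  nodeTimeSec: number,
  previewDurationSec: number,
  videoDurationSec: number
): [number, number] {
  const half = previewDurationSec / 2;
  const startSec = Math.max(0, nodeTimeSec - half);
  const endSec = Math.min(videoDurationSec, startSec + previewDurationSec);
  return [startSec, endSec];
}

// Example: a node at 95 s with a 30 s preview in a 100 s video
// yields the clamped period [80, 100].
```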
Optionally, the node editing module is further configured to:
Responding to the triggering operation of the node editing control, and suspending playing of the video content in the video playing interface;
the node editing interface is displayed above the video playing interface in a superposition mode;
in response to triggering operation of a play pause control in the video preview area, pausing the video content of the video preview area;
and responding to closing operation of the node editing interface, closing the node editing interface and recovering video playing of the video playing interface.
Optionally, the node editing module is further configured to:
responding to the triggering operation of the node editing control, suspending playing the video content in the video playing interface and recording the video playing progress;
in the video preview area, playing the video content of the preview period through multiplexing a player instance of the video playing interface (see the sketch after this list);
closing the node editing interface in response to closing operation of the node editing interface;
and playing video from the recorded video playing progress in the video playing interface.
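A minimal sketch of this record-and-restore pattern around a multiplexed player instance. The Player interface here is illustrative, not the player SDK of this application:

```typescript
// Reuse (multiplex) the main player instance for the preview area: record the
// main playback progress first, and restore it when the editor closes.
interface Player {
  currentTime: number;
  pause(): void;
  play(): void;
  seek(timeSec: number): void;
}

function openNodeEditor(player: Player, previewStartSec: number): () => void {
  const savedProgressSec = player.currentTime; // record the playing progress
  player.pause();                              // pause the playing interface
  player.seek(previewStartSec);                // same instance serves the preview
  player.play();

  // The returned closer restores the recorded progress when editing ends.
  return () => {
    player.pause();
    player.seek(savedProgressSec);
    player.play();
  };
}
```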
Optionally, the node editing module is further configured to:
responding to the triggering operation of the sharing control in the node editing interface, displaying a node sharing interface, wherein the node sharing interface comprises a node sharing graph and at least one mode selection control, and the node sharing graph comprises at least one of the following components: the video picture thumbnail, the note content, the moment corresponding to the target node, the video title, the graphic code and the account identifier of the current login account; different mode selection controls correspond to different sharing modes;
and responding to the triggering operation of the target mode selection control, sharing node information according to the target sharing mode and the target sharing object corresponding to the target mode selection control.
Optionally, the node editing module is further configured to:
responding to triggering operation of comment sharing control, and publishing video comments in a comment area of the current video, wherein the video comments comprise the video picture thumbnail and the note content;
responding to the triggering operation of the local storage control, and storing the node sharing graph into a local folder;
and responding to the triggering operation of the communication application control, and sending the node sharing graph to a target terminal of the target sharing object by calling a login account in the communication application, wherein the target terminal is used for acquiring the link of the current video by scanning the graphic code and playing the video from the target node in the video playing interface.
Optionally, the video playing interface includes a dotting control;
the node adding module 2201 is further configured to:
determining a mark moving track in response to triggering operation of the dotting control, wherein the starting point of the mark moving track is the display position of the dotting control or the progress bar position indicated by the dotting control, and the end point of the mark moving track is the progress bar position corresponding to the video playing progress (a sketch of one possible animation follows this list);
And controlling the target node mark to move to the end point along the mark moving track.
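A minimal sketch of one way to animate the mark along such a track, using the standard Web Animations API; the coordinate handling and timing values are illustrative assumptions:

```typescript
// Animate the new node mark from the dotting control's position to its slot
// in the progress bar, using the Web Animations API.
function flyMarkToProgressBar(
  mark: HTMLElement,    // mark already placed at its final progress-bar slot
  controlRect: DOMRect, // display position of the dotting control (start point)
  slotRect: DOMRect     // progress-bar position of the playing progress (end point)
): void {
  const dx = controlRect.left - slotRect.left;
  const dy = controlRect.top - slotRect.top;
  mark.animate(
    [
      { transform: `translate(${dx}px, ${dy}px)`, opacity: 0.6 }, // start point
      { transform: 'translate(0, 0)', opacity: 1 },               // end point
    ],
    { duration: 300, easing: 'ease-out' }
  );
}
```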
Optionally, the device further includes a display module, configured to:
displaying the video playing interface based on historical dotting data, wherein the historical dotting data are used for indicating historical nodes added in a current video by a current login account, and node marks corresponding to the historical nodes are displayed in a video progress bar;
and responding to a third triggering operation for marking the target historical node, and starting to play the video from the video playing progress corresponding to the target historical node.
Optionally, the display module is further configured to:
responding to the video playing operation of the current video, controlling a player Software Development Kit (SDK) to initialize a player, and controlling a page rendering tool to acquire the historical dotting data;
controlling the player SDK to send video playing data to the page rendering tool through a first interface in response to the completion of player initialization;
and controlling the page rendering tool to render and display the video playing interface based on the video playing data and the historical dotting data.
Optionally, the display module is further configured to:
Controlling the page rendering tool to acquire an account identifier of the current login account and a video identifier of the current video;
based on the account identification and the video identification, controlling the page rendering tool to send a data acquisition request to a background server through a second interface, wherein the background server is used for feeding back the historical dotting data based on the data acquisition request;
and controlling the page rendering tool to receive the historical dotting data sent by the background server through the second interface.
Optionally, the display module is further configured to:
controlling the page rendering tool to render a player container layer based on the video playing data, wherein the player container layer is used for displaying video content;
and controlling the page rendering tool to superimpose a rendering dotting mask layer on the player container layer based on the historical dotting data in response to the completion of rendering of the player container layer, wherein the dotting mask layer is used for displaying the node mark added by the current login account (sketched below).
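A minimal sketch of this layering, with the dotting mask stacked above the player container. The class names, the z-index value, and the pointer-events handling are assumptions for illustration:

```typescript
// Once the player container layer has rendered, stack a dotting mask layer
// above it and draw the current account's node marks into the mask.
function mountDottingMask(
  playerContainer: HTMLElement,
  nodeTimesSec: number[],
  videoDurationSec: number
): void {
  const mask = document.createElement('div');
  mask.className = 'dotting-mask';
  mask.style.position = 'absolute';
  mask.style.inset = '0';
  mask.style.zIndex = '10';          // above the player container layer
  mask.style.pointerEvents = 'none'; // clicks still reach the player by default

  for (const t of nodeTimesSec) {
    const dot = document.createElement('span');
    dot.className = 'dot-mark';
    dot.style.pointerEvents = 'auto'; // node marks themselves stay interactive
    dot.style.left = `${(t / videoDurationSec) * 100}%`;
    mask.appendChild(dot);
  }
  playerContainer.appendChild(mask);
}
```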
In summary, the embodiments provide a video dotting manner that enables a user to autonomously mark key points in a video while watching it, so that the key content can be quickly located on later viewing. The user is also supported in adding notes to video nodes to record impressions of the video content. Compared with a unified platform-side dotting mode, personalized video dotting and note adding can improve the autonomy of the user's video dotting, so that the dotting matches the user's own needs; the operation is convenient and can improve user experience. Moreover, since the user can perform dotting and note taking autonomously, the operation cost can be reduced to a certain extent.
Referring to fig. 23, a block diagram illustrating a structure of a terminal 2300 according to an exemplary embodiment of the application is illustrated.
In general, the terminal 2300 includes: a processor 2301 and a memory 2302.
The processor 2301 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 2301 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 2301 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also referred to as a central processing unit (Central Processing Unit, CPU); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 2301 may be integrated with a graphics processor (Graphics Processing Unit, GPU), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 2301 may also include an artificial intelligence (Artificial Intelligence, AI) processor for processing computing operations related to machine learning.
Memory 2302 may include one or more computer-readable storage media, which may be tangible and non-transitory. Memory 2302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2302 is used to store at least one instruction for execution by processor 2301 to implement the methods provided by embodiments of the present application.
In some embodiments, the terminal 2300 may further optionally include: peripheral interface 2303.
Peripheral interface 2303 may be used to connect at least one Input/Output (I/O) related peripheral device to the processor 2301 and the memory 2302. In some embodiments, the processor 2301, the memory 2302 and the peripheral interface 2303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2301, the memory 2302 and the peripheral interface 2303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when loaded and executed by a processor, implements the steps of the video dotting method described in the above embodiments.
According to one aspect of the present application there is provided a computer program product comprising a computer program which, when executed by a processor, performs the steps of the video dotting method provided in various alternative implementations of the above aspects.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable storage medium. Computer-readable storage media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, account identification, note content, historical node data and the like involved in the present application are all acquired under the condition of sufficient authorization.
The foregoing describes preferred embodiments of the present application and is not intended to limit the application; the scope of protection of the application is defined by the appended claims.

Claims (17)

1. A method of video dotting, the method comprising:
responding to dotting operation in a video playing interface, and adding a target node mark corresponding to a target node in a video progress bar based on video playing progress;
responding to the note adding operation of the target node, and generating note content corresponding to the target node;
And responding to a first triggering operation of the target node mark, and displaying a preview window at the target node mark, wherein the preview window comprises a video picture thumbnail corresponding to the target node and the note content.
2. The method of claim 1, wherein generating the note content corresponding to the target node in response to the note adding operation to the target node comprises:
responding to a second triggering operation for the target node mark, displaying the preview window, wherein the preview window comprises the video picture thumbnail and a node editing control;
responding to the triggering operation of the node editing control, and displaying a node editing interface corresponding to the target node;
and determining the note content in response to a text input operation of a text editing area in the node editing interface.
3. The method of claim 2, wherein the displaying a preview window at the target node mark in response to the first trigger operation on the target node mark comprises:
in response to the first trigger operation of the target node mark, displaying part of the note content or all of the note content in the preview window;
The method further comprises the steps of:
and responding to the triggering operation of the node editing control, and displaying the complete note content through the text editing area of the node editing interface.
4. The method according to claim 2, wherein after the node editing interface corresponding to the target node is displayed in response to the triggering operation of the node editing control, the method further comprises:
determining a preview period based on the playing time corresponding to the target node and a preview duration;
playing video content corresponding to the preview time period through a video preview area of the node editing interface, wherein a preview progress bar corresponding to the preview time period is displayed in the video preview area, and the preview progress bar contains the target node mark;
and in response to a position adjustment operation on the target node mark in the preview progress bar, updating the target node based on the node position indicated by the position adjustment operation.
5. The method of claim 4, wherein the displaying the node edit interface corresponding to the target node in response to the triggering operation of the node edit control comprises:
Responding to the triggering operation of the node editing control, and suspending playing of the video content in the video playing interface;
the node editing interface is displayed above the video playing interface in a superposition mode;
after the video content corresponding to the preview period is played through the video preview area of the node editing interface, the method further includes:
in response to triggering operation of a play pause control in the video preview area, pausing the video content of the video preview area;
and responding to closing operation of the node editing interface, closing the node editing interface and recovering video playing of the video playing interface.
6. The method of claim 5, wherein the pausing playing the video content in the video playing interface in response to the triggering operation of the node editing control comprises:
responding to the triggering operation of the node editing control, suspending playing the video content in the video playing interface and recording the video playing progress;
the playing the video content corresponding to the preview period through the video preview area of the node editing interface comprises the following steps:
in the video preview area, playing the video content of the preview period by multiplexing a player instance of the video playing interface;
the closing the node editing interface and resuming the video playing of the video playing interface in response to the closing operation of the node editing interface comprises:
closing the node editing interface in response to closing operation of the node editing interface;
and playing video from the recorded video playing progress in the video playing interface.
7. The method according to claim 2, wherein after the node editing interface corresponding to the target node is displayed in response to the triggering operation of the node editing control, the method further comprises:
responding to the triggering operation of the sharing control in the node editing interface, displaying a node sharing interface, wherein the node sharing interface comprises a node sharing graph and at least one mode selection control, and the node sharing graph comprises at least one of the following components: the video picture thumbnail, the note content, the moment corresponding to the target node, the video title, the graphic code and the account identifier of the current login account; different mode selection controls correspond to different sharing modes;
and responding to the triggering operation of the target mode selection control, sharing node information according to the target sharing mode and the target sharing object corresponding to the target mode selection control.
8. The method of claim 7, wherein the sharing node information according to the target sharing mode and the target sharing object corresponding to the target mode selection control in response to the triggering operation of the target mode selection control comprises:
responding to triggering operation of comment sharing control, and publishing video comments in a comment area of the current video, wherein the video comments comprise the video picture thumbnail and the note content;
responding to the triggering operation of the local storage control, and storing the node sharing graph into a local folder;
and responding to the triggering operation of the communication application control, and sending the node sharing graph to a target terminal of the target sharing object by calling a login account in the communication application, wherein the target terminal is used for acquiring the link of the current video by scanning the graphic code and playing the video from the target node in the video playing interface.
9. The method according to any one of claims 1 to 8, wherein the video playing interface comprises a dotting control;
The responding to the dotting operation in the video playing interface adds a target node mark corresponding to a target node in the video progress bar based on the video playing progress, and the responding comprises the following steps:
determining a mark moving track in response to triggering operation of the dotting control, wherein the starting point of the mark moving track is the display position of the dotting control or the progress bar position indicated by the dotting control, and the end point of the mark moving track is the progress bar position corresponding to the video playing progress;
and controlling the target node mark to move to the end point along the mark moving track.
10. The method according to any one of claims 1 to 8, wherein, in response to the dotting operation in the video playing interface, before adding the target node mark corresponding to the target node in the video progress bar based on the video playing progress, the method further comprises:
displaying the video playing interface based on historical dotting data, wherein the historical dotting data are used for indicating historical nodes added in a current video by a current login account, and node marks corresponding to the historical nodes are displayed in a video progress bar;
the method further comprises the steps of:
and responding to a third triggering operation for marking the target historical node, and starting to play the video from the video playing progress corresponding to the target historical node.
11. The method of claim 10, wherein displaying a video playback interface based on the historical dotting data comprises:
responding to the video playing operation of the current video, controlling a player Software Development Kit (SDK) to initialize a player, and controlling a page rendering tool to acquire the historical dotting data;
controlling the player SDK to send video playing data to the page rendering tool through a first interface in response to the completion of player initialization;
and controlling the page rendering tool to render and display the video playing interface based on the video playing data and the historical dotting data.
12. The method of claim 11, wherein the controlling the page rendering tool to obtain the historical dotting data comprises:
controlling the page rendering tool to acquire an account identifier of the current login account and a video identifier of the current video;
based on the account identification and the video identification, controlling the page rendering tool to send a data acquisition request to a background server through a second interface, wherein the background server is used for feeding back the historical dotting data based on the data acquisition request;
And controlling the page rendering tool to receive the historical dotting data sent by the background server through the second interface.
13. The method of claim 11, wherein the controlling the page rendering tool to render and display the video playback interface based on the video playback data and the historical dotting data comprises:
controlling the page rendering tool to render a player container layer based on the video playing data, wherein the player container layer is used for displaying video content;
and controlling the page rendering tool to superimpose a rendering dotting mask layer on the player container layer based on the historical dotting data in response to the completion of rendering of the player container layer, wherein the dotting mask layer is used for displaying the node mark added by the current login account.
14. A video dotting device, said device comprising:
the node adding module is used for responding to dotting operation in the video playing interface and adding a target node mark corresponding to a target node in the video progress bar based on the video playing progress;
the note adding module is used for responding to the note adding operation of the target node and generating note content corresponding to the target node;
And the preview module is used for responding to the first triggering operation of the target node mark, displaying a preview window at the target node mark, wherein the preview window comprises a video picture thumbnail corresponding to the target node and the note content.
15. A terminal comprising a processor and a memory; the memory having stored therein a computer program, characterized in that the processor when executing the computer program performs the steps of the video dotting method as claimed in any one of claims 1 to 13.
16. A computer readable storage medium having stored thereon a computer program, which when loaded and executed by a processor performs the steps of the video dotting method as claimed in any one of claims 1 to 13.
17. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the video dotting method as claimed in any one of claims 1 to 13.
CN202210400152.5A 2022-04-15 2022-04-15 Video dotting method, device, terminal, storage medium and program product Pending CN116962793A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210400152.5A CN116962793A (en) 2022-04-15 2022-04-15 Video dotting method, device, terminal, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210400152.5A CN116962793A (en) 2022-04-15 2022-04-15 Video dotting method, device, terminal, storage medium and program product

Publications (1)

Publication Number Publication Date
CN116962793A true CN116962793A (en) 2023-10-27

Family

ID=88453496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210400152.5A Pending CN116962793A (en) 2022-04-15 2022-04-15 Video dotting method, device, terminal, storage medium and program product

Country Status (1)

Country Link
CN (1) CN116962793A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40099440

Country of ref document: HK