CN116489465A - Method and device for generating video notes for video and electronic equipment - Google Patents

Method and device for generating video notes for video and electronic equipment

Info

Publication number
CN116489465A
Authority
CN
China
Prior art keywords
video
note
target object
notes
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310403297.5A
Other languages
Chinese (zh)
Inventor
于洋 (Yu Yang)
李娅娅 (Li Yaya)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202310403297.5A priority Critical patent/CN116489465A/en
Publication of CN116489465A publication Critical patent/CN116489465A/en
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47205: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4316: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/84: Generation or processing of descriptive data, e.g. content descriptors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a method, an apparatus, and an electronic device for generating video notes for videos, belonging to the field of computer technology. The method comprises: displaying, on a playing interface of a video, a target control for generating a video note for the video; displaying a video note editing area on the playing interface in response to a triggering operation on the target control, so that a target object can conveniently perform various editing operations in the video note editing area to generate a video note of the video; and storing the video note in a video note set of the target object, so that the target object can later view the video note in the video note set through an object information interface. In this way, video notes can be generated while watching a video, improving human-computer interaction efficiency and user experience.

Description

Method and device for generating video notes for video and electronic equipment
Technical Field
The disclosure relates to the field of computer technology, and in particular relates to a method and device for generating video notes for videos and electronic equipment.
Background
With the rapid development of internet technology, watching videos has become a common form of entertainment. For video creators in particular, videos of interest encountered while browsing can serve as creation material, helping them enrich their own content.
In the related art, while watching videos, users typically add videos that interest them, or that could help their own creation, to a favorites list. When they later want to create a video, they search the favorites list for the relevant videos and rewatch them to learn their creation highlights.
However, with this approach, a user's favorites list often accumulates a large number of videos, many of which were not added for creation purposes, and the user typically has to rewatch videos repeatedly to memorize their creation highlights. As a result, human-computer interaction efficiency is low and user experience is poor.
Disclosure of Invention
The present disclosure provides a method, an apparatus, and an electronic device for generating video notes for videos, which enable video notes to be generated while watching a video and improve human-computer interaction efficiency and user experience. The technical scheme of the present disclosure is as follows.
According to a first aspect of embodiments of the present disclosure, there is provided a method of generating video notes for a video, the method comprising:
displaying a target control on a playing interface of a video, wherein the target control is used for generating video notes for the video;
displaying, in response to a triggering operation performed by a target object on the target control, a video note editing area for the video on the playing interface;
generating a video note of the video in response to an editing operation performed by the target object in the video note editing area, and storing the video note in a video note set of the target object;
and displaying a first control on an object information interface of the target object, wherein the first control is used for viewing the video note set, and displaying the video note set in response to a triggering operation performed by the target object on the first control.
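The four steps above can be sketched as a minimal client-side flow. This is an illustrative sketch only: the class, method, and field names below (`NoteClient`, `take_note`, etc.) are hypothetical and not part of the disclosed implementation.

```python
# Hypothetical sketch of the claimed four-step flow: show a note control on the
# playing interface, open an editing area, generate and store a note, then
# view the note set from the object information interface.

class NoteClient:
    def __init__(self):
        self.note_set = []            # the target object's video note set
        self.editing_area_open = False

    def show_play_interface(self, video_id):
        # Step 1: the playing interface displays a "take note" target control.
        return {"video": video_id,
                "controls": ["like", "comment", "share", "take_note"]}

    def trigger_target_control(self):
        # Step 2: a video note editing area is shown on the playing interface.
        self.editing_area_open = True

    def generate_note(self, video_id, content, tags):
        # Step 3: an editing operation produces a note, stored in the note set.
        note = {"video": video_id, "content": content, "tags": tags}
        self.note_set.append(note)
        return note

    def view_note_set(self):
        # Step 4: triggering the first control displays the stored note set.
        return list(self.note_set)

client = NoteClient()
ui = client.show_play_interface("v123")
assert "take_note" in ui["controls"]
client.trigger_target_control()
client.generate_note("v123", "novel transitions at 0:42", ["transitions"])
print(client.view_note_set())
```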
With the above method, a target control for generating video notes for a video is displayed on the video's playing interface, and a video note editing area is displayed on the playing interface in response to a triggering operation on the target control, making it convenient for the target object to perform various editing operations in the video note editing area to generate a video note of the video. The video note is stored in the target object's video note set, so that the target object can later view it through the object information interface. Video notes can thus be generated while watching a video, improving human-computer interaction efficiency and user experience.
In some embodiments, the displaying, on the playing interface, a video note editing area for the video in response to a triggering operation of the target control by the target object includes:
playing, in response to the triggering operation performed by the target object on the target control, the video in a video playing area of the playing interface, and displaying the video note editing area on the playing interface, wherein the video playing area and the video note editing area do not overlap.
In this way, the video can be watched and the video note edited at the same time, which effectively improves human-computer interaction efficiency and further improves user experience.
In some embodiments, the generating the video note of the video in response to the editing operation performed by the target object in the video note editing area includes:
acquiring a note tag of the video note in response to a tag editing operation performed by the target object in the video note editing area.
Editing note tags facilitates the subsequent classification of video notes, so that the target object can quickly locate a video note by its note tag, improving human-computer interaction efficiency.
In some embodiments, the obtaining, in response to a tag editing operation performed by the target object in the video note editing area, a note tag of the video note includes at least one of:
displaying at least one first note tag in the video note editing area, and using a target note tag among the at least one first note tag as the note tag of the video note in response to a triggering operation performed by the target object on the target note tag;
and acquiring the note tag of the video note in response to a tag input operation performed by the target object in the video note editing area.
By displaying at least one selectable note tag in the video note editing area, the target object can directly select a note tag without typing it manually, which simplifies the operation flow and improves human-computer interaction efficiency. Through the tag input operation, the target object can instead enter a note tag according to their own needs, meeting personalized demands and improving user experience.
In some embodiments, the at least one first note tag comprises at least one of:
a default note tag;
a note tag of a historical video note generated for the video by an object other than the target object;
a note tag generated based on the video content of the video;
a note tag generated for the video by the object that published the video;
and an existing note tag in the video note set.
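The candidate tags listed above could be assembled into the set of first note tags shown in the editing area, for example by merging the sources in order and dropping duplicates. A minimal sketch, assuming all function and parameter names are hypothetical:

```python
# Hypothetical sketch: assembling the candidate first note tags from the five
# sources listed above. De-duplicates while preserving source priority order.

def candidate_note_tags(default_tags, other_objects_tags, content_tags,
                        publisher_tags, existing_set_tags, limit=10):
    seen, candidates = set(), []
    for source in (default_tags, other_objects_tags, content_tags,
                   publisher_tags, existing_set_tags):
        for tag in source:
            if tag not in seen:
                seen.add(tag)
                candidates.append(tag)
    return candidates[:limit]

tags = candidate_note_tags(
    default_tags=["funny", "tutorial"],
    other_objects_tags=["transitions"],
    content_tags=["dance", "funny"],          # duplicate "funny" is dropped
    publisher_tags=["music sync"],
    existing_set_tags=["tutorial", "food"],   # duplicate "tutorial" is dropped
)
print(tags)  # ['funny', 'tutorial', 'transitions', 'dance', 'music sync', 'food']
```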
In some embodiments, the method further comprises at least one of:
displaying the video note in response to a viewing operation performed by the target object on the video note;
displaying the edited video note in response to an editing operation performed by the target object on the video note;
and deleting the video note from the video note set in response to a deleting operation performed by the target object on the video note.
In some embodiments, the displaying the video note in response to a viewing operation of the video note by the target object includes:
playing, in response to the viewing operation performed by the target object on the video note, the video in a first area, and displaying the video note in a second area, wherein the first area and the second area do not overlap.
In this way, the target object can watch the corresponding video while viewing the video note, improving human-computer interaction efficiency.
In some embodiments, the video note set includes a plurality of sub video note sets, each sub video note set having a different note tag, and the displaying the video note set in response to the triggering operation performed by the target object on the first control includes:
displaying, in response to the triggering operation performed by the target object on the first control, the plurality of sub video note sets according to the note tags corresponding to the sub video note sets.
In this way, the video notes are displayed by category according to their note tags, making it faster for the target object to find a given video note, that is, improving human-computer interaction efficiency.
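Partitioning the note set into tag-keyed sub sets, as described above, can be sketched as a simple grouping step. All names and the `"untagged"` fallback are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch: partitioning the video note set into sub video note
# sets keyed by note tag, as used when the first control is triggered.
from collections import defaultdict

def sub_note_sets(note_set):
    groups = defaultdict(list)
    for note in note_set:
        # A note with no tag falls into an assumed "untagged" sub set.
        for tag in note.get("tags") or ["untagged"]:
            groups[tag].append(note)
    return dict(groups)

notes = [
    {"video": "v1", "tags": ["transitions"]},
    {"video": "v2", "tags": ["transitions", "music"]},
    {"video": "v3", "tags": []},
]
groups = sub_note_sets(notes)
print(sorted(groups))  # ['music', 'transitions', 'untagged']
```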
According to a second aspect of embodiments of the present disclosure, there is provided an apparatus for generating video notes for video, the apparatus comprising:
a first display unit configured to perform displaying a target control on a play interface of a video, the target control being used to generate a video note for the video;
a second display unit configured to display, in response to a triggering operation performed by a target object on the target control, a video note editing area for the video on the playing interface;
a generation unit configured to generate a video note of the video in response to an editing operation performed by the target object in the video note editing area, and to store the video note in a video note set of the target object;
and a third display unit configured to display a first control on an object information interface of the target object, wherein the first control is used for viewing the video note set, and to display the video note set in response to a triggering operation performed by the target object on the first control.
In some embodiments, the second display unit is configured to perform:
playing, in response to the triggering operation performed by the target object on the target control, the video in a video playing area of the playing interface, and displaying the video note editing area on the playing interface, wherein the video playing area and the video note editing area do not overlap.
In some embodiments, the generating unit is configured to perform:
acquiring a note tag of the video note in response to a tag editing operation performed by the target object in the video note editing area.
In some embodiments, the generating unit is configured to perform:
displaying at least one first note tag in the video note editing area, and using a target note tag among the at least one first note tag as the note tag of the video note in response to a triggering operation performed by the target object on the target note tag;
and acquiring the note tag of the video note in response to a tag input operation performed by the target object in the video note editing area.
In some embodiments, the at least one first note tag comprises at least one of:
a default note tag;
a note tag of a historical video note generated for the video by an object other than the target object;
a note tag generated based on the video content of the video;
a note tag generated for the video by the object that published the video;
and an existing note tag in the video note set.
In some embodiments, the apparatus further comprises at least one of:
a fourth display unit configured to display the video note in response to a viewing operation performed by the target object on the video note;
a fifth display unit configured to display the edited video note in response to an editing operation performed by the target object on the video note;
and a deleting unit configured to delete the video note from the video note set in response to a deleting operation performed by the target object on the video note.
In some embodiments, the fourth display unit is configured to perform:
playing, in response to the viewing operation performed by the target object on the video note, the video in a first area, and displaying the video note in a second area, wherein the first area and the second area do not overlap.
In some embodiments, the video note set includes a plurality of sub video note sets, each sub video note set having a different note tag, and the third display unit is configured to perform:
displaying, in response to the triggering operation performed by the target object on the first control, the plurality of sub video note sets according to the note tags corresponding to the sub video note sets.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising:
one or more processors;
a memory for storing program code executable by the one or more processors;
wherein the one or more processors are configured to execute the program code to implement the above method of generating video notes for a video.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing program code which, when executed by a processor of an electronic device, enables the electronic device to perform the above method of generating video notes for a video.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the above-described method of generating video notes for video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic illustration of an implementation environment for a method of generating video notes for video provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of generating video notes for video provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart of another method of generating video notes for video provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a playback interface provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a video note editing area provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of generating video notes provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of viewing video notes provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an object information interface provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a display of a set of video notes provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of another display video note set provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a display of a set of video notes provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of editing video notes provided by an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of deleting video notes provided by an embodiment of the present disclosure;
FIG. 14 is a block diagram of an apparatus for generating video notes for video provided by an embodiment of the present disclosure;
fig. 15 is a block diagram of a terminal according to an embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present disclosure are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, video notes and the like referred to in the embodiments of the present disclosure are all acquired with sufficient authorization.
Fig. 1 is a schematic view of an implementation environment of a method for generating video notes for video according to an embodiment of the disclosure, referring to fig. 1, the implementation environment includes: a terminal 101 and a server 102. The terminal 101 and the server 102 are directly or indirectly connected through wired or wireless communication, which is not limited by the embodiment of the present disclosure.
The terminal 101 is at least one of a smart phone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, and a wireless terminal. Terminal 101 may be referred to generally as one of a plurality of terminals; embodiments of the present disclosure are illustrated with terminal 101 only, and those skilled in the art will recognize that the number of terminals may be greater or fewer. An application providing functions such as video browsing and video publishing is installed and runs on the terminal 101. For example, the application may take the form of a web application, a mini program, or a client, to which the present disclosure is not limited; a mini program is a program that runs inside another application. Illustratively, the terminal 101 is a terminal used by a target object, and the application running on the terminal 101 takes the form of a client in which the account of the target object is logged in; through the client, the target object can browse short videos, publish short videos of their own creation, and so on.
The server 102 is an independent physical server, or a server cluster or a distributed file system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Network, content delivery networks), basic cloud computing services such as big data and artificial intelligence platforms, and the like. The server 102 is used to provide background services for applications running on the terminal 101. Illustratively, taking an application program running on the terminal 101 as an example in the form of a client, the target object can browse the video through the client, and the terminal 101 sends a corresponding interactive data update request to the server 102 in response to various interactive operations (such as praise, comment, collection, etc.) performed on the video browsed by the target object, so that the server 102 updates the interactive state of the corresponding video. In addition, in the embodiments of the present disclosure, the target object is also able to generate video notes for the video being browsed through the client. Illustratively, the terminal 101 sends a video note generated for a video by a target object to the server 102 to cause the server 102 to associate the video note with an account of the target object, thereby storing the video note in a set of video notes for the target object. It should be noted that the examples herein are illustrative only, and that the server 102 may also include other functional servers to provide more comprehensive and diverse services.
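The note upload from terminal 101 to server 102 described above can be sketched as a minimal request payload. The endpoint shape and every field name below are assumptions for illustration only, not the disclosed protocol:

```python
import json

# Hypothetical sketch of the payload terminal 101 might send to server 102
# when a target object generates a video note; all field names are assumed.

def build_note_upload(account_id, video_id, content, tags):
    return json.dumps({
        "account_id": account_id,   # lets the server associate the note
        "video_id": video_id,       # the video the note was taken on
        "content": content,
        "tags": tags,
    })

payload = build_note_upload("user_1", "v123", "great transitions", ["transitions"])
print(json.loads(payload)["video_id"])  # v123
```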
The following describes a method for generating video notes for video provided in an embodiment of the present disclosure.
Fig. 2 is a flow chart of a method of generating video notes for video provided by an embodiment of the present disclosure. As shown in fig. 2, the method is applied to a terminal, and illustratively the method includes steps 201 to 204 described below.
In step 201, the terminal displays a target control on a playing interface of a video, where the target control is used to generate a video note for the video.
In the embodiment of the disclosure, an application supporting functions such as video browsing and video publishing runs on the terminal, and the account of a target object is logged in to the application. The terminal displays an application interface of the application and, in response to a video browsing operation performed by the target object on the application interface, displays a playing interface of the video with a target control on it. The target control may take the form of a button, a floating window, or the like, to which the present disclosure is not limited. The video browsing mode is likewise not limited in this disclosure and may be single-column browsing, double-column browsing, or the like.
A video note is used to record information related to a video, such as a content summary of the video, the video's creation highlights (e.g., densely packed comedic beats, novel transitions, music sync points), reflections prompted by the video's content, and the like, to which the present disclosure is not limited.
In step 202, the terminal displays, in response to a triggering operation performed by the target object on the target control, a video note editing area for the video on the playing interface.
In the embodiment of the present disclosure, the video note editing area provides an editing function for video notes. Illustratively, the terminal generates a video note of the video in response to various editing operations performed by the target object in the video note editing area; for example, the target object inputs note content in the video note editing area, adds a note tag to the video note, and the like, to which the present disclosure is not limited. In addition, while the terminal displays the video note editing area on the playing interface, the video remains playable; that is, the target object can take notes on the video while watching it through the playing interface.
In step 203, the terminal generates a video note of the video in response to the editing operation performed by the target object in the video note editing area, and stores the video note in the video note set of the target object.
In the embodiment of the disclosure, the video note set of the target object stores the video notes generated by the target object, so that the target object can later view those notes in the video note set, improving human-computer interaction efficiency. For example, when the note content of a video note includes a video's creation highlights and the target object wants to create a video of their own, the target object can learn the relevant creation highlights by consulting the video note, which enriches their creation content, improves the quality of the created video, and further improves the video conversion rate. The terminal sends the video note to a server, and the server associates the video note with the account of the target object, thereby storing the video note in the target object's video note set.
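The server-side step described above, associating a received note with the target object's account so it lands in that account's note set, could look like the following. The storage layout and all names are illustrative assumptions:

```python
# Hypothetical sketch of the server-side step in 203: associating a received
# video note with the target object's account so it is stored in that
# object's video note set. Field names and storage layout are assumed.

class NoteStore:
    def __init__(self):
        self._by_account = {}   # account_id -> list of notes (the note set)

    def save_note(self, account_id, note):
        # Associate the note with the account, creating the set if needed.
        self._by_account.setdefault(account_id, []).append(note)

    def note_set(self, account_id):
        return self._by_account.get(account_id, [])

store = NoteStore()
store.save_note("user_1", {"video": "v9", "content": "great music sync"})
print(len(store.note_set("user_1")))  # 1
print(store.note_set("user_42"))      # []
```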
In step 204, the terminal displays a first control on an object information interface of the target object, where the first control is used for viewing the video note set, and displays the video note set in response to a triggering operation of the target object on the first control.
In the embodiment of the present disclosure, the object information interface of the target object may also be understood as a personal homepage of the target object. The terminal displays an application interface of the application program, and displays an object information interface of the target object in response to an object information interface viewing operation of the target object on the application interface, and displays a first control on the object information interface, so that the terminal can display a video note set in response to a triggering operation of the first control. Wherein the first control may take the form of a button, a floating window, or the like, to which the present disclosure is not limited.
In summary, in the method for generating video notes for a video provided in the embodiments of the present disclosure, a target control for generating video notes for the video is displayed on the video's playing interface, and a video note editing area is displayed on the playing interface in response to a triggering operation on the target control, making it convenient for the target object to perform various editing operations in the video note editing area to generate a video note of the video. The video note is stored in the target object's video note set, so that the target object can view it in the video note set through the object information interface. Video notes can thus be generated while watching a video, improving human-computer interaction efficiency and user experience.
Having briefly described the method for generating video notes for video provided by the present disclosure through fig. 2, embodiments of the present disclosure are described in detail below with reference to fig. 3.
FIG. 3 is a flow chart of another method of generating video notes for video provided by an embodiment of the present disclosure. As shown in fig. 3, the method is applied to a terminal, and illustratively the method includes the following steps 301 to 308.
In step 301, the terminal displays a target control on a playing interface of a video, where the target control is used to generate a video note for the video.
In the embodiment of the present disclosure, this step is the same as the above step 201, and thus will not be repeated.
In some embodiments, the terminal displays a first prompt message on the playing interface, where the first prompt message is used to indicate the function corresponding to the target control. For example, when the application program running on the terminal has received a version update that introduces the function corresponding to the target control, the terminal displays the first prompt message near the target control on the playing interface, so that the target object learns about the new function of the application in time, which improves man-machine interaction efficiency.
Referring to fig. 4, fig. 4 is a schematic diagram of a playing interface according to an embodiment of the present disclosure. As shown in fig. 4, a video is being played on the playing interface 400, and a plurality of controls for interacting with the video, such as a like control, a comment control, and a share control, are displayed on the playing interface 400, to which the present disclosure is not limited. Illustratively, taking the comment control as an example, the terminal displays a comment editing area for the video in response to a trigger operation of the target object on the comment control; after the target object inputs comment content, the terminal sends the comment content to a server, and the server updates the interaction state of the video based on the comment content, that is, associates the comment content with the object that published the video. The plurality of controls also includes a target control 401, namely "note taking" (for example, in the form of a button, optionally configured with dynamic special effects, etc., which is not limited), for generating video notes for the video. In addition, a first prompt message 402 is displayed on the playing interface 400 to inform the target object that the application has been updated and now provides a brand-new "note taking" function, which improves man-machine interaction efficiency. It should be noted that the playing interface shown in fig. 4 is only illustrative, and the number, layout, and form of the controls on the playing interface are not limited by the present disclosure.
In step 302, the terminal responds to the triggering operation of the target object on the target control, and a video note editing area for the video is displayed on the playing interface.
In the embodiment of the present disclosure, in response to a trigger operation of the target object on the target control, the terminal plays the video in a video playing area of the playing interface and displays the video note editing area on the playing interface, where the video playing area and the video note editing area do not overlap; that is, the target object can take notes in the video note editing area while the video continues to play in the video playing area. For example, the playing interface is divided into an upper area and a lower area: the upper area is the video playing area for playing the video, and the lower area is the video note editing area providing the video note editing function. In this way, watching the video and editing video notes proceed side by side, which effectively improves man-machine interaction efficiency and thus user experience. Of course, in other embodiments, the terminal can instead superimpose the video note editing area on the playing interface, for example in the form of a floating window; that is, the terminal pauses the video in response to the trigger operation of the target object on the target control and displays the video note editing area over the playing interface. It should be noted that the present disclosure does not limit how the terminal displays the video note editing area, which can be configured as needed in practical applications.
In some embodiments, the video note editing area provides editing functions for both the note content and the note tag of a video note, so that the target object can edit the note content and the note tag through the video note editing area to meet personalized requirements. Editing note tags also makes it convenient to classify video notes later, so that the target object can quickly find the corresponding video note through its note tag, which improves man-machine interaction efficiency.
Referring to fig. 5, fig. 5 is a schematic diagram of a video note editing area provided by an embodiment of the present disclosure. As shown in fig. 5, in response to a trigger operation of the target object on the target control 401 on the playing interface 400, the terminal displays a video playing area 501 and a video note editing area 502 on the playing interface 400, where the two areas do not overlap. The video is played in the video playing area 501, and video note editing functions, such as editing functions for note content and note tags, are provided in the video note editing area 502.
In step 303, the terminal generates a video note of the video in response to the editing operation performed by the target object in the video note editing area, and stores the video note in the video note set of the target object.
In the embodiment of the present disclosure, the terminal generating the video note of the video in response to the editing operation performed by the target object in the video note editing area includes the following: the terminal obtains the note content of the video note in response to a note input operation performed by the target object in the video note editing area, where the note input operation includes keyboard input, voice input, and the like, and the note content may be text, pictures, stickers, and the like, to which the present disclosure is not limited.
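The multi-type note content described above can be sketched as a small validation step on the editing input. This is an illustrative assumption, not the disclosed implementation; the function name, the `(type, payload)` pair shape, and the allowed-type set are all hypothetical:

```python
# Hypothetical set of supported note content types, mirroring the
# "text, pictures, stickers, and the like" enumeration above.
ALLOWED_CONTENT_TYPES = {"text", "image", "sticker"}

def build_note_content(items):
    """Collect edited items (each a (type, payload) pair) into structured
    note content, rejecting unsupported content types."""
    content = []
    for kind, payload in items:
        if kind not in ALLOWED_CONTENT_TYPES:
            raise ValueError(f"unsupported note content type: {kind}")
        content.append({"type": kind, "payload": payload})
    return content
```

Voice input would typically be transcribed to a `"text"` item before reaching such a step.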
In some embodiments, the terminal obtains a note tag of the video note in response to a tag editing operation performed by the target object in the video note editing area. Illustratively, the process of obtaining the note tag by the terminal includes at least one of:
First, the terminal displays at least one first note tag in the video note editing area, and in response to a trigger operation of the target object on a target note tag among the at least one first note tag, uses the target note tag as the note tag of the video note. Because the terminal displays selectable note tags in the video note editing area, the target object can directly select the corresponding note tag without typing it manually, which simplifies the operation flow and improves man-machine interaction efficiency.
Illustratively, the at least one first note tag includes at least one of: a default note tag; a note tag of a historical video note generated for the video by an object other than the target object; a note tag generated based on the video content of the video; a note tag generated for the video by the object that published the video; a note tag already existing in the video note set of the target object (that is, a note tag the target object has already created); and so on, to which the present disclosure is not limited.
Second, the terminal obtains the note tag of the video note in response to a tag input operation performed by the target object in the video note editing area. That is, the target object can type in a note tag according to its own needs, which meets personalized requirements and improves user experience.
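The two tag-acquisition paths above (selecting a displayed suggestion, or typing a custom tag) can be sketched as a single resolution function. This is an illustrative sketch under assumed names; the precedence between the two paths and the fallback to a default tag are assumptions, not stated in the disclosure:

```python
def resolve_note_tag(selected=None, custom_input=None, suggested=("default",)):
    """Resolve the note tag for a video note: a tapped suggestion wins;
    otherwise a custom tag typed by the user; otherwise the default tag."""
    if selected is not None:
        # Path one: the target object tapped one of the displayed first
        # note tags, so it must actually be among the suggestions.
        if selected not in suggested:
            raise ValueError("selected tag must be one of the suggestions")
        return selected
    if custom_input and custom_input.strip():
        # Path two: the target object typed a tag of its own.
        return custom_input.strip()
    return "default"
```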
In some embodiments, after the terminal generates the video note of the video and stores the video note in the video note set of the target object, a second prompt message is displayed on the playing interface, where the second prompt message is used to prompt that the video note has been successfully stored, so that the target object can timely learn whether the current video note has been successfully stored, and user experience is improved.
The above step 303 is illustrated with reference to fig. 6. Fig. 6 is a schematic diagram of generating a video note provided by an embodiment of the present disclosure. As shown in fig. 6, a video note editing area 502 is displayed on the playing interface 400, and at least one selectable first note tag is displayed in the video note editing area 502: "default", "joke", "transition", and "music", together with a "custom" control, so that the target object can select the note tag of the video note or manually enter one as desired. In addition, in response to a note input operation performed by the target object in the video note editing area 502, the terminal obtains the note content of the video note, "packed with jokes", and in response to a confirmation operation on the video note (such as a trigger operation on the "done" control), generates the video note of the video and displays a second prompt message 601, "Saved successfully, view it on your personal homepage", on the playing interface 400. It should be noted that the aforementioned "personal homepage" is the object information interface of the target object, through which the target object can view the generated video notes; this process is described in the subsequent embodiments and is not repeated here.
Through the steps 301 to 303, for the video being played, a function of taking notes for the video is provided, so that the terminal can respond to various operations of the target object under the condition that the target object views a certain video, generate video notes of the video, and store the video notes into a video note set of the target object, so that the target object can conveniently view corresponding video notes subsequently.
In addition, when the target object browses another video through the application program running on the terminal, if the similarity between the video content of the other video and the video content of this video meets a condition, the terminal can display a target prompt message on the playing interface of the other video, where the target prompt message prompts that a video note exists for a similar video, and the terminal displays the video note of this video in response to a trigger operation on the target prompt message. In this way, the target object learns about the corresponding video note in time when browsing similar videos, which improves man-machine interaction efficiency.
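The disclosure does not specify the similarity measure or its threshold condition. As one hypothetical stand-in, the "similarity meets a condition" check can be sketched with Jaccard similarity over content tags; the function name, the tag-based representation, and the threshold value are all assumptions:

```python
def should_prompt_similar_note(current_tags, noted_tags, threshold=0.5):
    """Decide whether to surface a 'video note exists for a similar video'
    prompt, using Jaccard similarity over content tags as a stand-in for
    whatever content-similarity measure the server actually applies."""
    a, b = set(current_tags), set(noted_tags)
    if not a or not b:
        return False
    # Jaccard similarity: shared tags over all tags.
    similarity = len(a & b) / len(a | b)
    return similarity >= threshold
```

In practice such a check would more likely run server-side over learned video embeddings; the tag overlap here only illustrates the thresholded-similarity condition.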
In some embodiments, after the video note is generated, the terminal displays a video note viewing area for the video on the playing interface in response to a further trigger operation of the target object on the target control, and displays the video note in the video note viewing area. In other embodiments, the terminal displays, in the video note viewing area, an editing control for re-editing the video note, and in response to a trigger operation of the target object on the editing control, displays the video note editing area for the video so that the target object can re-edit the video note. Referring to fig. 7, fig. 7 is a schematic diagram of viewing a video note provided by an embodiment of the present disclosure. As shown in fig. 7, the terminal displays a video note viewing area 701 on the playing interface 400 in response to a trigger operation on the "note taking" target control 401 on the playing interface 400, and displays the video note editing area 502 in response to a trigger operation on an editing control 702 (e.g., "edit note") in the video note viewing area 701, so that the target object can edit the video note of the video again.
In other embodiments, the target object is also able to view the generated video notes via the object information interface, and this process is described below in terms of steps 304 and 305.
In step 304, the terminal displays a first control on the object information interface of the target object, where the first control is used to view the video note set of the target object.
In the embodiment of the present disclosure, the object information interface of the target object may also be understood as a personal homepage of the target object. The terminal displays an application interface of the application program, and responds to an object information interface viewing operation of a target object on the application interface, displays an object information interface of the target object, and displays a first control on the object information interface. Wherein the first control may take the form of a button, a floating window, or the like, to which the present disclosure is not limited.
In some embodiments, the terminal displays a third prompt message on the object information interface, where the third prompt message is used to indicate the function corresponding to the first control. It should be understood that the third prompt message works in the same way as the first prompt message, and is therefore not described in detail. Referring to fig. 8, fig. 8 is a schematic diagram of an object information interface provided by an embodiment of the present disclosure. As shown in fig. 8, a first control 801, namely "notes", is displayed on the object information interface 800 for viewing the video note set of the target object. In addition, a third prompt message 802, "View your inspiration notes here", is displayed on the object information interface 800 to prompt the target object, which improves man-machine interaction efficiency and user experience. In some embodiments, the terminal displays the third prompt message when a version update of the application program introduces the function corresponding to the target control, or each time the terminal generates a new video note, to which the present disclosure is not limited.
In step 305, the terminal responds to the triggering operation of the target object on the first control to display the video note set.
In the embodiment of the present disclosure, the terminal displays each video note in the video note set in the form of a list in response to a trigger operation of the target object on the first control. In some embodiments, based on the aforementioned step 303, the terminal provides the function of generating note tags for video notes; accordingly, in this step the video note set includes a plurality of sub video note sets with different note tags, and in response to the trigger operation of the target object on the first control, the terminal displays the plurality of sub video note sets according to their respective note tags. Displaying video notes classified by note tag in this way improves the efficiency with which the target object finds a video note, that is, improves man-machine interaction efficiency.
Referring to fig. 9, fig. 9 is a schematic diagram of displaying a video note set provided by an embodiment of the present disclosure. As shown in fig. 9, in response to a trigger operation of the target object on the first control 801 on the object information interface 800, the terminal displays a plurality of sub video note sets according to their respective note tags, such as the sub video note set corresponding to the note tag "default" and the sub video note set corresponding to the note tag "joke". In addition, the terminal displays a creation control for note tags, so that the target object can conveniently create a new note tag; when a video note is later generated for another video, the new note tag is displayed in the video note editing area, to which the present disclosure is not limited. Of course, if the video note set is empty, the terminal may display a fourth prompt message on the object information interface to indicate that the target object currently has no video notes. Referring to fig. 10, fig. 10 is a schematic diagram of another display of a video note set according to an embodiment of the present disclosure; as shown in fig. 10, if the video note set is empty, the terminal displays a fourth prompt message 1001, "No inspiration notes yet", on the object information interface 800.
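The tag-wise display above amounts to partitioning the flat note set into sub-sets keyed by note tag. A minimal sketch (the function name and the dictionary-per-note representation are assumptions for illustration):

```python
from collections import defaultdict

def group_notes_by_tag(notes):
    """Split a flat video note set into sub video note sets keyed by
    note tag, ready for tag-by-tag display on the object information
    interface; untagged notes fall under the 'default' tag."""
    sub_sets = defaultdict(list)
    for note in notes:
        sub_sets[note.get("tag", "default")].append(note)
    return dict(sub_sets)
```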
In some embodiments, after displaying the set of video notes, the terminal can process the video notes accordingly in response to various operations performed by the target object on any one of the video notes. Illustratively, taking the video notes generated by the steps 301 to 303 as an example, the terminal can also perform at least one of the following steps 306 to 308, and it should be noted that the execution sequence of the following steps 306 to 308 is not limited in the embodiments of the disclosure.
In step 306, the terminal displays the video note in response to a viewing operation of the video note by the target object.
In some embodiments, in response to the viewing operation, the terminal plays the video corresponding to the video note in a first area and displays the video note in a second area, where the first area and the second area do not overlap; for example, the first area and the second area are displayed one above the other on the object information interface, to which the present disclosure is not limited. In this way, the target object can watch the corresponding video while reading the video note, which improves man-machine interaction efficiency. Of course, the terminal may instead display the video note first, obtain the video corresponding to the video note in response to a video viewing operation on the video note, and then play the video for the target object to watch, which is not limited by the present disclosure.
Referring to fig. 11, fig. 11 is a schematic diagram of displaying a video note set provided by an embodiment of the present disclosure. As shown in fig. 11, taking the case where the terminal displays the video note set on the object information interface 800 by note tag, the terminal displays the video note list of a sub video note set (such as the sub video note set corresponding to the note tag "default") in response to a trigger operation of the target object on that sub video note set. Then, in response to a viewing operation (such as a click operation) on a video note 1101 in the list, the terminal displays the note content of the video note, that is, the viewing state of the video note, in which the terminal plays the video in the first area and displays the video note 1101 in the second area, the two areas not overlapping each other.
In step 307, the terminal displays the edited video note in response to the editing operation of the target object on the video note.
Referring to fig. 12, fig. 12 is a schematic diagram of editing a video note provided by an embodiment of the present disclosure. As shown in fig. 12, taking the case where the terminal displays the video note set on the object information interface 800 by note tag, the terminal displays the video note list of a sub video note set (such as the sub video note set corresponding to the note tag "default") in response to a trigger operation of the target object on that sub video note set. Then, in response to an editing operation on the video note 1101 in the list (such as clicking an "edit" button to enter the editing state), the terminal displays the edited video note, that is, the editing state of the video note. It should be understood that in the editing state of the video note, the terminal may likewise play the video in the first area and display the edited video note in the second area, which is not repeated here.
In step 308, the terminal deletes the video note from the set of video notes in response to the deletion operation of the target object on the video note.
Referring to fig. 13, fig. 13 is a schematic diagram of deleting a video note provided by an embodiment of the present disclosure. As shown in fig. 13, taking the video note 1101 shown in fig. 11 and fig. 12 as an example, the terminal displays a fifth prompt message 1301, "Delete the current note?", in response to a delete operation on the video note 1101 (e.g., clicking a "delete" button), and in response to a confirmation operation on the fifth prompt message 1301, displays a sixth prompt message 1302, "Note deleted", and deletes the video note from the video note set. Illustratively, in response to the confirmation operation on the fifth prompt message 1301, the terminal sends a deletion request for the video note to the server, so that the server deletes the video note from the video note set of the target object based on the deletion request.
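The confirm-then-delete flow described above can be sketched as follows. This is an illustrative sketch, not the disclosed implementation; the function name, the `"id"` field, and the boolean return convention are assumptions:

```python
def delete_note(note_set, note_id, confirmed):
    """Remove a note from the video note set only after the user has
    confirmed the deletion prompt; returns True when a note was deleted."""
    if not confirmed:
        # The user dismissed the fifth prompt message: leave the set intact.
        return False
    for i, note in enumerate(note_set):
        if note["id"] == note_id:
            del note_set[i]
            return True
    # The note was not found (e.g. already deleted on another device).
    return False
```

On the server side, the same operation would run against the account's stored note set when the deletion request arrives.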
In summary, in the method for generating video notes for a video provided by the embodiments of the present disclosure, a target control for generating video notes for the video is displayed on the playing interface of the video, and a video note editing area is displayed on the playing interface in response to a trigger operation on the target control. The target object can thus conveniently perform various editing operations in the video note editing area to generate a video note for the video, and the video note is stored in the video note set of the target object, so that the target object can later view the video notes in the set through the object information interface. In this way, video notes are generated while watching the video, which improves man-machine interaction efficiency and user experience.
Fig. 14 is a block diagram of an apparatus for generating video notes for video provided by an embodiment of the present disclosure. Referring to fig. 14, the apparatus includes a first display unit 1401, a second display unit 1402, a generation unit 1403, and a third display unit 1404.
A first display unit 1401 configured to perform displaying a target control on a playback interface of a video, the target control being used to generate a video note for the video;
a second display unit 1402 configured to perform a trigger operation of the target control in response to a target object, display a video note editing area for the video on the playback interface;
a generating unit 1403 configured to perform an editing operation performed in response to the target object in the video note editing area, generate a video note of the video, and store the video note into a video note set of the target object;
the third display unit 1404 is configured to perform displaying a first control on the object information interface of the target object, where the first control is used for viewing the video note set, and displaying the video note set in response to a triggering operation of the first control by the target object.
In some embodiments, the second display unit 1402 is configured to perform:
And responding to the triggering operation of the target object on the target control, playing the video in a video playing area of the playing interface, and displaying the video note editing area on the playing interface, wherein the video playing area and the video note editing area are not overlapped with each other.
In some embodiments, the generating unit 1403 is configured to perform:
and responding to the label editing operation of the target object in the video note editing area, and acquiring the note label of the video note.
In some embodiments, the generating unit 1403 is configured to perform:
displaying at least one first note label in the video note editing area, and responding to the triggering operation of the target object on the target note label in the at least one first note label, and taking the target note label as the note label of the video note;
and responding to the label input operation of the target object in the video note editing area, and acquiring the note label of the video note.
In some embodiments, the at least one first note tag includes at least one of:
default note labels;
a note tag of the historical video note generated by the object other than the target object for the video;
A note tag generated based on video content of the video;
issuing a note tag generated by an object of the video for the video;
existing note labels in the video note set.
In some embodiments, the apparatus further comprises at least one of:
a fourth display unit configured to perform a viewing operation of the video note in response to the target object, to display the video note;
a fifth display unit configured to perform an editing operation of the video note in response to the target object, and display the edited video note;
and a deleting unit configured to perform a deleting operation of the video note in response to the target object, the video note being deleted from the video note set.
In some embodiments, the fourth display unit is configured to perform a viewing operation of the video notes in response to the target object, play the video in a first area, and display the video notes in a second area, the first area and the second area not overlapping each other.
In some embodiments, the video note set includes a plurality of sub video note sets, each of the sub video note sets having a different note tag, the third display unit configured to perform:
And responding to the triggering operation of the target object on the first control, and displaying a plurality of sub-video note sets according to the note labels corresponding to the sub-video note sets.
In the apparatus for generating video notes for a video provided by the embodiments of the present disclosure, a target control for generating video notes for the video is displayed on the playing interface of the video, and a video note editing area is displayed on the playing interface in response to a trigger operation on the target control. The target object can thus conveniently perform various editing operations in the video note editing area to generate a video note for the video, and the video note is stored in the video note set of the target object, so that the target object can later view the video notes in the set through the object information interface. In this way, video notes are generated while watching the video, which improves man-machine interaction efficiency and user experience.
It should be noted that the apparatus for generating video notes for a video provided in the above embodiment is described only with the above division of functional modules as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for generating video notes for a video provided in the above embodiment belongs to the same concept as the method embodiment for generating video notes for a video; for its specific implementation process, refer to the method embodiment, which is not repeated here.
In an exemplary embodiment, an electronic device is also provided that includes a processor and a memory for storing at least one computer program that is loaded and executed by the processor to implement the method of generating video notes for video in embodiments of the present disclosure.
Taking the electronic device being a terminal as an example, fig. 15 is a structural block diagram of a terminal according to an embodiment of the present disclosure. The terminal 1500 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1500 may also be called a user device, a portable terminal, a laptop terminal, a desktop terminal, or the like.
In general, the terminal 1500 includes: a processor 1501 and a memory 1502.
The processor 1501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed by the display screen. In some embodiments, the processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1502 may include one or more computer-readable storage media, which may be non-transitory. Memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one program code for execution by processor 1501 to implement the method of generating video notes for video provided by the method embodiments in the present disclosure.
In some embodiments, the terminal 1500 may further optionally include: a peripheral interface 1503 and at least one peripheral device. The processor 1501, memory 1502 and peripheral interface 1503 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1503 via a bus, signal lines, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, a display screen 1505, a camera assembly 1506, audio circuitry 1507, a positioning assembly 1508, and a power supply 1509.
A peripheral interface 1503 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 1501 and the memory 1502. In some embodiments, processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in the present disclosure.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, it can also collect touch signals at or above its surface. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, disposed on the front panel of the terminal 1500; in other embodiments, there may be at least two display screens 1505, respectively disposed on different surfaces of the terminal 1500 or adopting a folded design; in still other embodiments, the display screen 1505 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1500. The display screen 1505 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display screen 1505 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background-blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, Virtual Reality (VR) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuitry 1507 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 1501 for processing, or to the radio frequency circuit 1504 for voice communication. For stereo acquisition or noise reduction, a plurality of microphones may be disposed at different portions of the terminal 1500. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal not only into sound waves audible to humans, but also into sound waves inaudible to humans for ranging and other purposes. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to determine the current geographic location of the terminal 1500 to enable navigation or LBS (Location Based Service).
The power supply 1509 is used to power the various components in the terminal 1500. The power supply 1509 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: an acceleration sensor 1511, a gyro sensor 1512, a pressure sensor 1513, an optical sensor 1514, and a proximity sensor 1515.
The acceleration sensor 1511 may detect the magnitudes of acceleration on the three coordinate axes of a coordinate system established with the terminal 1500, for example, the components of gravitational acceleration along the three coordinate axes. The processor 1501 may control the display screen 1505 to display the user interface in a landscape view or a portrait view based on the gravitational-acceleration signal acquired by the acceleration sensor 1511. The acceleration sensor 1511 may also be used to acquire motion data for games or for the user.
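The landscape/portrait decision described above reduces to comparing the gravity components along the device's axes. The following is a minimal sketch of that decision logic only; the function name, axis convention, and tie-breaking rule are illustrative assumptions, not part of the disclosed apparatus:

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Choose a UI orientation from the gravitational-acceleration
    components (in m/s^2) along the device's x (short) and y (long) axes.

    Gravity acting mostly along the long axis means the device is held
    upright, so a portrait view is chosen; otherwise landscape.
    """
    return "portrait" if abs(gy) >= abs(gx) else "landscape"

# Held upright: gravity lies mostly along the y axis.
print(choose_orientation(0.3, 9.7))  # prints "portrait"
# Held sideways: gravity lies mostly along the x axis.
print(choose_orientation(9.7, 0.3))  # prints "landscape"
```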
The gyro sensor 1512 may detect the body direction and rotation angle of the terminal 1500, and may cooperate with the acceleration sensor 1511 to collect the user's 3D motion of the terminal 1500. Based on the data collected by the gyro sensor 1512, the processor 1501 may implement the following functions: motion sensing (e.g., changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1513 may be disposed on a side frame of the terminal 1500 and/or beneath the display screen 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, it may detect the user's grip signal on the terminal 1500, and the processor 1501 performs left-right hand recognition or a quick operation according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed beneath the display screen 1505, the processor 1501 controls an operability control on the UI according to the user's pressure operation on the display screen 1505. The operability control includes at least one of a button control, a scroll-bar control, an icon control, and a menu control.
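The left-right hand recognition mentioned above can be illustrated as a simple comparison of the pressure readings on the two side frames. This is only one possible heuristic under assumed sensor units; the function name, threshold, and which-side-presses-harder assumption are all illustrative, not taken from the disclosure:

```python
def detect_holding_hand(left_pressure: float, right_pressure: float,
                        grip_threshold: float = 0.5) -> str:
    """One possible heuristic: assume the gripping palm presses its
    own side of the frame harder, so a right-hand grip shows higher
    pressure on the right frame, and vice versa. Readings below the
    threshold on both sides are treated as no firm grip.
    """
    if left_pressure < grip_threshold and right_pressure < grip_threshold:
        return "none"
    return "right" if right_pressure >= left_pressure else "left"

print(detect_holding_hand(0.2, 0.9))  # prints "right"
```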
The optical sensor 1514 is used to collect the ambient light intensity. In one embodiment, the processor 1501 may control the display brightness of the display screen 1505 based on the ambient light intensity collected by the optical sensor 1514: when the ambient light intensity is high, the display brightness of the display screen 1505 is turned up; when the ambient light intensity is low, it is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1514.
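The brightness adjustment described above can be sketched as a clamped linear mapping from ambient light intensity to a normalized display brightness. The function name and the lux range below are illustrative assumptions, not values from the disclosure:

```python
def display_brightness(lux: float, min_lux: float = 0.0,
                       max_lux: float = 1000.0) -> float:
    """Map ambient light intensity (lux) to a display brightness in
    [0.0, 1.0]: brighter surroundings give a brighter screen, clamped
    at both ends of the assumed lux range.
    """
    if lux <= min_lux:
        return 0.0
    if lux >= max_lux:
        return 1.0
    return (lux - min_lux) / (max_lux - min_lux)

print(display_brightness(500.0))  # prints 0.5
```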
The proximity sensor 1515, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1500 and is used to collect the distance between the user and the front of the terminal 1500. In one embodiment, when the proximity sensor 1515 detects that the distance between the user and the front of the terminal 1500 gradually decreases, the processor 1501 controls the display screen 1505 to switch from the screen-on state to the screen-off state; when the proximity sensor 1515 detects that the distance gradually increases, the processor 1501 controls the display screen 1505 to switch from the screen-off state to the screen-on state.
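The screen-on/screen-off switching described above can be sketched as a small state machine. Using two distinct thresholds adds hysteresis so the screen does not flicker when the distance hovers near a single cutoff; the function name and threshold values are illustrative assumptions:

```python
def next_screen_state(current: str, distance_cm: float,
                      near_cm: float = 3.0, far_cm: float = 6.0) -> str:
    """Update the screen state ("on"/"off") from a proximity-sensor
    distance reading in centimeters. The screen turns off only once
    the user is close (<= near_cm) and back on only once the user has
    moved clearly away (>= far_cm).
    """
    if current == "on" and distance_cm <= near_cm:
        return "off"  # user approaching the front of the device
    if current == "off" and distance_cm >= far_cm:
        return "on"   # user moving away again
    return current    # in the hysteresis band: keep the current state

print(next_screen_state("on", 2.0))  # prints "off"
```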
Those skilled in the art will appreciate that the structure shown in fig. 15 is not limiting, and that more or fewer components than shown may be included, certain components may be combined, or a different arrangement of components may be employed.
In an exemplary embodiment, a computer-readable storage medium is also provided, e.g., a memory comprising program code executable by a processor of a terminal to perform the above method of generating video notes for a video. Alternatively, the computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by a processor, implements the above-described method of generating video notes for video.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of generating video notes for a video, the method comprising:
displaying a target control on a playing interface of a video, wherein the target control is used for generating video notes for the video;
in response to a triggering operation of a target object on the target control, displaying a video note editing area for the video on the playing interface;
in response to an editing operation performed by the target object in the video note editing area, generating a video note of the video, and storing the video note into a video note set of the target object;
and displaying a first control on an object information interface of the target object, wherein the first control is used for viewing the video note set, and displaying the video note set in response to a triggering operation of the target object on the first control.
2. The method of generating video notes for a video of claim 1, wherein the displaying a video note editing area for the video on the playing interface in response to a triggering operation of a target object on the target control comprises:
and responding to the triggering operation of the target object on the target control, playing the video in a video playing area of the playing interface, and displaying the video note editing area on the playing interface, wherein the video playing area and the video note editing area are not overlapped with each other.
3. The method of generating video notes for a video of claim 1, wherein said generating video notes for said video in response to editing operations performed by said target object in said video notes editing area comprises:
and responding to the label editing operation of the target object in the video note editing area, and acquiring the note label of the video note.
4. The method of generating video notes for video according to claim 3, wherein said obtaining a note tag for said video notes in response to a tag editing operation performed by said target object in said video note editing area comprises at least one of:
displaying at least one first note label in the video note editing area, and in response to a triggering operation of the target object on a target note label among the at least one first note label, using the target note label as the note label of the video note;
and responding to the label input operation of the target object in the video note editing area, and acquiring the note label of the video note.
5. The method of generating video notes for video according to claim 4, wherein said at least one first note tag comprises at least one of:
Default note labels;
a note tag of a historical video note generated by an object other than the target object for the video;
a note tag generated based on video content of the video;
issuing a note tag generated by an object of the video for the video;
and the existing note labels in the video note set.
6. The method of generating video notes for video according to claim 1, further comprising at least one of:
responding to a viewing operation of the target object on the video note, and displaying the video note;
responding to the editing operation of the target object on the video note, and displaying the edited video note;
and deleting the video notes from the video note set in response to the deleting operation of the target object on the video notes.
7. The method of generating video notes for video according to claim 6, wherein said displaying said video notes in response to a viewing operation of said video notes by said target object comprises:
and responding to the viewing operation of the target object on the video note, playing the video in a first area, and displaying the video note in a second area, wherein the first area and the second area are not overlapped.
8. The method of generating video notes for a video of claim 1, wherein the video note set includes a plurality of sub video note sets, each of the sub video note sets having a different note label, the displaying the video note set in response to a triggering operation of the first control by the target object, comprising:
and responding to the triggering operation of the target object on the first control, and displaying a plurality of sub-video note sets according to note labels corresponding to the sub-video note sets.
9. An apparatus for generating video notes for video, the apparatus comprising:
a first display unit configured to perform displaying a target control on a play interface of a video, the target control being used to generate a video note for the video;
a second display unit configured to display, in response to a triggering operation of a target object on the target control, a video note editing area for the video on the playing interface;
a generation unit configured to generate a video note of the video in response to an editing operation performed by the target object in the video note editing area, and to store the video note into a video note set of the target object;
and a third display unit configured to display a first control on an object information interface of the target object, wherein the first control is used for viewing the video note set, and to display the video note set in response to a triggering operation of the target object on the first control.
10. The apparatus for generating video notes for video according to claim 9, wherein the second display unit is configured to perform:
and responding to the triggering operation of the target object on the target control, playing the video in a video playing area of the playing interface, and displaying the video note editing area on the playing interface, wherein the video playing area and the video note editing area are not overlapped with each other.
11. An electronic device, the electronic device comprising:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the method of generating video notes for video as claimed in any one of claims 1 to 8.
12. A computer readable storage medium, characterized in that program code in the computer readable storage medium, when executed by a processor of an electronic device, enables the electronic device to perform the method of generating video notes for video according to any of the claims 1 to 8.
CN202310403297.5A 2023-04-14 2023-04-14 Method and device for generating video notes for video and electronic equipment Pending CN116489465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310403297.5A CN116489465A (en) 2023-04-14 2023-04-14 Method and device for generating video notes for video and electronic equipment

Publications (1)

Publication Number Publication Date
CN116489465A true CN116489465A (en) 2023-07-25

Family

ID=87217045



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination