CN110381382B - Video note generation method and device, storage medium and computer equipment - Google Patents
- Publication number
- CN110381382B (application CN201910666679.0A)
- Authority
- CN
- China
- Prior art keywords
- video
- target
- note
- playing
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N 21/4334 — Recording operations (under H04N 21/433, Content storage operation, e.g. storage operation in response to a pause request, caching operations)
- H04N 21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
- H04N 21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N 21/47205 — End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally (under H04N 21/472, End-user interface for requesting content, additional data or services, or for interacting with content)

All of the above fall under H—Electricity; H04—Electric communication technique; H04N—Pictorial communication, e.g. television; H04N 21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N 21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB].
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Television Signal Processing For Recording (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application relates to a video note generation method and device, a storage medium, and computer equipment. The method comprises: when a target video editing operation occurs, determining the start-stop time of the target video within an original video; acquiring note information added for the target video on the playing page of the original video; generating an index array corresponding to the target video based on the start-stop time and the note information; and storing the index array in correspondence with the video identifier of the original video to obtain a video note for the target video. The scheme provided by the application improves the efficiency of making video notes and facilitates their storage and management.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video note generation method and apparatus, a storage medium, and a computer device.
Background
Compared with text and pictures, video carries richer and more expressive information, so it is increasingly popular with users. A user can watch videos anytime and anywhere through a video client or a content browsing client running on a terminal. For some videos, a user often wishes to take a viewing note on a segment of the original video that shows a highlight.
However, with most current video clients or content browsing clients, users can only passively watch the original video published by a video resource publisher; making video notes is not supported. In traditional approaches to taking viewing notes, the user leaves the video client or content browsing client while watching and records the main content of the video in a notebook or in office software, or records content by saving screenshots of the video. These approaches are not only cumbersome to operate but also leave note content scattered, which is unfavorable for storage and management.
Disclosure of Invention
In view of the above, it is necessary to provide a video note generation method, apparatus, storage medium, and computer device that solve the technical problems of complicated video note production operations and scattered note contents.
A video note generation method, comprising:
when a target video editing operation occurs, determining the starting and ending time of the target video in an original video;
acquiring note information added to the target video on the playing page of the original video;
generating an index array corresponding to the target video based on the start-stop time and the note information;
and correspondingly storing the index array and the video identifier of the original video to obtain a video note corresponding to the target video.
A video note generation apparatus comprising:
the target video cutting module is used for determining the starting and stopping time of the target video in the original video when the target video editing operation occurs;
a note information adding module, configured to obtain note information added to the target video on the playing page of the original video;
the video note storage module is used for generating an index array corresponding to the target video based on the start-stop time and the note information; and correspondingly storing the index array and the video identifier of the original video to obtain a video note corresponding to the target video.
A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above-described video note generation method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video note generation method described above.
According to the video note generation method, device, computer-readable storage medium, and computer equipment, making a video note for a target video can be triggered while the original video is playing. From the triggered target video editing operation, the start-stop time of the target video within the original video and the note information added for the target video can be determined. A video note can then be generated from the start-stop time, the note information, and the video identifier of the original video, so a user can make video notes without leaving the client, which improves note-making efficiency. Because video notes made directly in the client are all associated with the original video, they are convenient to store and manage. In addition, only the index array generated from the start-stop time and the note information is stored in correspondence with the video identifier of the original video; the video resource itself is not stored again. The target video can be played directly from the video note, and the original video and the target video share the same image frames.
Drawings
FIG. 1 is a diagram of an application environment for a video note generation method in one embodiment;
FIG. 2 is a flow diagram of a video note generation method in one embodiment;
FIG. 3 is a diagram illustrating the determination of the start-stop time of a target video in an original video according to an embodiment;
FIG. 4 is a schematic diagram of a page of a playing page of an original video during making a video note in one embodiment;
FIG. 5 is a schematic diagram of a page showing target video playback based on video notes in one embodiment;
FIG. 6 is a schematic diagram of video note generation and playing of a target video based on a video note in one embodiment;
FIG. 7 is a diagram illustrating a playing period corresponding to a video slice of an original video in an embodiment;
FIG. 8 is a diagram illustrating video slices of a target video generated from an original video, in one embodiment;
FIG. 9 is a flow diagram of a video note generation method in one particular embodiment;
FIG. 10 is a block diagram of the structure of a video note generation apparatus in one embodiment;
FIG. 11 is a block diagram of the structure of a video note taking apparatus in another embodiment;
FIG. 12 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
FIG. 1 is a diagram of an application environment for a video note generation method in one embodiment. The video note generation method is applied to a video note generation system, which includes a terminal 110 and a server 120 connected through a network. A video client or a content browsing client (hereinafter collectively referred to as the client) may run on the terminal 110. A user watches an original video through the client running on the terminal 110 and, during viewing, can make a video note using the video note generation method; the video note is stored in the server 120. Later, the user can pull the corresponding video resource from the server 120 at the terminal 110 according to the video note, and play back the video clip together with its note information. The terminal 110 may specifically be a desktop terminal or a mobile terminal; the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
As shown in FIG. 2, in one embodiment, a video note generation method is provided. The embodiment is mainly illustrated by applying the method to the terminal 110 in fig. 1. Referring to fig. 2, the video note generating method specifically includes the following steps:
s202, when the target video editing operation occurs, determining the starting and ending time of the target video in the original video.
The original video is the original video data published by a video resource publisher and can be played in a client running on a terminal; it may be, for example, a movie, a TV series, or a recorded video. Its format may be any format the client supports, such as MPEG, AVI, MOV, ASF, or WMV. The target video is a video segment of the original video that interests the user; one or more target videos can be cut from a single original video. The start-stop time of the target video in the original video comprises a start time and an end time: the start time is the time at which the target video begins playing within the original video, and the end time is the time at which it finishes playing. The start-stop time thus determines which segment of the original video the target video is.
For example, referring to fig. 3, fig. 3 is a schematic diagram illustrating how the start-stop time of a target video in an original video is determined in one embodiment. As shown in fig. 3, the total playing duration of the original video is 02:05:20. To generate a target video from the video data in the interval from 01:01:25 to 01:01:35, the user specifies that the start time Sx of the target video within the original video is 01:01:25 and the end time Ex is 01:01:35. The playing periods of different target videos may intersect, i.e., the start or end time of one target video may fall between the start and end times of another.
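The timestamps in this example can be handled with a small helper; a minimal sketch (the function names are ours, not from the patent):

```python
def to_seconds(ts: str) -> int:
    """Convert an 'hh:mm:ss' (or 'mm:ss') timestamp to seconds."""
    seconds = 0
    for part in ts.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

# The segment from the example: 01:01:25 to 01:01:35 is 10 seconds long.
sx, ex = to_seconds("01:01:25"), to_seconds("01:01:35")

def overlaps(a: tuple, b: tuple) -> bool:
    """True when the play periods of two target videos intersect,
    as the text allows (one segment may start inside another)."""
    return a[0] < b[1] and b[0] < a[1]
```

A client would use such a check, for example, to decide whether two video notes refer to overlapping segments of the same original video.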
The target video editing operation is an operation which acts on a playing page of the original video and can trigger the video note making of the target video. The trigger operation may specifically be a touch operation, a cursor operation, a key operation, a shake operation, a voice operation, or the like. The touch operation may be a touch click operation, a touch press operation, or a touch slide operation, for example, a double-click operation, a long-press operation, or a slide operation of a preset track. The touch operation may be a single touch operation or a multi-touch operation. The cursor operation may be an operation of controlling a cursor to click or an operation of controlling a cursor to press. The key operation may be a virtual key operation or a physical key operation, etc.
In one embodiment, when the original video is played, the client displays a note making control on a playing page of the original video. The target video editing operation may be a trigger operation that acts on the note-making control. Or when the operation of stopping playing the original video is detected, inquiring information whether a video note needs to be made is displayed on the playing page, and the target video editing operation may be a trigger operation for confirming the inquiring information. There are many ways of target video editing operation, and the details are not repeated herein.
Specifically, the user may search for the video name of the original video on the home page of the client or trigger the video cover of the original video to enter the playing page of the original video. Referring to fig. 4, fig. 4 is a schematic page diagram illustrating a playing page of an original video when a video note is made in one embodiment. The playing page 402 shows a progress bar 4042 for indicating the current playing time progress of the original video in real time and a progress indication point 4044. The playing progress of the original video can be accelerated or slowed down by adjusting the position of the progress indication point on the progress bar.
Further, when a target video editing operation occurs, the user can specify the start-stop time of the target video to be generated in the original video based on the progress bar on the playing page. For example, after the user triggers video note making, the client displays two progress indicator marks on the progress bar. A progress indicator mark may be a progress indication point, or another mark positioned on the progress bar, such as a line perpendicular to the progress bar; this is not limited here. One mark, recorded as the preamble indicator mark, sits at the time position in the progress bar where the desired segment of the original video starts playing; the other, recorded as the subsequent indicator mark, sits at the time position where it finishes playing. The user determines the start time of the target video to be generated by adjusting the position of the preamble indicator mark on the progress bar, and the end time by adjusting the position of the subsequent indicator mark.
In one embodiment, when a target video editing operation occurs, the client presents a remark panel 404 on the playing page 402 of the original video. The remark panel may be a page area temporarily added to the playing page for entering note information, or a separate page distinct from the playing page, such as a pop-up window floating in front of it. The user may also specify the start-stop time by entering the start time and end time of the target video in the original video directly in the remark panel. If the user has already specified the start-stop time via the progress bar, the terminal automatically fills it into the remark panel, and the user only needs to confirm or modify it.
It should be noted that during video note making the client may pause the original video and resume playing once the note is complete, or it may continue playing the original video according to the playing progress the user adjusts while making the note; this is not limited here.
And S204, acquiring note information added to the target video on the playing page of the original video.
Note information is information the user configures for a target video to help understand the video content more quickly later. It may be in text, voice, image, or another format. In this embodiment, the note information includes a video title and video remarks. The video title is the name the user configures for the target video. Video remarks are summaries, reflections, or other content related to the video that the user records after watching the target video; they may also be associated files pulled from local storage or other computer equipment, such as text files, audio files, or video files.
In one embodiment, the client assigns a default video title to the target video according to a preset rule and displays it on the remark panel. The preset rule may be to splice the start-stop time of the target video with the video name of the original video. For example, the default video title for the target video <3:00, 5:42> in the original video "code entry course" may be "code entry course 3:00" or "code entry course (3:00,5:42)", and so on. Of course, the default video title may be assigned in other ways; this is not limited here.
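The splicing rule in this example can be written as a one-line helper; a sketch assuming the "(start,end)" form from the example (the function name is illustrative):

```python
def default_title(video_name: str, start: str, end: str) -> str:
    """Splice the original video's name with the target segment's
    start-stop time, following the '(start,end)' example above."""
    return f"{video_name} ({start},{end})"

title = default_title("code entry course", "3:00", "5:42")
```

The user then either confirms this default or edits it in the remark panel.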
Specifically, when a confirmation operation on the default video title occurs on the remark panel, the client stores the default video title as the final video title of the target video. When a modification operation on the default video title occurs, the client stores the modified title. Since the user only needs to confirm or modify the default title to configure the final video title of the target video, video note generation efficiency is improved. The terminal obtains the video remarks the user enters on the remark panel and stores them in association with the start-stop time of the target video, either at a preset time interval or when the user closes the remark panel or the playing page.
In one embodiment, the obtaining of the note information added to the target video on the playing page of the original video comprises: acquiring a video remark recorded on the basis of a remark panel in a playing page; when image frame association operation occurs, correspondingly storing the video remarks and the playing time of the currently displayed image frame of the playing page to obtain note information; the note information includes the video notes and the play time of the associated image frames.
The original video comprises a plurality of image frames arranged by play time. The image frame association operation is an operation, triggered from the remark panel, that associates the currently entered video remark with a specific image frame. When a store operation for a video remark occurs, a query prompt for the associated image frame is presented. This prompt is information, displayed when the remark panel is closed, asking whether the currently entered video remark should be associated with a specific image frame. The query prompt may be a control (recorded as the association control) presented on the remark panel that triggers associating the remark entered in the current remark panel with the image frame currently shown on the playing page; in that case, the image frame association operation may be a trigger operation on the association control. The query prompt may also be a page distinct from the remark panel containing a confirm option and a cancel option, such as a pop-up window shown in front of the remark panel; in that case, the image frame association operation may be a selection of the confirm option.
In one embodiment, when the query prompt is an association control, the association control is presented in the remark panel by default, i.e., as soon as the remark panel is generated, without waiting for the user to trigger the storage operation.
Specifically, besides adding video remarks to the target video as a whole, the user may add video remarks to a specific image frame in the target video. Before triggering the image frame association operation, the user can switch the image frame displayed on the playing page to the frame to which a video remark should be added (recorded as the target image frame), for example by adjusting the position of the progress indication point on the progress bar. When the image frame association operation occurs, the client determines the play time of the currently displayed target image frame and stores the video remark in correspondence with that play time.
Through this embodiment, video remarks can be added both to the target video as a whole and to specific image frames within it, improving the personalization of video notes.
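The per-frame association described above amounts to keying each remark by the play time of the frame shown when the association operation fires; a minimal sketch (all names are hypothetical, not from the patent):

```python
# play time of the associated image frame -> list of video remarks
remarks = {}

def associate_remark(current_play_time: str, text: str) -> None:
    """On an image frame association operation, store the remark
    against the play time of the currently displayed target frame."""
    remarks.setdefault(current_play_time, []).append(text)

# A remark for the whole target video could use a reserved key instead
# of a play time; that choice is ours, not specified by the text.
associate_remark("01:01:30", "key step shown in this frame")
```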
In one embodiment, the obtaining of the note information added to the target video on the playing page of the original video comprises: displaying image frames of an original video based on a playing page; intercepting the image frame containing the video mark according to the video mark operation triggered in the image frame to obtain note information; the note information includes the captured image frames and the corresponding playing time.
A video mark is mark information added to an image frame in the target video. It may specifically be a position guide mark, which prompts a viewing user to the position, within the corresponding image frame, of video content that deserves particular attention. A position guide mark may be a mark of any shape, size, or color, such as a dot, a line, or a rectangular frame. A video mark may also be graffiti information, or standard editing information entered through an editing area provided by the playing page, such as text, voice, or an additional video.
A video marking operation is an operation, acting on an image frame, that triggers adding a video mark to that frame. The trigger operation may specifically be a touch operation, a cursor operation, a key operation, a shake operation, a voice operation, or the like. In one embodiment, when a target video editing operation occurs, the client displays an image marking control on the playing page of the original video; the video marking operation may then be a trigger operation acting on that control.
Specifically, an image frame in the target video may have both a video remark and a video mark added to it. When a video marking operation occurs, the client takes the image frame currently displayed on the playing page as the target image frame, acquires the data of the video mark added to that frame together with the mark's display position within the frame, and stores the play time of the target image frame, the video mark, and its display position information in correspondence to obtain a video note.
In the above embodiment, besides configuring the relevant video remarks for the target video through the remark panel, the user can add a video mark to a certain image frame of the target video, thereby further improving the personalization of the video note.
In one embodiment, intercepting an image frame containing a video mark according to a video marking operation triggered in the image frame comprises: when the video marking operation occurs, invoking a voice mark control in the image frame; recording voice mark data through the voice mark control; displaying, in the voice mark control, a text mark obtained by recognizing the voice mark data; and, when a confirmation operation on the text mark is detected, displaying the text mark in place of the voice mark control and intercepting the image frame containing the text mark.
The voice mark control is a control for triggering voice mark, and can be displayed in a text area in a virtual panel form. Specifically, when the terminal detects a videomark operation, the voice mark control can be invoked.
Specifically, when detecting a recording start operation acting on the voice mark control, the terminal calls a local voice acquisition device, enters a voice recording state after the voice acquisition device prepares for recording, starts recording voice data, and ends recording until a recording end condition is met to obtain voice mark data. The recording end condition may be that a preset time length is reached from the start of recording, that a silence state is detected to reach the preset time length, or that a recording end operation is detected. The terminal can acquire a text mark obtained by performing voice recognition on the voice mark data and display the acquired text mark in the voice mark control. Wherein speech recognition is a process of recognizing text from linguistic data.
Further, when the confirmation operation of the text mark acted on the voice mark control is detected, the terminal replaces the voice mark control to display a mark box containing the text mark. And the terminal intercepts the image of the image frame containing the mark to obtain a screenshot corresponding to the image frame. In one embodiment, the terminal empties the text mark displayed in the voice mark control when detecting the cancel operation of the text mark.
In the above embodiment, the user records voice mark data through the voice mark control, and a text mark obtained by recognizing the voice mark data is displayed for the user to confirm. When marking the target video with text, the user therefore needs no manual editing, or only a simple modification of the text mark obtained by speech recognition, which can greatly improve video note generation efficiency.
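The three recording-end conditions named above can be checked with a single predicate; the 60-second and 2-second thresholds below are illustrative assumptions, not values from the text:

```python
def recording_should_end(elapsed: float, silence: float,
                         stop_pressed: bool = False,
                         max_len: float = 60.0,
                         max_silence: float = 2.0) -> bool:
    """End recording on an explicit end operation, after a preset
    total recording length, or after a preset duration of detected
    silence (thresholds here are placeholder values)."""
    return stop_pressed or elapsed >= max_len or silence >= max_silence
```

A recording loop would poll this predicate on each audio chunk and hand the captured voice mark data to speech recognition once it returns true.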
And S206, generating an index array corresponding to the target video based on the start-stop time and the note information.
The index array is an array that associates the start-stop time of the target video with its note information. It comprises several array elements. For example, the index array of target video A1 in original video A may be [S1, E1, T1, C1[C, Ct11, Ct14], P1[Pt12, Pt14]], where the array element S1 is the start time of the target video; E1 is the end time; T1 is the video title; C1[C, Ct11, Ct14] holds the video remarks, with C a remark added for the whole target video, Ct11 a remark added to the image frame at play time t11, and Ct14 a remark added to the image frame at play time t14; and P1[Pt12, Pt14] holds the video marks, with Pt12 a mark added to the image frame at play time t12 and Pt14 a mark added to the image frame at play time t14.
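The index array of target video A1 can be written out as a nested structure; a sketch in Python, where the concrete times, titles, and mark contents are placeholders of our own:

```python
index_array = [
    "01:01:25",                       # S1: start time of the target video
    "01:01:35",                       # E1: end time
    "example title",                  # T1: video title
    # C1: video remarks -- one for the whole segment, plus per-frame
    # remarks keyed by the play time of the associated image frame
    {"whole": "summary of the segment",
     "t11": "remark on the frame at t11",
     "t14": "remark on the frame at t14"},
    # P1: video marks, keyed by the play time of the marked frame;
    # each mark stores its data and display position within the frame
    {"t12": {"data": "rectangular frame", "position": (40, 80)},
     "t14": {"data": "graffiti", "position": (10, 20)}},
]
```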
In one embodiment, when the video remark comprises an associated file, the terminal stores the associated file in a note library on the server and generates the index array of the target video based on the storage address of the associated file returned by the server. In other words, only the storage address of the associated file needs to be recorded in the index array. Similarly, when the note information contains a screenshot of an image frame bearing a video mark, the terminal stores the screenshot in the note library of the server, and the index array of the target video is generated based on the storage address of the screenshot returned by the server.
Specifically, when a target video storage operation occurs, the terminal organizes and stores the start-stop time of the acquired target video and the note information including a video title, a video remark or a video note and the like as an index array according to preset recording rules and recording sequences of different array elements. The target video storage operation refers to an operation that can trigger storage of note information of the target video, such as an operation of closing a playing page or an operation of exiting a client, which is performed on a playing page of the original video.
And S208, correspondingly storing the index array and the video identifier of the original video to obtain a video note corresponding to the target video.
The video identifier is information capable of uniquely identifying a piece of video, such as copyright information, and is denoted as Video_id. The video note is a basis for uniquely determining a section of target video; it records the video identifier of the original video to which the target video belongs and the index array corresponding to the target video. For example, assuming that the video identifier of the original video A is Video_A, the video note of the target video A1 can be written as Note_A1 = Video_A + [S1, E1, T1, C1[C, Ct11, Ct14], P1[Pt12, Pt14]].
Specifically, when a target video storage operation occurs, the terminal associates the generated index array of the target video with the video identifier of the original video to obtain the video note of the target video. The terminal caches the video note locally and sends it to the server for storage when the user is online in the client. To reduce terminal resource occupation, the locally stored video note can be deleted after the video note has been stored on the server.
In one embodiment, the storing the index array in correspondence with the video identifier of the original video to obtain the video note corresponding to the target video includes: splicing the index arrays of the target videos to obtain a comprehensive index array; and correspondingly storing the comprehensive index array, the video identification of the original video and the user identification recorded by the client for playing the original video to obtain a comprehensive video note corresponding to the original video.
The user identifier refers to the identity information of the user logged in to the client playing the original video, and is denoted as User_id. The user identifier may specifically be a user name, a login account, or the like recorded in the client.
The user can take notes on a plurality of video clips in a section of original video, and then video notes of a plurality of target videos are obtained. For this reason, the terminal may record the video note of each target video in the above manner, or may record the index array information of all target videos in one integrated video note.
Specifically, the terminal splices the index arrays of the target videos in playing-time order to obtain a comprehensive index array. When the start-stop times of different target videos overlap, the terminal orders the index arrays by start time; when the start times are the same, the terminal orders them by end time.
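The ordering rule above amounts to a lexicographic sort on (start time, end time). A minimal Python illustration, where each index array is abbreviated to an assumed (start, end, title) tuple:

```python
# Three hypothetical index arrays, deliberately out of order and overlapping
arrays = [(30, 60, "A3"), (10, 50, "A1"), (10, 40, "A2")]

# Splice: order by start time, breaking ties by end time
spliced = sorted(arrays, key=lambda a: (a[0], a[1]))
```

Because Python's `sorted` compares tuples element by element, the single key `(start, end)` encodes both rules of the embodiment at once.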
It can be understood that the combination of the video identifier of the original video and a playing time tij can uniquely identify one image frame in the original video. In other words, the combination of the video identifier of the original video and the playing time of an image frame can serve as the image identifier of that image frame. Similarly, the combination of the video identifier of the original video and the start-stop time &lt;Si, Ei&gt; of a target video can serve as the video identifier of that target video, so that the index arrays of a plurality of target videos can be spliced out of order.
Further, the terminal obtains the user identifier of the user logged in to the client playing the original video, and associates the comprehensive index array with the video identifier of the original video and the user identifier to obtain the comprehensive video note corresponding to the original video. For example, in the above example, if the user X extracts the target video A1 and the target video A2 from the original video A, the corresponding comprehensive video note may be User_X + Video_A + [[S1, E1, T1, C1[C, Ct11, Ct14], P1[Pt12, Pt14]], [S2, E2, T2, C2[C, Ct23, Ct24], P2[Pt22, Pt28]]].
In this embodiment, one comprehensive video note records the index array information of all target videos, which on one hand reduces the amount of stored data for the video identifier of the original video and the user identifier, saving storage resources; on the other hand, one original video corresponds to only one comprehensive video note, which makes video notes convenient to store and manage.
According to the video note generation method, the making of a video note for a target video can be triggered during original video playback; according to the triggered target video editing operation, the start-stop time of the target video in the original video and the note information added for the target video can be determined; a video note can be generated based on the start-stop time, the note information and the video identifier of the original video, so that a user can make video notes without leaving the client, improving note-taking efficiency; and video notes made directly in the client are all associated with the original video, making them convenient to store and manage. In addition, only the index array generated from the start-stop time and the note information is stored in correspondence with the video identifier of the original video; no video resource is stored repeatedly, the target video can be played directly based on the video note, and the original video and the target video multiplex a plurality of image frames.
In one embodiment, the video note generating method further includes: determining a sequence label and video entry information corresponding to each target video based on the note information; establishing a link between the video entry information and the video note of the corresponding target video; and arranging and displaying the plurality of video entry information according to the sequence tags.
The sequence tag refers to information that can be used as a basis for ordering the target video, such as a target video generation time, a video number in a video title, and the like. The video number may be chapter directory information allocated by the user for the target video according to a topic or the like to which the target video relates. The video entry information refers to information that can trigger entry into a playing page of the target video, such as a triggerable video title, a video cover page, and the like. The video cover may be a screenshot of an image frame in the original video corresponding to the start time of the target video.
Specifically, the terminal extracts the sequence tag of the target video and the video entry information from the note information, and establishes a link between the video entry information and the video note of the corresponding target video, so that the corresponding video note can be acquired when the video entry information is triggered.
Further, in order to let users know the state of their video notes in time, after the video notes are generated, the terminal immediately displays the video entry information of the plurality of video notes according to the sequence tags. Referring to fig. 5, fig. 5 is a schematic diagram of a page for performing target video playing based on video notes in an embodiment. In the video playback portal page 500, a plurality of video portal information 502 is displayed, each piece of video portal information being linked to a corresponding target video. When a user triggers the video entry information corresponding to a certain target video, that target video can be played. Subsequently, when the original video is queried again after exiting the original video playing page, the server can return, according to the video query request sent by the terminal, the video entry information corresponding to the original video and the video entry information corresponding to the target videos related to the original video. Optionally, information such as the start-stop times of the target videos can also be returned, so that the terminal can display video list information and the user can clearly see all target videos generated from the current original video. Also shown in the video playback entry page 500 are a video remark 504 and a video mark 506 added to the target video.
In one embodiment, if the original video belongs to the album series video, the terminal further stores the video identifier of the original video corresponding to the album identifier of the album to which the original video belongs. The terminal can realize multi-level directory display. For example, the terminal uses an album title of an album to which the original video belongs as a first hierarchical directory, uses a video title of the original video as a second hierarchical directory, uses a video title of the target video as a third hierarchical directory, and displays the video titles of the same hierarchical level in an arrangement manner according to the sequence tags.
In the embodiment, the video entry information corresponding to the video notes of the target video is sequentially displayed according to the sequence defined by the user in the note information, so that efficient storage and management of a large number of target videos are facilitated.
In one embodiment, the video note generating method further includes: when a triggering operation on video entry information occurs, acquiring the video note of the target video linked to the triggered video entry information, the video note comprising a video identifier and a corresponding index array; according to the start-stop time recorded in the index array, pulling, from the plurality of image frames corresponding to the video identifier, the image frames whose playing times fall at or between the start and end times, and requesting the note information corresponding to the pulled image frames; and playing the target video based on the image frames, and displaying the note information when playback reaches the corresponding image frame.
The server stores a large number of media resources, such as copyrighted video resources, including the video resource of the original video. The terminal can query the server for an original video and then pull the video resource of the corresponding target video from the queried original video according to the video note. Referring to fig. 6, fig. 6 is a flow diagram illustrating video note generation and playing of a target video based on the video note in one embodiment. As shown in fig. 6, after the user completes video note making in the client based on the video note generating method, the video note can be stored in the note library deployed on the server or on other servers. Subsequently, the terminal can pull the video resource from the server according to the video note to review and watch the target video.
When a triggering operation on video entry information occurs, the terminal acquires the video note of the target video linked to the triggered video entry information. According to the video note, the terminal traverses the plurality of image frames of the original video in playing-time order. Specifically, the terminal takes the first image frame of the original video in playing-time order as the image frame of the current playing sequence, and judges whether the playing time of that image frame falls within the start-stop time recorded by the video note. If it does, the terminal further judges whether the video note records a video remark corresponding to the image frame of the current playing sequence or the storage address of a corresponding screenshot. If it does not, the terminal continues traversing to the image frame of the next playing sequence.
When the video note records the storage address of the screenshot corresponding to the image frame of the current playing sequence, the terminal sends a video resource request to the server according to that storage address, so that the server pulls, from the note library or another server, the screenshot corresponding to the image frame of the current playing sequence, the video remark corresponding to the entire target video, and the video remark corresponding to the image frame of the current playing sequence. The terminal displays the requested screenshot in the playing area of the playing page, and displays the video remark corresponding to the entire target video and the video remark corresponding to the image frame of the current playing sequence in the remark area. When the video note does not record the storage address of a screenshot corresponding to the image frame of the current playing sequence, the terminal sends a video resource request to the server according to the playing time of that image frame, so that the server pulls the image frame from the original video according to the playing time. The terminal continues traversing the image frames of subsequent playing sequences in the original video in this manner until the playing time of the traversed image frame is the end time of the target video, thereby completing playback of the target video.
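The frame-by-frame traversal described above can be sketched as follows (an illustrative Python sketch; `fetch_frame` and `fetch_screenshot` stand in for the video resource requests to the server and are assumptions, not part of the embodiment):

```python
def play_target(frame_times, note, fetch_frame, fetch_screenshot):
    """Traverse original-video frames in playing-time order, substituting
    a stored screenshot wherever the video note records its address.

    frame_times: playing times of the original video's frames, in order
    note: {"S": start, "E": end, "shots": {playing_time: storage address}}
    """
    rendered = []
    for t in frame_times:
        if t < note["S"]:
            continue          # before the target video: keep traversing
        if t > note["E"]:
            break             # past the end time: playback is complete
        addr = note["shots"].get(t)
        rendered.append(fetch_screenshot(addr) if addr is not None
                        else fetch_frame(t))
    return rendered

# Hypothetical note: target spans playing times 1..4, one marked frame at t=2
note_a1 = {"S": 1, "E": 4, "shots": {2: "shot_t2.png"}}
played = play_target(
    range(6), note_a1,
    fetch_frame=lambda t: f"frame_{t}",        # stands in for a server pull
    fetch_screenshot=lambda addr: f"image@{addr}",
)
```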
In the playing process, the user can modify the note information of the target video, and the terminal updates the video note of the target video in time according to the modified note information. The user can also make video notes for a certain video segment in the target video.
In the above embodiment, according to the video note, the video resource corresponding to the target video and the corresponding note information are pulled from the server frame by frame, so that the note information can be accurately displayed on the corresponding image frame or in other areas of the playing page, giving the target video a better display effect.
In one embodiment, the video note generating method further includes: updating corresponding video segments in the original video based on the image frames added with the note information; and storing the updated video fragments, and determining the storage addresses as the index addresses of the corresponding video fragments.
A video slice is a storage unit of video data. Compared with pulling all video resources of a video at one time before starting playback, in order to reduce the time needed to start playing, the server can transcode a complete video, i.e., divide one video into a plurality of small video slices, and then store all the divided video slices. Each video slice has a corresponding playing period, which is the time range it occupies when played within the original video. For example, if the playing duration of the first video slice of the original video is 5s and the playing duration of the second video slice is 3s, the playing periods of the first and second video slices in the original video are 0s to 5s and 5s to 8s, respectively.
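The playing periods can be derived from the per-slice durations by a running sum, as in this illustrative Python sketch of the 5s/3s example above:

```python
def playing_periods(durations):
    """Convert per-slice playing durations (seconds) into (start, end)
    playing periods within the original video."""
    periods, t = [], 0
    for d in durations:
        periods.append((t, t + d))  # this slice plays from t to t + d
        t += d                      # next slice starts where this one ends
    return periods

periods = playing_periods([5, 3])
```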
The server also writes the playing duration and the index address of each divided video fragment into an index file, and can request the video fragments in sequence according to the sequence of the video fragments when playing the video, so that the video can be played immediately, and the playing of the whole video is realized. The video clips may be video Stream files in a TS (Transport Stream) format, and the index file may be, for example, an M3U8 file.
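As a hedged illustration of such an index file, the following Python sketch emits a minimal M3U8 playlist for the two slices of the example above (tag set reduced to the basics; the slice URIs are assumptions, not the embodiment's actual file):

```python
def write_m3u8(fragments):
    """fragments: list of (duration_seconds, uri) pairs, one per TS slice."""
    target = max(int(d + 0.999) for d, _ in fragments)  # ceil of longest slice
    lines = ["#EXTM3U", "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{target}"]
    for d, uri in fragments:
        lines.append(f"#EXTINF:{d:.1f},")  # playing duration of this slice
        lines.append(uri)                  # index address of the slice
    lines.append("#EXT-X-ENDLIST")         # complete (non-live) playlist
    return "\n".join(lines)

playlist = write_m3u8([(5.0, "seg0.ts"), (3.0, "seg1.ts")])
```

A player reads the `#EXTINF` durations and slice URIs in order, which is exactly what lets playback start after fetching only the first slice.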
Referring to fig. 7, fig. 7 is a schematic diagram illustrating a playing period corresponding to a video slice of an original video in an embodiment. As shown in fig. 7, the original video includes n video slices, wherein the playing period corresponding to the 0 th video slice is 0 to t1, the playing period corresponding to the 1 st video slice is t1 to t2, and so on.
In particular, each video slice includes a plurality of image frames. If a user adds a video mark to a specific image frame, that image frame in the corresponding video slice needs to be replaced by a screenshot of the image frame containing the video mark (denoted a marked image frame). During video note making, the terminal pulls the video slices of the original video from the server, divides the video slice containing a marked image frame using the playing time of the marked image frame as a division point to obtain a plurality of intermediate slices, and splices the marked image frame and the intermediate slices in playing-time order to obtain the updated video slice.
For example, if the user adds video marks to the image frames at playing times ti and tj in the video slice (t1, t2), the playing times ti and tj may be used as division points to divide the video slice (t1, t2) into three intermediate slices (t1, ti-1), (ti+1, tj-1) and (tj+1, t2). The image frame ti containing the video mark splices the intermediate slices (t1, ti-1) and (ti+1, tj-1); the image frame tj containing the video mark splices the intermediate slices (ti+1, tj-1) and (tj+1, t2). The video slice is thus recombined based on the image frames to which video marks were added, obtaining the updated video slice (t1, ti-1) + ti + (ti+1, tj-1) + tj + (tj+1, t2).
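The division-and-splice procedure above can be sketched as follows (illustrative Python; modeling a slice simply as a list of playing times is an assumption of this sketch):

```python
def split_and_splice(slice_frames, marked):
    """Split a video slice at each marked playing time, then splice the
    intermediate slices back together with the marked frames in between,
    preserving playing-time order.

    slice_frames: playing times of the frames in one slice, in order
    marked: {playing_time: marked image frame (screenshot) object}
    """
    pieces, current = [], []
    for t in slice_frames:
        if t in marked:
            if current:
                pieces.append(current)      # close the intermediate slice
                current = []
            pieces.append([marked[t]])      # insert the marked image frame
        else:
            current.append(("frame", t))    # ordinary frame, kept as-is
    if current:
        pieces.append(current)              # trailing intermediate slice
    # Splice all pieces back in playing-time order -> updated slice
    return [f for piece in pieces for f in piece]

updated = split_and_splice([1, 2, 3, 4, 5], {2: "marked_2", 4: "marked_4"})
```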
In an embodiment, after obtaining the start-stop time and the note information of the target video, the terminal may also send the video identifier of the original video, the start-stop time of the target video, and the note information to the server, and the server updates the video segments including the marked image frames according to the above manner.
In the above embodiment, when the video content of the video fragment is modified in the video note making process, the video data and the index address of the corresponding video fragment are updated in time, so that the time consumed for starting and playing the video can be reduced, and the accuracy of the video note can be ensured.
In one embodiment, the start-stop time includes a start time and an end time; generating an index array corresponding to the target video based on the start-stop time and the note information comprises: determining a playing time period corresponding to a video fragment of an original video; determining a video fragment corresponding to a playing time interval containing starting time as a starting target fragment; determining a video fragment corresponding to a playing time interval containing the ending time as an ending target fragment; generating an index address of a target video according to the index address of the target fragment; and generating an index array based on the start-stop time, the note information and the index address of the target video.
After the playing time period of each video segment of the original video in the original video is determined, the terminal can determine a target segment for generating the target video according to the video segment corresponding to the playing time period including the start-stop time. Specifically, the terminal may first use the video segment corresponding to the playing time period including the starting time as a starting target segment for generating the target video, use the video segment corresponding to the playing time period including the ending time as an ending target segment for generating the target video, and use the video segment between the starting target segment and the ending target segment as an intermediate target segment.
As shown in fig. 7, if the start-stop time of the target video is m to n, where m ∈ (0, t1) and n ∈ (t2, t3), then the 0th video slice of the original video is the starting target slice for generating the target video, the 1st video slice is the intermediate target slice, and the 2nd video slice is the ending target slice.
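The selection of target slices by start-stop time can be sketched as follows (illustrative Python under the assumption that each slice is described only by its playing period):

```python
def target_slices(periods, start, end):
    """Locate the target slices for [start, end]:
    the slice whose playing period contains `start` is the starting target
    slice, the one containing `end` is the ending target slice, and any
    slices between them are intermediate target slices."""
    first = next(i for i, (s, e) in enumerate(periods) if s <= start < e)
    last = next(i for i, (s, e) in enumerate(periods) if s < end <= e)
    return first, list(range(first + 1, last)), last

# Hypothetical periods echoing fig. 7: slices 0..2 cover 0-5, 5-8, 8-12
first, mids, last = target_slices([(0, 5), (5, 8), (8, 12)], 2, 10)
```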
In order to facilitate other equipment or the terminal to obtain the generated target video, the terminal generates an index address of the target video according to the storage addresses of the starting target fragment, the middle target fragment and the ending target fragment, and stores the index address into an index array as an array element so as to obtain the video fragment corresponding to the target video according to the index address.
In an embodiment, after obtaining the start-stop time and the note information of the target video, the terminal may also send the video identifier of the original video, the start-stop time of the target video, and the note information to the server, and the server determines the target segment corresponding to the target video according to the above manner, and determines the index address of the target video according to the index address of the target segment.
In this embodiment, the target segment for generating the target video is directly found out from the video segments of the original video by the start-stop time, and the target video can be generated based on the target segment.
In one embodiment, generating the index address of the target video according to the index address of the target slice comprises: when the starting time is in the playing time interval corresponding to the starting target fragment, segmenting the starting target fragment according to the starting time and the ending time of the playing time interval corresponding to the starting target fragment to obtain a starting fragment; when the ending time is in the playing time interval corresponding to the ending target segment, segmenting the ending target segment according to the ending time and the starting time of the playing time interval corresponding to the ending target segment to obtain an ending segment; determining a video fragment between a starting target fragment and an ending target fragment as an intermediate fragment; and generating the index address of the target video according to the index addresses of the starting fragment, the middle fragment and the ending fragment.
Specifically, in this embodiment, when the start time is exactly the start time of the playing period corresponding to the starting target slice, the entire starting target slice can be regarded as a part of the target video. That is, the starting target slice is directly used as the starting fragment of the target video, realizing multiplexing of the video slices of the original video and saving storage space. For example, referring to fig. 8, fig. 8 is a schematic diagram of video slices of a target video generated from an original video in an embodiment. As shown in fig. 8, if the start time m happens to be t1 and the end time n ∈ (t2, t3), the 1st video slice of the original video can be used as the starting fragment of the target video. In this case, the index address of the starting target slice can be directly used as the index address of the starting fragment.
When the start time falls inside the playing period corresponding to the starting target slice, the whole starting target slice cannot be used as part of the target video; instead, the portion of video from the start time onward needs to be cut from the starting target slice and used as the starting fragment of the target video. For example, referring to fig. 7, if the start time m lies within (0, t1), the video data corresponding to the interval (m, t1) needs to be cut from the starting target slice, stored separately as the starting fragment of the target video, and its storage address used as the index address of the starting fragment.
Further, when the end time is exactly the end time of the playing period corresponding to the ending target slice, the ending target slice is directly used as the ending fragment of the target video. For example, referring to fig. 8, if the end time n is exactly t3, the 2nd video slice can be directly used as the ending fragment of the target video. When the end time falls inside the playing period corresponding to the ending target slice, the portion of video between the start time of that playing period and the end time of the target video is cut out, and the cut video is used as the ending fragment of the target video. For example, referring to fig. 8, if the end time n lies within (t2, t3), the video data corresponding to the interval (t2, n) needs to be cut from the ending target slice and used as the ending fragment of the target video.
Further, the terminal directly uses the intermediate target slice as the intermediate fragment of the target video, obtains the index addresses of the starting fragment, the intermediate fragment and the ending fragment, and generates the index address of the target video from them. It should be noted that the index addresses used in this embodiment are all index addresses of the updated video slices.
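The boundary-cutting rule above can be sketched as follows (illustrative Python; describing a fragment as a (slice index, cut start, cut end) triple is an assumption of this sketch):

```python
def boundary_fragments(periods, start, end, first, last):
    """Decide whether the boundary target slices are reused whole or cut.
    A fragment whose cut range equals its slice's full playing period is
    the slice reused as-is; otherwise it must be cut and stored separately.
    """
    s0, e0 = periods[first]
    start_frag = (first, max(start, s0), e0)  # cut from `start` if inside
    s1, e1 = periods[last]
    end_frag = (last, s1, min(end, e1))       # cut up to `end` if inside
    return start_frag, end_frag

# Hypothetical periods as in the fig. 7 sketch: target spans 2 to 10
start_frag, end_frag = boundary_fragments([(0, 5), (5, 8), (8, 12)], 2, 10, 0, 2)
# start_frag cuts (2, 5) from slice 0; end_frag cuts (8, 10) from slice 2
```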
Of course, when the determined starting target segment and ending target segment are adjacent video segments of the original video, no other video segment exists between the starting target segment and the ending target segment, and in this case, the target video is generated according to the starting target segment and the ending target segment.
In the above embodiment, the intermediate target segment is directly used as the intermediate segment corresponding to the target video, so that multiplexing of the video segments of the original video is realized, and the storage space can be saved.
In one embodiment, the video note generating method further includes: when a target video playing operation occurs, acquiring a video note corresponding to the target video; requesting video fragmentation according to an index address in the video note; and playing the target video based on the requested video fragment, and displaying corresponding note information on a playing page of the target video when the target video is played to the image frame added with the note information.
The terminal can display video entry information corresponding to the target video on the video playing entry page. When a user clicks the video entry information corresponding to a target video, the terminal acquires the video note of the target video linked to that video entry information, requests the updated target fragments according to the index address in the video note, and plays the target video based on the updated target fragments. During playback, the video remark corresponding to the entire target video is displayed according to the note information recorded by the video note, and the corresponding video remark and/or video mark is displayed on the playing page when playback reaches an image frame to which note information was added.
In the above embodiment, the target video is played based on the target segment, which not only reduces the time consumed for starting playing of the video, but also can realize direct multiplexing of a large number of video segments, and can save the storage space.
In one embodiment, the video note generating method further includes: when a note sharing operation occurs, acquiring sharing information of a video note; and sending the shared information and the video note to a server so that the server pushes the video note to a terminal corresponding to the user identifier recorded by the shared information.
The sharing information is information indicating that the maker of a video note wishes to share the video note with other users, and includes the user identifiers of the other users with whom the video note is to be shared.
Specifically, the terminal may send the configured sharing information to the server together with the video note. Alternatively, some time after the terminal has sent the video note to the server, the user may initiate a note sharing request from the terminal to the server. The note sharing request carries the sharing information and the video identifier of the target video to be shared. The server verifies, according to the video identifier of the target video, whether the user initiating the note sharing request is the maker of that target video. When the identity verification passes, the server pushes the corresponding video note to the terminals of the corresponding users according to the user identifiers recorded in the sharing information.
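A minimal sketch of the verification-and-push step (illustrative Python; the `owner` field and the dictionary shapes are assumptions, not prescribed by the embodiment):

```python
def share_note(video_notes, request):
    """Verify that the requester is the maker of the target video's note,
    then return (user identifier, video note) pairs to push.

    video_notes: {video_id: {"owner": user_id, "note": ...}}
    request: {"video_id": ..., "requester": ..., "share_with": [user_id, ...]}
    """
    note = video_notes.get(request["video_id"])
    if note is None or note["owner"] != request["requester"]:
        return []  # identity verification failed: push nothing
    return [(uid, note["note"]) for uid in request["share_with"]]

notes = {"Video_A1": {"owner": "User_X", "note": "note_a1"}}
pushed = share_note(notes, {"video_id": "Video_A1", "requester": "User_X",
                            "share_with": ["User_Y", "User_Z"]})
```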
In this embodiment, video note data can be shared within a range the user specifies through configured user identifiers, allowing data to be shared freely while the data security of the video note is ensured.
As shown in fig. 9, in a specific embodiment, the video note generating method can be applied to the terminal 110 in fig. 1, and specifically includes the following steps:
S902, when a target video editing operation occurs, determine the start and end times of the target video in the original video.
S904, acquire the video remark entered on the remark panel in the playing page.
S906, when an image frame association operation occurs, store the video remark in correspondence with the playing time of the image frame currently displayed on the playing page, obtaining note information comprising the video remark and the corresponding playing time.
S908, according to a video marking operation triggered on an image frame displayed on the playing page, capture the image frame containing the video mark, obtaining note information comprising the captured image frame and the corresponding playing time.
S910, update the corresponding video fragment in the original video based on the image frame to which the note information is added.
S912, store the updated video fragments and use their storage addresses as the index addresses of the corresponding video fragments.
S914, determine the playing time interval corresponding to each video fragment of the original video.
S916, determine the video fragment whose playing time interval contains the start time as the starting target fragment.
S918, determine the video fragment whose playing time interval contains the end time as the ending target fragment.
S920, when the start time falls within the playing time interval of the starting target fragment, split the starting target fragment between the start time and the end of that interval to obtain the starting fragment.
S922, when the end time falls within the playing time interval of the ending target fragment, split the ending target fragment between the beginning of that interval and the end time to obtain the ending fragment.
S924, determine the video fragments between the starting target fragment and the ending target fragment as intermediate fragments.
S926, generate the index address of the target video from the index addresses of the starting fragment, the intermediate fragments, and the ending fragment.
S928, generate an index array based on the start and end times, the note information, and the index address of the target video.
S930, store the index array in correspondence with the video identifier of the original video and the user identifier recorded by the client playing the original video, obtaining a video note corresponding to the target video.
S932, determine, based on the note information, the sequence tag and video entry information corresponding to each target video.
S934, establish a link between each piece of video entry information and the video note of the corresponding target video.
S936, arrange and display the video entry information according to the sequence tags.
S938, when a triggering operation on the video entry information occurs, acquire the video note corresponding to the target video linked by the triggered video entry information; the video note comprises a video identifier and a corresponding index array.
S940, according to the start and end times recorded in the index array, pull from the image frames corresponding to the video identifier those image frames whose playing times are the start or end time or fall between them, and request the note information corresponding to the pulled image frames.
S942, play the target video based on the pulled image frames, and display the note information when playback reaches the corresponding image frame.
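The fragment-selection logic of steps S914 through S926 can be sketched as follows. This is an illustrative reconstruction, with playing time intervals and index addresses represented as plain tuples and strings rather than any real storage format, and the `?range=` suffix is a hypothetical way of denoting a cut boundary fragment:

```python
# Illustrative sketch of steps S914-S926: select the video fragments whose
# playing time intervals overlap the target video's start and end times,
# cut the boundary fragments at those times, and collect index addresses.
# The interval/address representations are assumptions for illustration.

def build_target_index(fragments, start, end):
    """fragments: list of (interval_start, interval_end, index_address),
    sorted by playing time. Returns the index addresses of the starting,
    intermediate, and ending fragments making up the target video."""
    addresses = []
    for i_start, i_end, addr in fragments:
        if i_end <= start or i_start >= end:
            continue                               # fragment lies outside the target video
        if i_start < start < i_end:
            # starting target fragment: cut from the start time to the interval end
            addresses.append(f"{addr}?range={start}-{i_end}")
        elif i_start < end < i_end:
            # ending target fragment: cut from the interval start to the end time
            addresses.append(f"{addr}?range={i_start}-{end}")
        else:
            addresses.append(addr)                 # intermediate fragment, used as-is
    return addresses

frags = [(0, 10, "seg0"), (10, 20, "seg1"), (20, 30, "seg2")]
addrs = build_target_index(frags, 5, 25)
# addrs -> ["seg0?range=5-10", "seg1", "seg2?range=20-25"]
```

Note how a start or end time that coincides with a fragment boundary needs no cut, matching the conditions in steps S920 and S922, which only split a target fragment when the time falls strictly inside its playing time interval.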
Fig. 2 and 9 are flow diagrams of a video note generation method in one embodiment. It should be understood that although the steps in the flowcharts of fig. 2 and 9 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2 and 9 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially; they may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 10, a video note generation apparatus 1000 is provided, the apparatus comprising a target video clipping module 1002, a note information adding module 1004, and a video note storage module 1006, wherein:
the target video clipping module 1002 is configured to determine a start-stop time of the target video in the original video when a target video editing operation occurs.
The note information adding module 1004 is configured to obtain note information added to the target video on the playing page of the original video.
The video note storage module 1006 is configured to generate an index array corresponding to the target video based on the start-stop time and the note information; and correspondingly storing the index array and the video identification of the original video to obtain the video note corresponding to the target video.
In one embodiment, the note information adding module 1004 is further configured to obtain a video note entered based on a note panel within the play page; when the storage operation of the video remarks occurs, displaying inquiry prompts of the associated image frames; when the confirmation operation of the inquiry prompt is detected, the video remarks and the playing time of the currently displayed image frame of the playing page are correspondingly stored to obtain note information; the note information includes the video notes and the play time of the associated image frames.
In one embodiment, the note information adding module 1004 is further configured to display image frames of the original video on the playing page, and to capture, according to a video marking operation triggered on an image frame, the image frame containing the video mark to obtain note information; the note information includes the captured image frame and the corresponding playing time.
In one embodiment, the note information adding module 1004 is further configured to invoke a voice mark control on the image frame when a video marking operation occurs; record voice mark data through the voice mark control; display, in the voice mark control, a text mark obtained by recognizing the voice mark data; and, when a confirmation operation corresponding to the text mark is detected, replace the voice mark control with the displayed text mark and capture the image frame containing the text mark.
In one embodiment, as shown in fig. 11, the video note storage module 1006 includes an index address update module 10062, configured to update the corresponding video fragment in the original video based on the image frame to which the note information is added, store the updated video fragments, and use their storage addresses as the index addresses of the corresponding video fragments.
In one embodiment, the start-stop time includes a start time and an end time; the video note storage module 1006 includes an index address determination module 10064, configured to determine a playing time period corresponding to a video segment of an original video; determining a video fragment corresponding to a playing time interval containing starting time as a starting target fragment; determining a video fragment corresponding to a playing time interval containing the ending time as an ending target fragment; generating an index address of a target video according to the index address of the target fragment; and generating an index array based on the start-stop time, the note information and the index address of the target video.
In an embodiment, the index address determining module 10064 is further configured to, when the start time is in the playing time period corresponding to the starting target segment, segment the starting target segment according to the start time and the end time of the playing time period corresponding to the starting target segment to obtain a starting segment; when the ending time is in the playing time interval corresponding to the ending target segment, segmenting the ending target segment according to the ending time and the starting time of the playing time interval corresponding to the ending target segment to obtain an ending segment; determining a video fragment between a starting target fragment and an ending target fragment as an intermediate fragment; and generating the index address of the target video according to the index addresses of the starting fragment, the middle fragment and the ending fragment.
In an embodiment, as shown in fig. 11, the video note generating apparatus 1000 further includes a video note playing module 1008, configured to obtain the video note corresponding to a target video when a target video playing operation occurs; request the video fragments according to the index addresses in the video note; and play the target video based on the requested video fragments, displaying the corresponding note information on the playing page of the target video when playback reaches an image frame to which note information is added.
In one embodiment, the video note storage module 1006 is further configured to splice the index arrays of the multiple target videos according to the playing time sequence to obtain a comprehensive index array; and correspondingly storing the comprehensive index array, the video identification of the original video and the user identification recorded by the client for playing the original video to obtain a comprehensive video note corresponding to the original video.
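The splicing of multiple index arrays into a comprehensive index array can be illustrated with a small sketch; the per-entry layout (`start`, `end`, and a note field) is an assumption for demonstration only:

```python
# Sketch of merging several target videos' index arrays into one
# comprehensive index array ordered by playing time, then storing it
# keyed by video identifier and user identifier. The dict layout is
# an illustrative assumption.

def merge_index_arrays(index_arrays):
    """Splice the per-target index arrays in playing time order."""
    return sorted(index_arrays, key=lambda entry: entry["start"])

notes_store = {}

def store_comprehensive_note(video_id, user_id, index_arrays):
    merged = merge_index_arrays(index_arrays)
    notes_store[(video_id, user_id)] = merged    # keyed by video + user identifier
    return merged

merged = store_comprehensive_note(
    "video-001", "alice",
    [{"start": 30, "end": 40, "note": "part B"},
     {"start": 5, "end": 12, "note": "part A"}])
# merged[0]["note"] -> "part A"
```

Keying the stored result by both the video identifier and the user identifier reflects the correspondence described above, so each user's comprehensive note for an original video can be retrieved independently.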
In one embodiment, as shown in fig. 11, the video note generating apparatus 1000 further includes a video link presenting module 1010, configured to determine, based on the note information, a sequence tag and video entry information corresponding to each target video; establishing a link between the video entry information and the video note of the corresponding target video; and arranging and displaying the plurality of video entry information according to the sequence tags.
In one embodiment, the video note playing module 1008 is further configured to, when a triggering operation on the video entry information occurs, obtain the video note corresponding to the target video linked by the triggered video entry information, the video note comprising a video identifier and a corresponding index array; pull, according to the start and stop times recorded in the index array, the image frames whose playing times equal or fall between those times from the image frames corresponding to the video identifier, and request the note information corresponding to the pulled image frames; and play the target video based on the image frames, displaying the note information when playback reaches the corresponding image frame.
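A minimal sketch of this frame-pulling step follows, assuming for illustration that image frames are keyed by playing time and that note information is stored per frame time; both stores are hypothetical:

```python
# Sketch of pulling the image frames whose playing times fall at or
# between the recorded start and stop times, together with their note
# information. The frame and note stores are illustrative assumptions.

frames = {t: f"frame@{t}" for t in range(0, 60, 5)}       # playing time -> frame
notes = {10: "definition shown here", 25: "key formula"}  # playing time -> note info

def pull_target_frames(start, end):
    """Return (frame, note) pairs for playing times in [start, end];
    frames without note information pair with None."""
    selected = [(t, frames[t]) for t in sorted(frames) if start <= t <= end]
    return [(frame, notes.get(t)) for t, frame in selected]

clips = pull_target_frames(10, 25)
# clips[0] -> ("frame@10", "definition shown here")
```

Because the original video's frames are reused directly, only the time range and per-frame note lookups are needed to play the target video with its notes, which matches the multiplexing of image frames described in the embodiments.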
In one embodiment, as shown in fig. 11, the video note generating apparatus 1000 further includes a video note sharing module 1012, configured to obtain sharing information of the video note when a note sharing operation occurs; and sending the shared information and the video note to a server so that the server pushes the video note to a terminal corresponding to the user identifier recorded by the shared information.
With the video note generating device, making a video note for a target video can be triggered during playback of the original video. According to the triggered target video editing operation, the start and end times of the target video in the original video and the note information added for the target video can be determined, and a video note can be generated based on those times, the note information, and the video identifier of the original video. A user can thus make video notes without leaving the client, which improves note-taking efficiency; and since notes made directly in the client are all associated with the original video, they are convenient to store and manage. In addition, only the index array generated from the start and end times and the note information is stored in correspondence with the video identifier of the original video: the video resource is not stored repeatedly, the target video can be played directly from the video note, and the original video and the target video share the same image frames.
FIG. 12 is a diagram illustrating the internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 in fig. 1. As shown in fig. 12, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the video note generation method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the video note generation method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device may be a touch layer covering the display screen; a key, trackball, or touchpad disposed on the housing of the computer device; or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures related to the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, the video note generation apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device such as that shown in FIG. 12. The memory of the computer device may store therein various program modules constituting the video note generating apparatus, such as a target video clipping module, a note information adding module, and a video note storage module shown in fig. 10. The computer program constituted by the respective program modules causes the processor to execute the steps in the video note generating method of the respective embodiments of the present application described in the present specification.
For example, the computer device shown in fig. 12 may perform step S202 through the target video clipping module in the video note generating apparatus shown in fig. 10, perform step S204 through the note information adding module, and perform steps S206 and S208 through the video note storage module.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the video note generation method described above. Here, the steps of the video note generating method may be the steps of the video note generating methods of the above embodiments.
In one embodiment, a computer readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of the video note generation method described above. Here, the steps of the video note generating method may be the steps of the video note generating methods of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above examples express only several embodiments of the present application, which are described specifically and in detail, but they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (15)
1. A video note generation method, comprising:
when a target video editing operation occurs, determining the starting and ending time of the target video in an original video;
acquiring note information added to the target video on the playing page of the original video;
replacing, based on a screenshot of the marked image frame to which the video note is added, the image frame of the corresponding video fragment in the original video; dividing, with the playing time of the marked image frame as a dividing point, the video fragment containing the marked image frame to obtain a plurality of intermediate fragments; and splicing the marked image frame and the intermediate fragments in playing time order to obtain updated video fragments;
storing the updated video fragments, and determining storage addresses as index addresses of the corresponding video fragments;
determining an index address of a target video according to the index address of the corresponding video fragment;
generating an index array corresponding to the target video based on the start-stop time, the note information and the index address of the target video;
and correspondingly storing the index array and the video identifier of the original video to obtain a video note corresponding to the target video.
2. The method of claim 1, wherein the obtaining of note information added to the playing page of the original video for the target video comprises:
acquiring a video remark recorded on the basis of a remark panel in the playing page;
when image frame association operation occurs, correspondingly storing the video remarks and the playing time of the currently displayed image frame of the playing page to obtain note information; the note information includes the video notes and the play time of the associated image frames.
3. The method of claim 1, wherein the obtaining of note information added to the playing page of the original video for the target video comprises:
displaying the image frame of the original video based on a playing page;
intercepting the image frame containing the video mark according to the video mark operation triggered by the image frame to obtain note information; the note information comprises the intercepted image frames and corresponding playing time.
4. The method of claim 3, wherein the intercepting of the image frame containing the video mark according to a video marking operation triggered on the image frame comprises:
when a video marking operation occurs, calling up a voice marking control in the image frame;
recording voice mark data through the voice mark control;
displaying a text mark obtained by recognizing the voice mark data in the voice mark control;
when a confirmation operation corresponding to the text mark is detected, replacing the voice mark control to display the text mark, and intercepting an image frame containing the text mark.
5. The method of claim 1, wherein the start-stop time comprises a start time and an end time;
the generating an index array corresponding to the target video based on the start-stop time, the note information and the index address of the target video comprises:
determining a playing time period corresponding to the video fragment of the original video;
determining a video fragment corresponding to a playing time interval containing starting time as a starting target fragment;
determining a video fragment corresponding to a playing time interval containing the ending time as an ending target fragment;
generating an index address of the target video according to the index address of the target fragment;
and generating an index array based on the starting and ending time, the note information and the index address of the target video.
6. The method of claim 5, wherein the generating the index address of the target video according to the index address of the target slice comprises:
when the starting time is in the playing time interval corresponding to the starting target fragment, segmenting the starting target fragment according to the starting time and the ending time of the playing time interval corresponding to the starting target fragment to obtain a starting fragment;
when the ending time is in the playing time interval corresponding to the ending target segment, segmenting the ending target segment according to the ending time and the starting time of the playing time interval corresponding to the ending target segment to obtain an ending segment;
determining a video fragment between a starting target fragment and an ending target fragment as an intermediate fragment;
and generating the index address of the target video according to the index addresses of the starting fragment, the middle fragment and the ending fragment.
7. The method of claim 6, further comprising:
when a target video playing operation occurs, acquiring a video note corresponding to the target video;
requesting the video fragments according to the index address in the video note;
and playing the target video based on the requested video fragment, and displaying corresponding note information on a playing page of the target video when the target video is played to the image frame added with the note information.
8. The method of claim 1, wherein the storing the index array in correspondence with the video identifier of the original video to obtain the video note corresponding to the target video comprises:
splicing the index arrays of the target videos to obtain a comprehensive index array;
and correspondingly storing the comprehensive index array, the video identification of the original video and the user identification recorded by the client for playing the original video to obtain a comprehensive video note corresponding to the original video.
9. The method of claim 1, further comprising:
determining a sequence tag and video entry information corresponding to each target video based on the note information;
establishing a link between the video entry information and a video note of a corresponding target video;
and arranging and displaying the plurality of video entry information according to the sequence tags.
10. The method of claim 9, further comprising: when a triggering operation on the video entry information occurs, acquiring a video note corresponding to the target video linked by the triggered video entry information, the video note comprising a video identifier and a corresponding index array;
according to the start-stop time recorded by the index array, pulling image frames with the playing time being the start-stop time or between the start-stop time from a plurality of image frames corresponding to the video identification, and requesting note information corresponding to the pulled image frames;
and playing the target video based on the image frames, and displaying note information when playing to the corresponding image frames.
11. The method of claim 1, further comprising:
when a note sharing operation occurs, acquiring sharing information of the video note;
and sending the sharing information and the video note to a server so that the server pushes the video note to a terminal corresponding to the user identifier recorded by the sharing information.
12. A video note generation apparatus, the apparatus comprising:
the target video cutting module is used for determining the starting and stopping time of the target video in the original video when the target video editing operation occurs;
a note information adding module, configured to obtain note information added to the target video on the playing page of the original video;
the video note storage module is used for replacing, based on a screenshot of the marked image frame to which the video note is added, the image frame of the corresponding video fragment in the original video; dividing, with the playing time of the marked image frame as a dividing point, the video fragment containing the marked image frame to obtain a plurality of intermediate fragments; and splicing the marked image frame and the intermediate fragments in playing time order to obtain updated video fragments; storing the updated video fragments and using the storage addresses as the index addresses of the corresponding video fragments; generating an index array corresponding to the target video based on the start-stop time, the note information, and the index address of the target video; and storing the index array in correspondence with the video identifier of the original video to obtain a video note corresponding to the target video.
13. The device of claim 12, wherein the note information adding module is further configured to obtain a video note entered based on a note panel in the play page; when the storage operation of the video remarks occurs, displaying inquiry prompts of the associated image frames; when the confirmation operation of the inquiry prompt is detected, the video remarks and the playing time of the currently displayed image frame of the playing page are correspondingly stored to obtain note information; the note information includes the video notes and the play time of the associated image frames.
14. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 11.
15. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910666679.0A CN110381382B (en) | 2019-07-23 | 2019-07-23 | Video note generation method and device, storage medium and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910666679.0A CN110381382B (en) | 2019-07-23 | 2019-07-23 | Video note generation method and device, storage medium and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110381382A CN110381382A (en) | 2019-10-25 |
CN110381382B true CN110381382B (en) | 2021-02-09 |
Family
ID=68255034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910666679.0A Active CN110381382B (en) | 2019-07-23 | 2019-07-23 | Video note generation method and device, storage medium and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110381382B (en) |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112825081B (en) * | 2019-11-20 | 2024-06-28 | 云丁网络技术(北京)有限公司 | Video information processing method, device, electronic equipment, processor and readable medium |
CN111314792B (en) * | 2020-02-27 | 2022-04-08 | 北京奇艺世纪科技有限公司 | Note generation method, electronic device and storage medium |
CN113392272A (en) * | 2020-03-11 | 2021-09-14 | 阿里巴巴集团控股有限公司 | Method and device for voice marking of pictures and videos |
CN111552860B (en) * | 2020-04-26 | 2023-10-31 | 北京奇艺世纪科技有限公司 | Feed acquisition method and device, electronic equipment and storage medium |
CN111556371A (en) * | 2020-05-20 | 2020-08-18 | 维沃移动通信有限公司 | Note recording method and electronic equipment |
CN111654749B (en) * | 2020-06-24 | 2022-03-01 | 百度在线网络技术(北京)有限公司 | Video data production method and device, electronic equipment and computer readable medium |
CN111833917A (en) * | 2020-06-30 | 2020-10-27 | 北京印象笔记科技有限公司 | Information interaction method, readable storage medium and electronic device |
CN111782915B (en) * | 2020-06-30 | 2024-04-19 | 北京百度网讯科技有限公司 | Information display method and device, electronic equipment and medium |
CN111711862A (en) * | 2020-06-30 | 2020-09-25 | 上海幽癸信息科技有限公司 | Marking method, playing method and retrieval method of audio and video clips |
CN111859856A (en) * | 2020-06-30 | 2020-10-30 | 维沃移动通信有限公司 | Information display method and device, electronic equipment and storage medium |
CN113936697B (en) * | 2020-07-10 | 2023-04-18 | 北京搜狗智能科技有限公司 | Voice processing method and device for voice processing |
CN111866548A (en) * | 2020-07-17 | 2020-10-30 | 北京欧应信息技术有限公司 | Marking method applied to medical video |
CN112087656B (en) * | 2020-09-08 | 2022-10-04 | 远光软件股份有限公司 | Online note generation method and device and electronic equipment |
CN112084756B (en) * | 2020-09-08 | 2023-10-10 | 远光软件股份有限公司 | Conference file generation method and device and electronic equipment |
CN112040277B (en) * | 2020-09-11 | 2022-03-04 | 腾讯科技(深圳)有限公司 | Video-based data processing method and device, computer and readable storage medium |
CN112087657B (en) * | 2020-09-21 | 2024-02-09 | 腾讯科技(深圳)有限公司 | Data processing method and device |
CN112099723B (en) * | 2020-09-23 | 2022-08-16 | 努比亚技术有限公司 | Association control method, device and computer readable storage medium |
CN114359920A (en) * | 2020-09-30 | 2022-04-15 | 北京小米移动软件有限公司 | Image processing method, device, equipment and storage medium |
CN112218118A (en) * | 2020-10-13 | 2021-01-12 | 湖南快乐阳光互动娱乐传媒有限公司 | Audio and video clipping method and device |
CN114449333B (en) * | 2020-10-30 | 2023-09-01 | 华为终端有限公司 | Video note generation method and electronic equipment |
CN114827753B (en) * | 2021-01-22 | 2023-10-27 | 腾讯科技(北京)有限公司 | Video index information generation method and device and computer equipment |
CN113099256B (en) * | 2021-04-01 | 2022-11-08 | 读书郎教育科技有限公司 | Method and system for playing back videos and adding voice notes in smart class |
CN112839258A (en) * | 2021-04-22 | 2021-05-25 | 北京世纪好未来教育科技有限公司 | Video note generation method, video note playing method, video note generation device, video note playing device and related equipment |
CN113283220A (en) * | 2021-05-18 | 2021-08-20 | 维沃移动通信有限公司 | Note recording method, device and equipment and readable storage medium |
CN113422998A (en) * | 2021-05-21 | 2021-09-21 | 北京奇艺世纪科技有限公司 | Method, device, equipment and storage medium for generating short video and note content |
CN113506608B (en) * | 2021-06-25 | 2024-03-19 | 青岛海信医疗设备股份有限公司 | Ultrasonic film processing method and ultrasonic equipment |
CN113395605B (en) * | 2021-07-20 | 2022-12-13 | 上海哔哩哔哩科技有限公司 | Video note generation method and device |
CN114143520B (en) * | 2021-11-29 | 2023-09-26 | 中船重工(武汉)凌久电子有限责任公司 | Method for realizing multi-channel HDMI interface transmission and automatic correction |
CN113949920A (en) * | 2021-12-20 | 2022-01-18 | 深圳佑驾创新科技有限公司 | Video annotation method and device, terminal equipment and storage medium |
CN114501112B (en) * | 2022-01-24 | 2024-03-22 | 北京百度网讯科技有限公司 | Method, apparatus, device, medium, and article for generating video notes |
CN116980469A (en) * | 2022-04-24 | 2023-10-31 | 北京字跳网络技术有限公司 | Multimedia sharing method, device, equipment and medium |
CN114979769A (en) * | 2022-06-01 | 2022-08-30 | 山东福生佳信科技股份有限公司 | Video continuous playing progress management system and method |
CN115119061A (en) * | 2022-06-15 | 2022-09-27 | 深圳康佳电子科技有限公司 | Video note generation method based on infinite screen system and related equipment |
CN115134650A (en) * | 2022-06-27 | 2022-09-30 | 上海哔哩哔哩科技有限公司 | Video note display method and device |
CN115052192A (en) * | 2022-07-25 | 2022-09-13 | 维沃移动通信有限公司 | Video processing method and device |
CN115474089A (en) * | 2022-08-12 | 2022-12-13 | 深圳市大头兄弟科技有限公司 | Audio and video online examination method and related equipment |
CN116844166B (en) * | 2023-08-24 | 2023-11-24 | 青岛罗博数码科技有限公司 | Video positioning device and method based on learning behavior |
CN117910557B (en) * | 2024-01-10 | 2024-07-26 | 广东职业技术学院 | Information processing method, system and medium for digital media |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104462207A (en) * | 2014-11-03 | 2015-03-25 | 陕西师范大学 | Multi-piecemeal learning resource labeling method for distributed learning environment |
CN104603807A (en) * | 2012-08-28 | 2015-05-06 | 微软公司 | Mobile video conferencing with digital annotation |
CN105072460A (en) * | 2015-07-15 | 2015-11-18 | 中国科学技术大学先进技术研究院 | Information annotation and association method, system and device based on VCE |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6988245B2 (en) * | 2002-06-18 | 2006-01-17 | Koninklijke Philips Electronics N.V. | System and method for providing videomarks for a video program |
US8307273B2 (en) * | 2002-12-30 | 2012-11-06 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and apparatus for interactive network sharing of digital video content |
CN101491089A (en) * | 2006-03-28 | 2009-07-22 | 思科媒体方案公司 | Embedded metadata in a media presentation |
CN101127870A (en) * | 2007-09-13 | 2008-02-20 | 深圳市融合视讯科技有限公司 | A creation and use method for video stream media bookmark |
CN103488661A (en) * | 2013-03-29 | 2014-01-01 | 吴晗 | Audio/video file annotation system |
JP2017503394 (en) * | 2014-12-14 | 2017-01-26 | SZ DJI Technology Co., Ltd. | Video processing method, video processing device, and display device |
CN106303723B (en) * | 2016-08-11 | 2020-10-16 | 网易有道信息技术(杭州)有限公司 | Video processing method and device |
CN106803992B (en) * | 2017-02-14 | 2020-05-22 | 北京时间股份有限公司 | Video editing method and device |
CN109194887B (en) * | 2018-10-26 | 2021-11-30 | 深圳亿幕信息科技有限公司 | Cloud shear video recording and editing method and plug-in |
- 2019-07-23: Application CN201910666679.0A filed in China (CN); granted as patent CN110381382B, status Active
Non-Patent Citations (1)
Title |
---|
"Research on the Interactive Design of Online Video Resources in MOOC Learning" (《MOOC学习中在线视频资源的交互性设计研究》); Xie Huiyuan et al.; Digital Education (《数字教育》); 2018-10-20; full text *
Also Published As
Publication number | Publication date |
---|---|
CN110381382A (en) | 2019-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110381382B (en) | Video note generation method and device, storage medium and computer equipment | |
CN110213672B (en) | Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment | |
US11457256B2 (en) | System and method for video conversations | |
US20190253474A1 (en) | Media production system with location-based feature | |
CN111400518B (en) | Method, device, terminal, server and system for generating and editing works | |
US8826117B1 (en) | Web-based system for video editing | |
CN109474844B (en) | Video information processing method and device and computer equipment | |
CN110913241B (en) | Video retrieval method and device, electronic equipment and storage medium | |
US20120177345A1 (en) | Automated Video Creation Techniques | |
CN110475140B (en) | Bullet screen data processing method and device, computer readable storage medium and computer equipment | |
JP2006155384A (en) | Video comment input/display method and device, program, and storage medium with program stored | |
CN110019933A (en) | Video data handling procedure, device, electronic equipment and storage medium | |
CN111654749B (en) | Video data production method and device, electronic equipment and computer readable medium | |
CN110058887B (en) | Video processing method, video processing device, computer-readable storage medium and computer equipment | |
US10062413B2 (en) | Media-production system with social media content interface feature | |
CN110958470A (en) | Multimedia content processing method, device, medium and electronic equipment | |
CN110046263B (en) | Multimedia recommendation method, device, server and storage medium | |
WO2023029984A1 (en) | Video generation method and apparatus, terminal, server, and storage medium | |
CN115174506B (en) | Session information processing method, apparatus, readable storage medium and computer device | |
CN110891198B (en) | Video playing prompt method, multimedia playing prompt method, bullet screen processing method and device | |
KR101328270B1 (en) | Annotation method and augmenting video process in video stream for smart tv contents and system thereof | |
CN114449361B (en) | Media data playing method and device, readable storage medium and computer equipment | |
WO2024146612A1 (en) | Media interaction method and apparatus, device and storage medium | |
CN112019936B (en) | Method, device, storage medium and computer equipment for controlling video playing | |
CN109101964B (en) | Method, device and storage medium for determining head and tail areas in multimedia file |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||