WO2021258655A1 - Video data production method and device, electronic device, and computer-readable medium - Google Patents


Info

Publication number
WO2021258655A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
user
tag
information
video data
Prior art date
Application number
PCT/CN2020/134291
Other languages
English (en)
French (fr)
Inventor
李卫国
Original Assignee
百度在线网络技术(北京)有限公司
Priority date
Filing date
Publication date
Application filed by 百度在线网络技术(北京)有限公司
Priority to KR1020217027687A (published as KR20210114536A)
Priority to JP2021550025A (published as JP7394143B2)
Priority to EP20919374.7A (published as EP3958580A4)
Priority to US17/460,008 (published as US20210397652A1)
Publication of WO2021258655A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8352 Generation of protective data involving content or source identification data, e.g. Unique Material Identifier [UMID]
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Definitions

  • the embodiments of the present disclosure relate to the technical fields of computers and video image processing, and in particular to a method and device for producing video data, electronic equipment, and computer-readable media.
  • Video can express information more intuitively and clearly, so video is widely disseminated and used as an important communication carrier.
  • the embodiments of the present disclosure provide a method and device for producing video data, electronic equipment, and computer-readable media.
  • embodiments of the present disclosure provide a method for producing video data, which includes:
  • the tag information, the time stamp, and the video data of the original video are associated and integrated to generate comprehensive video data carrying the tag information.
  • the obtaining the tag information added by the user for the video image includes:
  • before the associating and integrating of the tag information, the time stamp, and the video data of the original video to generate comprehensive video data carrying the tag information, the method further includes:
  • the tag auxiliary information is information that explains the tag and limits usage rights
  • the associating and integrating the tag information, the time stamp, and the video data of the original video to generate comprehensive video data carrying the tag information includes:
  • the tag information, the time stamp, the tag auxiliary information and the video data of the original video are associated and integrated to generate a comprehensive video material carrying the tag information.
  • the tag auxiliary information includes at least one of user information, user configuration information, and identification of the original video.
  • the user information includes a user account and/or an identification of a terminal device used by the user; the user configuration information includes user authority information.
  • the method further includes:
  • the tag information corresponding to the tag is displayed.
  • the method further includes:
  • new comprehensive video data carrying the tag information is generated through association and integration.
  • the method further includes:
  • the comprehensive video data is shared to a sharing platform, so that other users on the sharing platform can obtain the comprehensive video data.
  • before the obtaining of the time stamp of the currently played video image in the original video, the method further includes:
  • Video usage information includes one or more of the video playback volume, replay rate, user comments, and the number of likes.
  • the tag information includes tags and/or notes
  • a video data production device which includes:
  • the trigger module is used to trigger the operation of inserting the tag in response to the user's trigger instruction
  • the first obtaining module is used to obtain the timestamp of the currently played video image in the original video
  • the second acquiring module is configured to acquire tag information added by the user for the video image
  • the association module is used for associating and integrating the tag information, the time stamp, and the video data of the original video to generate comprehensive video data carrying the tag information.
  • an electronic device which includes:
  • one or more processors;
  • a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement any one of the above-mentioned video data production methods;
  • one or more I/O interfaces, connected between the processors and the memory, configured to implement information interaction between the processors and the memory.
  • embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, and when the program is executed by a processor, any one of the above-mentioned methods for producing video data is realized.
  • the video data production method provided by the embodiments of the present disclosure responds to a user-triggered operation of inserting a tag by obtaining the time stamp of the currently played video image in the original video, obtains the tag information added by the user for the video image, and associates and integrates the tag information, the time stamp, and the video data of the original video to generate comprehensive video data carrying the tag information.
  • the user can directly add the tag information to the data of the original video when watching the original video.
  • the operation is convenient, and the integrity of the original video is retained.
  • when the comprehensive video data is watched again later, the tagged position can be located quickly and accurately, reducing search time, improving learning efficiency, and thereby improving the user experience.
  • FIG. 1 is a flowchart of a method for producing video data provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a tag editing page provided by an embodiment of the disclosure.
  • FIG. 3 is a flowchart of another method for producing video data provided by an embodiment of the present disclosure.
  • FIG. 4 is a flowchart of another method for producing video data according to an embodiment of the disclosure.
  • FIG. 5 is a flowchart of yet another method for producing video data according to an embodiment of the present disclosure.
  • FIG. 6 is a flowchart of another method for producing video data provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic block diagram of a video data production device according to an embodiment of the disclosure.
  • FIG. 8 is a block diagram of an electronic device provided by an embodiment of the disclosure.
  • in a related solution, an index table is built based on the video tag and the start and end times of the target video segment in the original video, and the index table and the video tags of the original video are stored to obtain a video note.
  • in that solution, the video note and the original video are two separate files.
  • this separation of the video note from the original video is not only slow in response speed but also means that, when the original video is played, the user's video notes for the original video cannot be obtained directly.
  • the video notes can only be searched from the file corresponding to the video note, so complete information cannot be obtained from the original video alone, which affects the user experience.
  • the embodiments of the present disclosure provide a method for making video material, which is used to make a comprehensive video material with a video tag, so that the user can conveniently and quickly locate a desired video image in the original video.
  • FIG. 1 is a flowchart of a method for producing video data provided by an embodiment of the disclosure. As shown in Figure 1, the method of making video materials includes:
  • Step 101 In response to a user-triggered operation of inserting a tag, obtain the time stamp of the currently played video image in the original video.
  • the original video is the original video data released by the publisher of the video resource, which can be played on the terminal.
  • the original video is a movie, a video courseware, a documentary, a recorded learning video, etc.
  • the video format of the original video can be MPEG, AVI, MOV, ASF, or WMV supported by the terminal.
  • the video image is a certain video frame in the original video.
  • the currently playing video image is the video image displayed on the screen when the original video is being played.
  • the original video is a documentary
  • when the user is interested in a certain video image in the documentary, the user can insert a tag at that video image so that, when the documentary is replayed later, playback can jump directly to that video image.
  • alternatively, if the user is interested in a certain video segment in the documentary, a tag can be inserted at the beginning of the segment, so that when the documentary is replayed later, playback can jump directly to the beginning of the segment.
  • the original video is a video courseware
  • a tag can be inserted at the beginning of the video segment so that, when the video courseware is replayed later, playback can jump directly to the beginning of the segment. Alternatively, when the user is interested in a certain video image in the video courseware, a tag can be inserted at that video image so that, when the video courseware is replayed later, playback can jump directly to that video image.
  • the user in the process of playing the original video, can trigger the operation of inserting the tag by triggering a button, triggering an action, or triggering a voice on the playback page.
  • the trigger operation can be realized by a mouse or keyboard.
  • the mouse to perform a trigger operation use the mouse to click a preset operation button, and the clicking action can be a single click or a double click.
  • when the keyboard is used to perform the trigger operation, a preset shortcut key is pressed; the shortcut key can be any key or a combination of multiple keys on the keyboard. The specific setting and type of the shortcut key are not limited here.
  • the trigger operation can be realized by means such as touch.
  • for example, the user touches a preset button, or slides on a preset area.
  • the currently played video image refers to the image displayed on the display screen of the terminal at the current moment
  • the time stamp refers to the time node of the video image in the original video.
  • for example, the terminal is playing the video courseware for Chapter X, Section X of a mathematics class, and the timestamp of the video image displayed at 9 minutes 30 seconds (9:30) is 9:30.
  • Step 102 Obtain tag information added by the user for the video image.
  • the label information includes tags, study notes, afterthoughts, and so on.
  • the mark is equivalent to a bookmark and is only used to indicate that the video image is relatively important.
  • the study notes are comments added by the user to the video image.
  • the comment can be an explanation of or question about the content of a certain video image, or it can be an outline or summary.
  • the annotation is a summary or explanation of the video image and a video segment before the video image.
  • for example, the user summarizes the content at time node 9:30 together with the 30 seconds before it, that is, summarizes the content of the video courseware from time node 9:00 to 9:30, and adds the tag at time node 9:30.
  • the tag information can be added directly in the video image, or attached in the edge area of the video image.
  • the user can add tags to the video image by calling the tag entry module.
  • the tag entry module can be a tag control embedded in the player program. For example, when the user operates the activation button, the label entry module is activated, and the label editing page is displayed on the display screen of the terminal, and the user can input and edit content on the label editing page.
  • Fig. 2 is a schematic diagram of a tag editing page provided by an embodiment of the disclosure.
  • the label editing page includes a label number area 21 and a label content editing area 22; information such as the label number and label name can be input in the label number area 21.
  • information such as notes can be input in the label content editing area 22.
  • the tag entry module is an application program installed in the terminal, such as an application program such as a writing pad and a sticky note, and the player is connected to the application program.
  • the application installed on the terminal is called, and the display screen displays the interface of the application.
  • for example, when WordPad is connected to the player and the player calls it, WordPad is opened, the screen displays the WordPad interface, and the user can edit the label content in WordPad.
  • after finishing editing, the user can click the Finish button, and the label content is automatically associated with the time stamp and the video data of the original video.
  • the activated label entry module and the called editable application may occupy the entire page of the display screen, or may occupy part of the display screen.
  • Step 103 Associate and integrate the tag information, the time stamp, and the video data of the original video to generate a comprehensive video material carrying the tag information.
  • the integrated video material not only contains the video data of the original video, but also contains the tag information and the time stamp.
  • the time stamp is associated with the tag information.
  • the tag information, the time stamp and the video data of the original video are associated.
  • associating refers to adding tag information to the video data of the original video and associating it with the time stamp, so that the tag information, the time stamp, and the video data of the original video are integrated into one whole data.
  • the tag information, the time stamp, and the video data of the original video are integrated into the integrated video material through the data model.
  • the comprehensive video data can be regarded as the original video containing more information; that is, the comprehensive video data is a single file.
  • the data model may adopt any model that can associate and integrate the tag information, the time stamp, and the video data of the original video, which is not limited in this embodiment.
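  • As an illustrative sketch only (the patent does not prescribe a concrete data model), the association in Step 103 can be pictured as a container that keeps the original video data unchanged and stores each piece of tag information together with its timestamp; all class and field names below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tag:
    timestamp: float  # position of the tagged video image in the original video, in seconds
    mark: str         # short bookmark-like label
    note: str = ""    # optional study note or comment

@dataclass
class ComprehensiveVideoData:
    video_data: bytes              # the unmodified data of the original video
    tags: List[Tag] = field(default_factory=list)

    def add_tag(self, timestamp: float, mark: str, note: str = "") -> None:
        """Associate tag information with a timestamp in the original video."""
        self.tags.append(Tag(timestamp, mark, note))
        self.tags.sort(key=lambda t: t.timestamp)  # keep tags in playback order

video = ComprehensiveVideoData(video_data=b"...")
video.add_tag(570.0, "key formula", "summary of 9:00-9:30")
video.add_tag(120.0, "introduction")
```

Because the tags travel inside the same object as the video data, the result behaves like the single file described above: storing or sharing the container carries the tag information along with it.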
  • the player may display the time node for adding the tag according to a preset icon.
  • the preset icon can be a cartoon graphic, an animal graphic, a pointer graphic, or a time graphic.
  • the time graph shows the time in hours, minutes, and seconds. In some embodiments, if the duration of the integrated video data is less than one hour, the time graph only shows minutes and seconds. If the comprehensive video data exceeds one hour, the time graph will indicate hours, minutes, and seconds.
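  • The display rule above can be sketched as follows; this is an illustrative helper, not code from the patent:

```python
def format_timestamp(seconds: int, total_duration: int) -> str:
    """Render a tag's time node as described: minutes and seconds when the
    whole video is under one hour, otherwise hours, minutes, and seconds."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    if total_duration < 3600:
        # fold any hours into the minutes field for short videos
        return f"{h * 60 + m:02d}:{s:02d}"
    return f"{h:02d}:{m:02d}:{s:02d}"

print(format_timestamp(570, 1800))  # a 9:30 tag in a 30-minute courseware -> "09:30"
print(format_timestamp(570, 7200))  # the same tag in a two-hour video -> "00:09:30"
```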
  • the video data production method provided in this embodiment responds to a user-triggered tag insertion operation by obtaining the timestamp of the currently played video image in the original video, obtains the tag information added by the user for the video image, and associates and integrates the tag information, the timestamp, and the video data of the original video to generate comprehensive video data carrying the tag information. Since the comprehensive video data is a single file, it is convenient to store and share, and can be quickly recalled and buffered during playback.
  • the user can add the tag information directly to the original video data while watching the original video, which is convenient to operate and retains the integrity of the original video.
  • when the comprehensive video data is watched again later, the tagged position can be located quickly and accurately, reducing search time, improving learning efficiency, and thereby improving the user experience.
  • FIG. 3 is a flowchart of another method for producing video data according to an embodiment of the disclosure. As shown in Figure 3, the method of making video materials includes:
  • Step 301 In response to the operation of inserting the tag triggered by the user, obtain the timestamp of the currently played video image in the original video.
  • the original video is the original video data released by the publisher of the video resource, which can be played on the terminal.
  • the original video is a movie, a video courseware, a documentary, a recorded learning video, etc.
  • the video format of the original video can be MPEG, AVI, MOV, ASF, or WMV supported by the terminal.
  • the video image is a certain video frame in the original video.
  • the currently playing video image is the video image displayed on the screen when the original video is being played.
  • the time stamp refers to the time node of the video image in the original video.
  • for example, the terminal is playing the video courseware for Chapter X, Section X of a mathematics class, and the timestamp of the video image displayed at 9 minutes 30 seconds (9:30) is 9:30.
  • the label can be a mark, a study note, an afterthought, and so on.
  • for details of the mark, please refer to step 101 of the foregoing embodiment; to save space, it is not described in detail here.
  • the user can trigger the operation of inserting a label by triggering a button, triggering an action, or triggering a voice on the playback page.
  • the operation of triggering the insertion of the label can be implemented in different ways depending on the terminal. For example, when the terminal is a computer, the operation of inserting the label can be triggered by the mouse or keyboard; when the terminal is a mobile phone, the operation of inserting the tag can be triggered by touch.
  • Step 302 Obtain tag information added by the user for the video image.
  • the label information includes tags, study notes, afterthoughts, and so on.
  • the mark is equivalent to a bookmark and is only used to indicate that the video image is relatively important.
  • the study notes are comments added by the user to the video image.
  • the comment can be an explanation of or question about the content of a certain video image, or it can be an outline or summary.
  • the annotation is a summary or explanation of the video image and a video segment before the video image.
  • for example, the user summarizes the content at time node 9:30 together with the 30 seconds before it, that is, summarizes the content of the video courseware from time node 9:00 to 9:30, and adds the tag at time node 9:30.
  • the tag information can be added directly in the video image, or attached in the edge area of the video image.
  • the user can add tags to the video image by calling the tag entry module.
  • the tag entry module can be a tag control embedded in the player program. For example, when the user operates the activation button, the label entry module is activated, and the label editing page is displayed on the display screen of the terminal, and the user can input and edit content on the label editing page.
  • Step 303 Obtain tag auxiliary information.
  • the tag auxiliary information is information that explains the tag and limits usage rights.
  • the tag auxiliary information includes at least one of user information, user configuration information, and identification of the original video.
  • the user information includes the user account and/or the identification of the terminal device used by the user.
  • the user account is an account used to distinguish users who watch the original video, or an account used to distinguish users who add tag information.
  • the user account may be the account of the user who uses the player, or the user account of the user who logs in to the server, which is a server that stores the original video.
  • the user account may also be a user account for logging in to the terminal.
  • the identification of the terminal device used by the user is also used to distinguish the user who added the tag. When the terminal device and the user have a corresponding relationship, the identification of the terminal device can be used to distinguish the user.
  • the user configuration information is permission information added to the original video by the user who added the tag, including user permission information.
  • the user authority information is used to limit users' usage rights. For example, when adding tag information, the user can set that user A can view all tag information while user B can view only the tags, not the notes. As another example, when adding tag information, the user can set that user C can view odd-numbered tag information and user D can view even-numbered tag information.
  • the original video identifier is unique and is used to distinguish the original video.
  • the corresponding original video can be obtained through the original video identifier.
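  • A minimal sketch of the tag auxiliary information described in Step 303, assuming a simple dictionary layout (all field names, values, and the permission encoding are illustrative, not specified by the patent):

```python
# Tag auxiliary information: user information, user configuration
# (permission rules), and the unique identifier of the original video.
aux_info = {
    "user_info": {
        "user_account": "alice@example.com",   # distinguishes who added the tags
        "terminal_device_id": "device-1234",   # alternative way to distinguish the user
    },
    "user_config": {
        # permission rules set by the user who added the tags
        "permissions": {
            "user_a": "all",        # may view tags and notes
            "user_b": "tags_only",  # may view tags but not the notes
        },
    },
    "original_video_id": "video-0001",  # unique identifier of the original video
}

def can_view_notes(aux: dict, viewer: str) -> bool:
    """Check whether a viewer may see the notes attached to tags."""
    return aux["user_config"]["permissions"].get(viewer) == "all"

print(can_view_notes(aux_info, "user_a"))  # True
print(can_view_notes(aux_info, "user_b"))  # False
```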
  • Step 304 Associate and integrate the tag information, the time stamp, the video data of the original video, and the tag auxiliary information to generate a comprehensive video material carrying the tag information.
  • the tag information, the timestamp, the tag auxiliary information, and the video data of the original video are integrated into the comprehensive video data through the data model; the comprehensive video data can be regarded as the original video containing more information, that is, the comprehensive video data is a single file.
  • the player can directly analyze the comprehensive video data and display all tagged time nodes in the comprehensive video data according to the timestamp. The user only needs to click the corresponding time node to view the tag information.
  • the data model may adopt any model that can associate and integrate the tag information, the time stamp, and the video data of the original video, which is not limited in this embodiment.
  • the integrated video material can be distinguished by the original video identifier.
  • when the user shares the comprehensive video data to a sharing platform, other users can obtain the corresponding comprehensive video data through the original video identifier, identify the producer of the comprehensive video data through the user information, and obtain playback permission according to the user authority information.
  • Step 305 Store the integrated video data.
  • the user can store the comprehensive video data in a local storage medium, at the source of the original video, or in a third-party server as needed.
  • FIG. 4 is a flowchart of another method for producing video data provided by an embodiment of the disclosure. As shown in Figure 4, the method of making video materials includes:
  • Step 401 In response to the operation of inserting a tag triggered by the user, obtain the timestamp of the currently played video image in the original video.
  • the original video is the original video data released by the publisher of the video resource, which can be played on the terminal.
  • the original video is a movie, a video courseware, a documentary, a recorded learning video, etc.
  • the video format of the original video can be MPEG, AVI, MOV, ASF, or WMV supported by the terminal.
  • the video image is a certain video frame in the original video.
  • the currently playing video image is the video image displayed on the screen when the original video is being played.
  • the time stamp refers to the time node of the video image in the original video.
  • for example, the terminal is playing the video courseware for Chapter X, Section X of a mathematics class, and the timestamp of the video image displayed at 9 minutes 30 seconds (9:30) is 9:30.
  • the label can be a mark, a study note, an afterthought, and so on.
  • for details of the mark, please refer to step 101 of the foregoing embodiment; to save space, it is not described in detail here.
  • the user can trigger the operation of inserting a label by triggering a button, triggering an action, or triggering a voice on the playback page.
  • the operation of triggering the insertion of the label can be implemented in different ways depending on the terminal. For example, when the terminal is a computer, the operation of inserting the label can be triggered by the mouse or keyboard; when the terminal is a mobile phone, the operation of inserting the tag can be triggered by touch.
  • Step 402 Obtain tag information added by the user for the video image.
  • the label information includes tags, study notes, afterthoughts, and so on.
  • the mark is equivalent to a bookmark and is only used to indicate that the video image is relatively important.
  • the study notes are comments added by the user to the video image.
  • the comment can be an explanation of or question about the content of a certain video image, or it can be an outline or summary.
  • the annotation is a summary or explanation of the video image and a video segment before the video image.
  • for example, the user summarizes the content at time node 9:30 together with the 30 seconds before it, that is, summarizes the content of the video courseware from time node 9:00 to 9:30, and adds the tag at time node 9:30.
  • the tag information can be added directly in the video image, or attached in the edge area of the video image.
  • the user can add tags to the video image by calling the tag entry module.
  • the tag entry module can be a tag control embedded in the player program. For example, when the user operates the activation button, the label entry module is activated, and the label editing page is displayed on the display screen of the terminal, and the user can input and edit content on the label editing page.
  • Step 403 Obtain tag auxiliary information.
  • the tag auxiliary information is information that explains the tag and limits usage rights.
  • the tag auxiliary information includes at least one of user information, user configuration information, and identification of the original video.
  • the user information includes the user account and/or the identification of the terminal device used by the user.
  • the user account is an account used to distinguish users who watch the original video, or an account used to distinguish users who add tag information.
  • the user account may be the account of the user who uses the player, or the user account of the user who logs in to the server, which is a server that stores the original video.
  • the user account may also be a user account for logging in to the terminal.
  • the identification of the terminal device used by the user is also used to distinguish the user who added the tag. When the terminal device and the user have a corresponding relationship, the identification of the terminal device can be used to distinguish the user.
  • the user configuration information is permission information added to the original video by the user who added the tag, including user permission information.
  • the user authority information is used to limit users' usage rights. For example, when adding tag information, the user can set that user A can view all tag information while user B can view only the tags, not the notes. As another example, when adding tag information, the user can set that user C can view odd-numbered tag information and user D can view even-numbered tag information.
  • the original video identifier is unique and is used to distinguish the original video.
  • the corresponding original video can be obtained through the original video identifier.
  • Step 404 Associate and integrate the tag information, the time stamp, the video data of the original video, and the tag auxiliary information to generate a comprehensive video material carrying the tag information.
  • the integrated video material includes tag information, a time stamp, video data of the original video, and tag auxiliary information, and the tag information, time stamp, and tag auxiliary information are associated with the video data of the original video.
  • the integrated video material can be distinguished by the original video identifier.
  • the user can store the integrated video data in a local storage medium, at the source of the original video, or in a third-party server as needed.
  • Step 405 Share the integrated video data to the sharing platform so that other users on the sharing platform can obtain the integrated video data.
  • the user shares the integrated video data to a sharing platform, and shares it with friends or others through the sharing platform.
  • the sharing platform can be the sharing platform that the user is currently logged in to, or it can be a third-party sharing platform that is different from the sharing platform currently logged in.
  • after another user obtains the integrated video material through the sharing platform and that user's player parses it, the player determines the user's permissions from the user permission information in the tag auxiliary information and plays the comprehensive video material according to those permissions.
  • all the time nodes of the inserted tag may be displayed on the playing page for the user to quickly locate.
  • the user can also modify the label information.
  • FIG. 5 is a flowchart of yet another method for producing video data according to an embodiment of the present disclosure. As shown in Figure 5, the method of making video materials includes:
  • Step 501 In response to the user's play instruction, it is determined whether the integrated video material has a tag.
  • after receiving the user's play instruction, the player determines whether the integrated video material has a tag; in some embodiments, this can be determined from the tag data.
  • Step 502 Analyze the integrated video material to obtain all tags and tag information in the integrated video material data.
  • in this embodiment, when the integrated video material contains tags, the integrated video material is parsed to obtain all the tags and tag information in the integrated video material data.
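The parsing step above can be sketched as follows, assuming the metadata has already been recovered from the file and that each tag record carries a `timestamp` field (an assumption for illustration). The helper reports whether tags exist and returns them ordered by time node, ready to be laid out on the playback page.

```python
def extract_tags(meta):
    """Given parsed metadata of an integrated file, report whether any tags
    are present and return them sorted by their time node in the video."""
    tags = sorted(meta.get("tags", []), key=lambda t: t["timestamp"])
    return len(tags) > 0, tags
```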
  • Step 503 Display all tags on the playback page.
  • the playback page occupies all or part of the terminal's display page. For example, when the display shows several applications at once, the playback page is part of the display page; when only the player is shown, the playback page may occupy the entire display page, or still only part of it.
  • displaying all the tags on the playback page helps the user locate the desired position quickly and accurately, shortening search time and improving efficiency, thereby improving the user experience.
  • Step 504 Based on the label selected by the user, the label information corresponding to the label is displayed.
  • the user can select by touch the tag whose tag information should be further displayed. For example, when the user taps a tag icon, the tag information corresponding to that icon is shown on the display page.
  • Step 505 Receive the user's modification information for the tag, and update the tag information based on the modification information.
  • if the user needs to modify the tag information, he or she can click the modify button to enter the tag entry module and edit it there. In other embodiments, when the user clicks the tag icon, the tag information is displayed directly in the tag entry module, so the user can modify and update it in place.
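Steps 505 and 506 amount to replacing one tag record and then regenerating the integrated file from the new list. A minimal sketch, with illustrative field names (`number`, `note`) that are assumptions, not part of the disclosure:

```python
def update_tag(tags, number, new_note):
    """Return a new tag list in which the tag with the given number carries
    the modified note; all other tags are copied unchanged. The integrated
    video data would then be regenerated from the returned list."""
    return [dict(t, note=new_note) if t["number"] == number else dict(t)
            for t in tags]
```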
  • Step 506 Associate and integrate the updated tag information, the time stamp, the tag auxiliary information and the video data of the original video to generate new comprehensive video data.
  • Step 507 Store the updated comprehensive video data or share it on a sharing platform.
  • the user can store the updated integrated video data in a local storage medium, at the source of the original video, or in a third-party server as needed; or share the updated comprehensive video data on the sharing platform; or store it and share it on the sharing platform at the same time.
  • FIG. 6 is a flowchart of another method for producing video data according to an embodiment of the present disclosure. As shown in Figure 6, the method of making video materials includes:
  • Step 601 Filter video resources based on the video usage information to obtain the original video.
  • the video usage information includes one or more of the video's playback volume, replay rate, user comments, and the number of likes.
  • for a video material production platform, or for users who want to obtain learning materials from the Internet, a background big-data analysis module can analyze the video usage information of candidate video materials and select valuable original videos based on the analysis results, thereby reducing unnecessary waste of resources.
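Such usage-based screening could look like the following sketch. The weights, the normalization caps and the threshold are invented for illustration and are not specified by the disclosure.

```python
def select_valuable_videos(videos, min_score=0.5):
    """Rank candidate videos by a weighted mix of usage signals
    (plays, replay rate, comments, likes) and keep the high scorers."""
    def score(v):
        return (0.4 * min(v["plays"] / 10_000, 1.0)
                + 0.3 * v["replay_rate"]              # already in [0, 1]
                + 0.2 * min(v["comments"] / 500, 1.0)
                + 0.1 * min(v["likes"] / 1_000, 1.0))
    return [v["id"] for v in sorted(videos, key=score, reverse=True)
            if score(v) >= min_score]
```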
  • Step 602 In response to the operation of inserting the tag triggered by the user, obtain the timestamp of the currently played video image in the original video.
  • the video image is a certain video frame in the original video.
  • the currently playing video image is the video image displayed on the screen when the original video is being played.
  • the time stamp refers to the time node of the video image in the original video.
  • for example, the terminal is playing the video courseware of Chapter X, Section X of a mathematics class, and the video image currently shown on the terminal's display is the one at 9:30 (9 minutes 30 seconds) of that courseware; the timestamp of that video image is therefore 9:30.
  • the tag can be a mark, a study note, a viewing reflection, and so on.
  • for further details on marks, please refer to step 101 of the foregoing embodiment; to save space, they are not repeated here.
  • the user can trigger the operation of inserting a label by triggering a button, triggering an action, or triggering a voice on the playback page.
  • the operation of inserting the tag can be triggered in different ways depending on the terminal. For example, when the terminal is a computer, the operation can be triggered with the mouse or keyboard; when the terminal is a mobile phone, it can be triggered by touch.
  • Step 603 Obtain tag information added by the user for the video image.
  • the tag information includes marks, study notes, viewing reflections, and so on.
  • the mark is equivalent to a bookmark, which is only used to indicate that the video image is more important.
  • the study notes are comments added by the user to the video image.
  • the comment can be an explanation of or a question about the content of a certain video image, or it can be a summary or recap.
  • the annotation is a summary or explanation of the video image and a video segment before the video image.
  • the user can add tags to the video image by calling the tag entry module.
  • the tag entry module can be a tag control embedded in the player program. For example, when the user operates the activation button, the tag entry module is activated and a tag editing page is displayed on the terminal's screen, where the user can input and edit content.
  • Step 604 Obtain tag auxiliary information.
  • the label auxiliary information is the information explaining the label and limiting the usage rights.
  • the tag auxiliary information includes at least one of user information, user configuration information, and identification of the original video.
  • the user information includes the user account and/or the identification of the terminal device used by the user.
  • the user account is an account used to distinguish users who watch the original video, or an account used to distinguish users who add tag information.
  • the user account may be the account of the user who uses the player, or the user account of the user who logs in to the server, which is a server that stores the original video.
  • the user account may also be a user account for logging in to the terminal.
  • the identification of the terminal device used by the user is also used to distinguish the user who added the tag. When the terminal device and the user have a corresponding relationship, the identification of the terminal device can be used to distinguish the user.
  • Step 605 Associate and integrate the tag information, the time stamp, the video data of the original video, and the tag auxiliary information to generate a comprehensive video material carrying the tag information.
  • the integrated video material includes tag information, a time stamp, video data of the original video, and tag auxiliary information, and the tag information, time stamp, and tag auxiliary information are associated with the video data of the original video.
  • Step 606 Share the integrated video data to the sharing platform so that other users on the sharing platform can obtain the integrated video data.
  • the user shares the integrated video data to a sharing platform, and shares it with friends or others through the sharing platform.
  • the sharing platform can be the sharing platform that the user is currently logged in to, or it can be a third-party sharing platform that is different from the sharing platform currently logged in.
  • Step 607 Analyze the integrated video material when playing the integrated video material to obtain all tags and tag information in the integrated video material data.
  • in this embodiment, when the integrated video material contains tags, the integrated video material is parsed to obtain all the tags and tag information in the integrated video material data.
  • Step 608 Display all tags on the playback page.
  • the playback page occupies all or part of the terminal's display page. For example, when the display shows several applications at once, the playback page is part of the display page; when only the player is shown, the playback page may occupy the entire display page, or still only part of it.
  • Step 609 Based on the label selected by the user, the label information corresponding to the label is displayed.
  • the user can select by touch the tag whose tag information should be further displayed. For example, when the user taps a tag icon, the tag information corresponding to that icon is shown on the display page.
  • Step 610 Receive the user's modification information for the tag, and update the tag information based on the modification information.
  • if the user needs to modify the tag information, he or she can click the modify button to enter the tag entry module and edit it there. In other embodiments, when the user clicks the tag icon, the tag information is displayed directly in the tag entry module, so the user can modify and update it in place.
  • Step 611 Associate and integrate the updated tag information, the timestamp, the tag auxiliary information and the video data of the original video to generate new comprehensive video data.
  • Step 612 Store the updated comprehensive video data or share it on a sharing platform.
  • the video material production method provided in this embodiment responds to a user-triggered tag insertion operation to obtain the timestamp of the currently played video image in the original video; obtains the tag information added by the user for the video image; and associates and integrates the tag information, the timestamp and the video data of the original video to generate comprehensive video data carrying the tag information. Since the integrated video data is a single file, it is convenient to store and share, and can be quickly loaded and buffered during playback. The user can add the tag information directly into the original video's data while watching it, which is convenient to operate and preserves the integrity of the original video. When the comprehensive video material is watched again later, the tag positions can be located quickly and accurately, reducing search time and improving learning efficiency, thereby improving the user experience.
  • FIG. 7 is a schematic block diagram of a video data production device according to an embodiment of the disclosure. As shown in Figure 7, the video data production device includes:
  • the trigger module 701 is used to trigger the operation of inserting the label in response to a trigger instruction of the user.
  • while the original video is playing, the user can trigger the operation of inserting a tag via a button, an action, or voice on the playback page.
  • the trigger operation can be realized by a mouse or keyboard.
  • to trigger the operation with the mouse, the user clicks a preset operation button; the click can be a single click or a double click. To trigger it with the keyboard, the user presses a preset shortcut key, which can be any single key or a combination of keys on the keyboard; the specific setting and type of the shortcut key are not limited here.
  • the first obtaining module 702 is configured to obtain the timestamp of the currently played video image in the original video.
  • the time stamp refers to the time node of the video image in the original video.
  • the currently played video image refers to the image displayed on the terminal's screen at the current moment. For example, the terminal is playing the video courseware of Chapter X, Section X of a mathematics class, and the image currently on screen is the one at 9:30 (9 minutes 30 seconds) of that courseware; the timestamp of that video image is therefore 9:30.
  • the second acquisition module 703 is configured to acquire tag information added by the user for the video image.
  • the tag information includes marks, study notes, viewing reflections, and so on.
  • the mark is equivalent to a bookmark, which is only used to indicate that the video image is more important.
  • the study notes are comments added by the user to the video image.
  • the comment can be an explanation or question about the content in a certain video image, or it can be a summary or summary.
  • the annotation is a summary or explanation of the video image and a video segment before the video image.
  • the user can add tags to the video image by calling the tag entry module.
  • the tag entry module can be a tag control embedded in the player program. For example, when the user operates the activation button, the label entry module is activated, and the label editing page is displayed on the display screen of the terminal, and the user can input and edit content on the label editing page.
  • the second acquisition module 703 is a tag entry module.
  • in some embodiments, the tag entry module is an application installed in the terminal, such as a writing pad or sticky-note application, and the player can invoke that application. When the user touches the activation button, the application is called and its interface is shown on the display. For example, when WordPad is associated with the player and the user slides the activation button, WordPad is called, its interface is displayed, and the user can edit the tag content there. When editing is finished, the user can click the Finish button, and the tag content is automatically associated with the timestamp and the video data of the original video.
  • the activated label entry module and the called editable application may occupy the entire page of the display screen, or may occupy part of the display screen.
  • the associating module 704 is used for associating and integrating the tag information, the time stamp, and the video data of the original video to generate comprehensive video data carrying the tag information.
  • the integrated video material not only contains the video data of the original video, but also contains tag information and time stamps.
  • the time stamps are associated with the tag information.
  • the tag information, time stamps and the video data of the original video are associated.
  • the tag information, the time stamp, and the video data of the original video are integrated into the integrated video material through the data model.
  • the integrated video material can be regarded as the original video containing more information; that is, the integrated video material is a single file.
  • the data model may adopt any model that can associate and integrate the tag information, the time stamp, and the video data of the original video, which is not limited in this embodiment.
  • the player may display the time node for adding the tag according to a preset icon.
  • the preset icon can be a cartoon graphic, an animal graphic, a pointer graphic, or a time graphic.
  • the time graph shows the time in hours, minutes, and seconds. In some embodiments, if the duration of the integrated video data is less than one hour, the time graph only shows minutes and seconds. If the comprehensive video data exceeds one hour, the time graph will indicate hours, minutes, and seconds.
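The duration-dependent time figure described above can be sketched as a small formatting helper; the exact label format is an assumption for illustration.

```python
def tag_icon_label(seconds, video_duration):
    """Format the time shown on a tag icon: minutes and seconds when the
    whole material is under one hour, otherwise hours, minutes, seconds."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    if video_duration < 3600:
        return f"{m}:{s:02d}"
    return f"{h}:{m:02d}:{s:02d}"
```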
  • in the video data production device provided in this embodiment, the trigger module responds to a user-triggered tag insertion operation; the first acquisition module obtains the timestamp of the currently played video image in the original video; the second acquisition module obtains the tag information added by the user for the video image; and the association module associates and integrates the tag information, the timestamp and the video data of the original video to generate comprehensive video data carrying the tag information. Since the integrated video data is a single file, it is convenient to store and share, and can be quickly loaded and buffered during playback. The user can add the tag information directly into the original video's data while watching it, which is convenient to operate and preserves the integrity of the original video. When the comprehensive video material is watched again later, the tag positions can be located quickly and accurately, reducing search time and improving learning efficiency, thereby improving the user experience.
  • an embodiment of the present disclosure further provides an electronic device, which includes:
  • one or more processors 801;
  • a memory 802 on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement any one of the above video material production methods; and
  • one or more I/O interfaces 803, connected between the processor and the memory and configured to implement information interaction between the processor and the memory.
  • the processor 801 is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.
  • the memory 802 is a device with data storage capabilities, including but not limited to random access memory (RAM, for example SDRAM or DDR), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
  • the I/O interface (read/write interface) 803 is connected between the processor 801 and the memory 802 and enables information interaction between them; it includes but is not limited to a data bus (Bus).
  • the processor 801, the memory 802, and the I/O interface 803 are connected to each other through a bus, and further connected to other components of the computing device.
  • embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, and when the program is executed by a processor, any one of the above-mentioned methods for producing video data is realized.
  • Such software may be distributed on a computer-readable medium, and the computer-readable medium may include a computer storage medium (or a non-transitory medium) and a communication medium (or a transitory medium).
  • the term computer storage medium includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information (such as computer-readable instructions, data structures, program modules, or other data).
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tapes, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • a communication medium usually contains computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium.

Abstract

The present disclosure provides a video material production method, relating to the technical fields of computers and video image processing. The method includes: in response to a user-triggered tag insertion operation, obtaining the timestamp of the currently played video image in the original video; obtaining the tag information added by the user for the video image; and associating and integrating the tag information and the timestamp with the video data of the original video to generate comprehensive video material carrying the tag information. The method makes it convenient to add tag information while preserving the integrity of the original video; when the comprehensive video material is watched again later, the tag positions can be located quickly and accurately, reducing search time and improving learning efficiency, thereby improving the user experience. The present disclosure also provides a video material production device, an electronic device, and a computer-readable medium.

Description

Video material production method and device, electronic device, and computer-readable medium
This application claims priority to Chinese patent application No. 202010590913.9, filed on June 24, 2020 and entitled "Video material production method and device, electronic device, computer-readable medium", the entire content of which is incorporated into this application by reference.
Technical Field
Embodiments of the present disclosure relate to the technical fields of computers and video image processing, and in particular to a video material production method and device, an electronic device, and a computer-readable medium.
Background
With the optimization of the network environment and the popularization of mobile smart devices, mobile terminals have become the main way for people to obtain information. Video expresses information more intuitively and clearly, so it is widely disseminated and used as an important communication carrier.
When watching a video, especially a knowledge-oriented video or one with rich content, users wish to add tags or notes at certain video nodes for later repeated viewing or study. However, a video player can only locate the video node a user wants to re-watch through playback-speed adjustment or manual dragging of the progress bar.
Summary
Embodiments of the present disclosure provide a video material production method and device, an electronic device, and a computer-readable medium.
In a first aspect, an embodiment of the present disclosure provides a video material production method, which includes:
in response to a user-triggered tag insertion operation, obtaining the timestamp of the currently played video image in the original video;
obtaining the tag information added by the user for the video image; and
associating and integrating the tag information and the timestamp with the video data of the original video to generate comprehensive video material carrying the tag information.
In some embodiments, the obtaining of the tag information added by the user for the video image includes:
obtaining, through a tag entry module, the tag information added by the user for the video image.
In some embodiments, before the associating and integrating of the tag information, the timestamp and the video data of the original video to generate the comprehensive video material carrying the tag information, the method further includes:
obtaining tag auxiliary information, where the tag auxiliary information describes the tag and limits usage permissions; and
the associating and integrating of the tag information, the timestamp and the video data of the original video to generate the comprehensive video material carrying the tag information includes:
associating and integrating the tag information, the timestamp, the tag auxiliary information and the video data of the original video to generate the comprehensive video material carrying the tag information.
In some embodiments, the tag auxiliary information includes at least one of user information, user configuration information and an identifier of the original video.
In some embodiments, the user information includes a user account and/or an identifier of the terminal device used by the user; the user configuration information includes user permission information.
In some embodiments, after the associating and integrating of the tag information, the timestamp and the video data of the original video to generate the comprehensive video material carrying the tag information, the method further includes:
in response to a play instruction from the user, parsing the comprehensive video material to obtain all the tags and tag information in the comprehensive video material data;
displaying all the tags on the playback page; and
displaying, based on the tag selected by the user, the tag information corresponding to that tag.
In some embodiments, after the displaying, based on the video node selected by the user, of the tag information corresponding to that video node, the method further includes:
receiving the user's modification information for the tag, and updating the tag information based on the modification information; and
associating and integrating the updated tag information, the timestamp, the tag auxiliary information and the video data of the original video to generate new comprehensive video material.
In some embodiments, after the associating and integrating of the tag information, the timestamp and the video data of the original video to generate the comprehensive video material carrying the tag information, the method further includes:
sharing the comprehensive video material to a sharing platform, so that other users on the sharing platform can obtain the comprehensive video material.
In some embodiments, before the responding to the user-triggered tag insertion operation and obtaining the timestamp of the currently played video image in the original video, the method further includes:
filtering video resources based on video usage information to obtain the original video, where the video usage information includes one or more of the video's play count, replay rate, user comments and number of likes.
In some embodiments, the tag information includes marks and/or notes.
In a second aspect, an embodiment of the present disclosure provides a video material production device, which includes:
a trigger module, configured to trigger a tag insertion operation in response to a trigger instruction from a user;
a first acquisition module, configured to obtain the timestamp of the currently played video image in the original video;
a second acquisition module, configured to obtain the tag information added by the user for the video image; and
an association module, configured to associate and integrate the tag information, the timestamp and the video data of the original video to generate comprehensive video material carrying the tag information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, which includes:
one or more processors;
a memory on which one or more programs are stored which, when executed by the one or more processors, cause the one or more processors to implement any one of the above video material production methods; and
one or more I/O interfaces, connected between the processor and the memory and configured to implement information interaction between the processor and the memory.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored; when the program is executed by a processor, any one of the above video material production methods is implemented.
According to the video material production method provided by the embodiments of the present disclosure, in response to a user-triggered tag insertion operation, the timestamp of the currently played video image in the original video is obtained; the tag information added by the user for the video image is obtained; and the tag information, the timestamp and the video data of the original video are associated and integrated to generate comprehensive video material carrying the tag information. The user can add the tag information directly into the data of the original video while watching it, which is convenient to operate and preserves the integrity of the original video; when the comprehensive video material is watched again later, the tag positions can be located quickly and accurately, reducing search time and improving learning efficiency, thereby improving the user experience.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the embodiments of the present disclosure; they constitute a part of the specification and serve, together with the embodiments, to explain the present disclosure, without limiting it. The above and other features and advantages will become more apparent to those skilled in the art from the detailed description of exemplary embodiments with reference to the drawings, in which:
FIG. 1 is a flowchart of a video material production method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a tag editing page according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another video material production method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of another video material production method according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of yet another video material production method according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of still another video material production method according to an embodiment of the present disclosure;
FIG. 7 is a schematic block diagram of a video material production device according to an embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To enable those skilled in the art to better understand the technical solutions of the present disclosure, the video material production method and device, electronic device, and computer-readable medium provided by the present disclosure are described in detail below with reference to the accompanying drawings.
Exemplary embodiments will be described more fully below with reference to the drawings, but they may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be thorough and complete, and will fully convey its scope to those skilled in the art.
The embodiments of the present disclosure and the features in the embodiments may be combined with one another without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will further be understood that the terms "comprises" and/or "made of", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will further be understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For knowledge-oriented videos, users wish to add tags or notes at certain video images (video nodes) of the original video for later repeated viewing or study. Moreover, to improve learning efficiency, they then only need to focus on the video images at the tagged positions rather than replaying the entire original video, while the complete original video is still preserved to meet special needs, for example so that other users can obtain both the complete original video and the tags added by the current user.
At present, video tags are produced by building an index table from the video tags and the start and end times of the target video segments in the original video, and then storing the index table together with the original video's label, thereby obtaining video notes. It is easy to see that the video notes and the original video are two separate files. When the notes need to be consulted, the start and end times of a target segment are looked up in the index table, and the original video is then searched by its label and those times to obtain the target segment. This separation of video notes from the original video not only responds slowly; in addition, when the original video is played, the user's video notes for it cannot be obtained directly, since the notes can only be looked up in their own file, and the complete original video cannot be obtained from the notes, which harms the user experience.
In a first aspect, an embodiment of the present disclosure provides a video material production method for producing comprehensive video material carrying video tags, so that users can conveniently and quickly locate a desired video image in the original video.
FIG. 1 is a flowchart of a video material production method according to an embodiment of the present disclosure. As shown in FIG. 1, the video material production method includes:
Step 101: In response to a user-triggered tag insertion operation, obtain the timestamp of the currently played video image in the original video.
The original video is original video data published by a video resource publisher and can be played on a terminal; for example, it may be a movie, a video courseware, a documentary, or a recorded study video. Its format can be any format supported by the terminal, such as MPEG, AVI, MOV, ASF or WMV.
A video image is a certain video frame of the original video. The currently played video image is the one shown on the display while the original video is playing.
When the original video is a documentary and, while playing it, the user becomes interested in a certain video image, a tag can be inserted at that image so that playback can jump directly to it when the documentary is replayed. Likewise, if the user is interested in a certain video segment, a tag can be inserted at the start of that segment so that a later replay can jump directly to its beginning.
When the original video is a video courseware and, while playing it, the user is interested in a certain video segment, a tag can be inserted at the start of that segment so that later replays can jump straight to its beginning; when the user is interested in a certain video image, a tag can be inserted at that image so that later replays can jump directly to it.
In some embodiments, during playback of the original video, the user can trigger the tag insertion operation on the playback page via a button, an action, or voice. When the terminal is a computer, the trigger can be performed with a mouse or keyboard. For example, when using the mouse, the user clicks a preset operation button, where the click can be a single or double click; when using the keyboard, the user presses a preset shortcut key, which can be any key or combination of keys on the keyboard. The specific setting and type of the shortcut key are not limited here.
When the terminal is a mobile terminal or a terminal with a touch function, the trigger can be performed by touch, for example by touching or sliding a preset button.
The currently played video image refers to the image displayed on the terminal's screen at the current moment, and the timestamp is that image's time node in the original video. For example, the terminal is playing the video courseware of Chapter X, Section X of a mathematics class, and the image currently on screen is the one at 9:30 (9 minutes 30 seconds) of that courseware; the timestamp of that video image is then 9:30.
Step 102: Obtain the tag information added by the user for the video image.
The tag information includes marks, study notes, viewing reflections and so on. A mark is equivalent to a bookmark and merely indicates that the video image is relatively important. A study note is an annotation the user adds for the video image; the annotation can be an explanation of or a question about the content of a certain video image, or a summary or recap. Alternatively, the annotation summarizes or explains the video image together with a video segment preceding it.
For example, the user summarizes the content at time node 9:30 and within the 30 seconds before it (that is, the courseware content from 9:00 to 9:30) and adds a tag at time node 9:30. In some embodiments, the tag information is added directly inside the video image, or attached at the edge region of the video image.
In some embodiments, the user can add a tag to the video image by invoking a tag entry module, which can be a tag control embedded in the player program. For example, when the user operates the activation button, the tag entry module is activated and a tag editing page is shown on the terminal's display, where the user can input and edit content.
FIG. 2 is a schematic diagram of a tag editing page according to an embodiment of the present disclosure. As shown in FIG. 2, the tag editing page includes a tag number area 21 and a tag content editing area 22. Information such as the tag number and tag name can be entered in the tag number area 21, and information such as notes can be entered in the tag content editing area 22. In both areas, the entered content can also be deleted, copied, pasted and so on.
In some embodiments, the tag entry module is an application installed in the terminal, such as a writing pad or sticky-note application, and the player invokes that application. When the user touches the activation button, the application installed in the terminal is called and its interface is shown on the display. For example, when WordPad is connected to the player and the user slides the activation button, WordPad is called, its interface is displayed, and the user can edit the tag content there. When editing is finished, the user can click the Finish button, and the tag content is automatically associated with the timestamp and the video data of the original video.
In some embodiments, when the user activates the tag insertion operation, the activated tag entry module or the invoked editable application may occupy the entire page of the display or only part of it.
Step 103: Associate and integrate the tag information and the timestamp with the video data of the original video to generate comprehensive video material carrying the tag information.
The comprehensive video material contains not only the video data of the original video but also the tag information and timestamps; the timestamps are associated with the tag information, and the tag information, the timestamps and the video data of the original video are associated with one another. Association here means adding the tag information into the video data of the original video and linking it to the timestamp, so that the tag information, the timestamp and the video data of the original video are integrated into one body of data. When a tag is activated, the player can jump directly to the timestamp position and play the corresponding video image.
In this embodiment, the tag information, the timestamp and the video data of the original video are integrated into the comprehensive video material through a data model; the comprehensive video material can be regarded as the original video containing more information, that is, a single file. When it is played, the player can parse it directly and, according to the timestamps, display all the time nodes at which tags were added; the user only has to click a time node to view the tag information. Any data model able to associate and integrate the tag information, the timestamp and the video data of the original video may be used, which is not limited in this embodiment.
In some embodiments, the player may display the tagged time nodes with a preset icon, which can be a cartoon figure, an animal figure, a pointer figure, or a time figure. For example, the time figure shows the time in hours, minutes and seconds. In some embodiments, if the duration of the comprehensive video material is less than one hour, the time figure shows only minutes and seconds; if it exceeds one hour, the time figure shows hours, minutes and seconds.
The video material production method provided in this embodiment responds to a user-triggered tag insertion operation to obtain the timestamp of the currently played video image in the original video; obtains the tag information added by the user for the video image; and associates and integrates the tag information, the timestamp and the video data of the original video to generate comprehensive video material carrying the tag information. Since the comprehensive video material is a single file, it is convenient to store and share, and can be quickly loaded and buffered during playback. In addition, the user can add the tag information directly into the original video's data while watching it, which is convenient to operate and preserves the integrity of the original video; when the material is watched again later, the tag positions can be located quickly and accurately, reducing search time and improving learning efficiency, thereby improving the user experience.
FIG. 3 is a flowchart of another video material production method according to an embodiment of the present disclosure. As shown in FIG. 3, the video material production method includes:
Step 301: In response to a user-triggered tag insertion operation, obtain the timestamp of the currently played video image in the original video.
The original video is original video data published by a video resource publisher and can be played on a terminal; for example, it may be a movie, a video courseware, a documentary, or a recorded study video. Its format can be any format supported by the terminal, such as MPEG, AVI, MOV, ASF or WMV.
A video image is a certain video frame of the original video. The currently played video image is the one shown on the display while the original video is playing.
The timestamp is the video image's time node in the original video. For example, the terminal is playing the video courseware of Chapter X, Section X of a mathematics class, and the image currently on screen is the one at 9:30 (9 minutes 30 seconds) of that courseware; the timestamp of that video image is then 9:30.
A tag can be a mark, a study note, a viewing reflection and so on. For further details on marks, please refer to step 101 of the foregoing embodiment; to save space, they are not repeated here.
In some embodiments, the user can trigger the tag insertion operation on the playback page via a button, an action, or voice. In addition, the trigger can differ by terminal: when the terminal is a computer, the operation can be triggered with the mouse or keyboard; when the terminal is a mobile phone, it can be triggered by touch.
Step 302: Obtain the tag information added by the user for the video image.
The tag information includes marks, study notes, viewing reflections and so on. A mark is equivalent to a bookmark and merely indicates that the video image is relatively important. A study note is an annotation the user adds for the video image; the annotation can be an explanation of or a question about the content of a certain video image, or a summary or recap. Alternatively, the annotation summarizes or explains the video image together with a video segment preceding it.
For example, the user summarizes the content at time node 9:30 and within the 30 seconds before it (that is, the courseware content from 9:00 to 9:30) and adds a tag at time node 9:30. In some embodiments, the tag information is added directly inside the video image, or attached at the edge region of the video image.
In some embodiments, the user can add a tag to the video image by invoking a tag entry module, which can be a tag control embedded in the player program. For example, when the user operates the activation button, the tag entry module is activated and a tag editing page is shown on the terminal's display, where the user can input and edit content.
Step 303: Obtain tag auxiliary information.
The tag auxiliary information describes the tag and limits usage permissions. For example, it includes at least one of user information, user configuration information and an identifier of the original video.
In some embodiments, the user information includes a user account and/or an identifier of the terminal device used by the user. The user account distinguishes the user who watches the original video, or the user who adds tag information. It can be the account of the user of the player, the account used to log in to the server storing the original video, or the account used to log in to the terminal. The identifier of the terminal device likewise serves to distinguish the user who added the tag: when the terminal device corresponds to a user, its identifier can be used to distinguish that user.
In some embodiments, the user configuration information is permission information that the tagging user adds for the original video, including user permission information, which restricts users' access. For example, when adding tag information, the user can specify that user A may view all tag information while user B may view only the marks but not the notes; or that user C may view the odd-numbered tag information while user D may view the even-numbered tag information.
In some embodiments, the original video identifier is unique and distinguishes the original video; the corresponding original video can be obtained through it.
Step 304: Associate and integrate the tag information, the timestamp, the video data of the original video and the tag auxiliary information to generate comprehensive video material carrying the tag information.
In this embodiment, the tag information, the timestamp, the tag auxiliary information and the video data of the original video are integrated into the comprehensive video material through a data model; the comprehensive video material can be regarded as the original video containing more information, that is, a single file. When it is played, the player can parse it directly and display all tagged time nodes according to the timestamps; the user clicks a time node to view the tag information. Any data model able to associate and integrate these items may be used, which is not limited in this embodiment.
Since the original video identifier is unique, comprehensive video materials can be distinguished by it. When a user shares comprehensive video material to a sharing platform, other users can obtain the corresponding material through the original video identifier, learn its producer through the user information, and obtain playback permissions according to the user permission information.
Step 305: Store the comprehensive video material.
In some embodiments, the user can store the comprehensive video material in a local storage medium, at the source of the original video, or in a third-party server as needed.
FIG. 4 is a flowchart of another video material production method according to an embodiment of the present disclosure. As shown in FIG. 4, the video material production method includes:
Step 401: In response to a user-triggered tag insertion operation, obtain the timestamp of the currently played video image in the original video.
The original video is original video data published by a video resource publisher and can be played on a terminal; for example, it may be a movie, a video courseware, a documentary, or a recorded study video. Its format can be any format supported by the terminal, such as MPEG, AVI, MOV, ASF or WMV.
A video image is a certain video frame of the original video. The currently played video image is the one shown on the display while the original video is playing.
The timestamp is the video image's time node in the original video. For example, the terminal is playing the video courseware of Chapter X, Section X of a mathematics class, and the image currently on screen is the one at 9:30 (9 minutes 30 seconds) of that courseware; the timestamp of that video image is then 9:30.
A tag can be a mark, a study note, a viewing reflection and so on. For further details on marks, please refer to step 101 of the foregoing embodiment; to save space, they are not repeated here.
In some embodiments, the user can trigger the tag insertion operation on the playback page via a button, an action, or voice. In addition, the trigger can differ by terminal: when the terminal is a computer, the operation can be triggered with the mouse or keyboard; when the terminal is a mobile phone, it can be triggered by touch.
Step 402: Obtain the tag information added by the user for the video image.
The tag information includes marks, study notes, viewing reflections and so on. A mark is equivalent to a bookmark and merely indicates that the video image is relatively important. A study note is an annotation the user adds for the video image; the annotation can be an explanation of or a question about the content of a certain video image, or a summary or recap. Alternatively, the annotation summarizes or explains the video image together with a video segment preceding it.
For example, the user summarizes the content at time node 9:30 and within the 30 seconds before it (that is, the courseware content from 9:00 to 9:30) and adds a tag at time node 9:30. In some embodiments, the tag information is added directly inside the video image, or attached at the edge region of the video image.
In some embodiments, the user can add a tag to the video image by invoking a tag entry module, which can be a tag control embedded in the player program. For example, when the user operates the activation button, the tag entry module is activated and a tag editing page is shown on the terminal's display, where the user can input and edit content.
Step 403: Obtain tag auxiliary information.
The tag auxiliary information describes the tag and limits usage permissions. For example, it includes at least one of user information, user configuration information and an identifier of the original video.
In some embodiments, the user information includes a user account and/or an identifier of the terminal device used by the user. The user account distinguishes the user who watches the original video, or the user who adds tag information. It can be the account of the user of the player, the account used to log in to the server storing the original video, or the account used to log in to the terminal. The identifier of the terminal device likewise serves to distinguish the user who added the tag: when the terminal device corresponds to a user, its identifier can be used to distinguish that user.
In some embodiments, the user configuration information is permission information that the tagging user adds for the original video, including user permission information, which restricts users' access. For example, when adding tag information, the user can specify that user A may view all tag information while user B may view only the marks but not the notes; or that user C may view the odd-numbered tag information while user D may view the even-numbered tag information.
In some embodiments, the original video identifier is unique and distinguishes the original video; the corresponding original video can be obtained through it.
Step 404: Associate and integrate the tag information, the timestamp, the video data of the original video and the tag auxiliary information to generate comprehensive video material carrying the tag information.
In some embodiments, the comprehensive video material includes the tag information, the timestamp, the video data of the original video and the tag auxiliary information; moreover, the tag information, the timestamp and the tag auxiliary information are associated with the video data of the original video.
Since the original video identifier is unique, comprehensive video materials can be distinguished by it.
When a user shares comprehensive video material to a sharing platform, other users can obtain the corresponding material through the original video identifier, learn its producer through the user information, and obtain playback permissions according to the user permission information.
In some embodiments, the user can store the comprehensive video material in a local storage medium, at the source of the original video, or in a third-party server as needed.
Step 405: Share the comprehensive video material to a sharing platform, so that other users on the platform can obtain it.
In some embodiments, the user shares the comprehensive video material to a sharing platform and, through it, shares it with friends or others. The platform can be the one the user is currently logged in to, or a third-party sharing platform different from it.
After another user obtains the comprehensive video material through the sharing platform and that user's player parses it, the player determines the user's permissions from the user permission information in the tag auxiliary information and plays the material according to those permissions.
In some embodiments, when the user plays the comprehensive video material, all tagged time nodes can be displayed on the playback page for quick positioning. The user can also modify the tag information.
图5为本公开实施例提供的再一种视频资料制作方法的流程图。如图5所示,视频资料制作方法包括:
步骤501,响应用户的播放指令,判断综合视频资料是否有标签。
其中,播放器收到用户的播放指令后,判断综合视频资料是否有标签。在一些实施例中,播放器可以通过标签数据来判断综合视频资料是否有标签。
步骤502,解析综合视频资料,获得综合视频资料数据中所有的标签和标签信息。
在本实施例中,当综合视频资料中包含标签时,解析综合视频资料,得综合视频资料数据中所有的标签和标签信息。
步骤503,在播放页面展示所有的标签。
其中,播放页面是终端的显示页面的全部或部分。例如,当终 端的显示页面显示多个引用程序时,播放页面可以是终端的部分显示页面。当终端的显示页面只显示播放器时,播放页面可以是终端的全部显示页面。然而,当终端的显示页面只显示播放器时,播放页面可以是终端的部分显示页面。
在本实施例中,在播放页面展示所有的标签,有利于用户快速、准确地定位到想要的位置,缩短查找时间,提高效率,从而提高用户的体验。
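The tag index that step 503 displays could be rendered from the parsed tags as below. The `mm:ss  type` line layout is an assumption chosen for illustration; any presentation that lists every tagged time node would serve.

```python
def format_tag_index(tags):
    """Render each tagged time node as an 'mm:ss  type' line, sorted by
    position, so the viewer can jump straight to a tag."""
    lines = []
    for t in sorted(tags, key=lambda x: x["timestamp"]):
        m, s = divmod(t["timestamp"], 60)   # timestamp held in seconds
        lines.append(f"{m:02d}:{s:02d}  {t['type']}")
    return "\n".join(lines)
```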
Step 504: displaying, based on a tag selected by the user, the tag information corresponding to that tag.
In some embodiments, the user may select, by touch, the tag whose tag information is to be shown further. For example, when the user taps a tag icon, the tag information corresponding to that icon is displayed on the display page.
Step 505: receiving modification information from the user for a tag, and updating the tag information based on the modification information.
In some embodiments, if the user needs to modify tag information, the user can tap a modify button to enter the tag entry module and make the modification. In other embodiments, when the user taps a tag icon, the tag information is shown directly in the tag entry module, so the user can modify and update the tag information there.
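The update of step 505 amounts to replacing one tag's content in the tag set before re-integration. A minimal sketch, assuming tags are matched by their exact timestamp (the disclosure does not specify the matching key):

```python
def update_tag(tags, timestamp, new_text):
    """Return a new tag list in which the tag at `timestamp` carries
    `new_text`; the input list is left unmodified."""
    updated = []
    for t in tags:
        if t["timestamp"] == timestamp:
            t = {**t, "text": new_text}   # copy-on-write for the edited tag
        updated.append(t)
    return updated
```

Keeping the original list unmodified mirrors step 506, where a new composite video material is generated from the updated tag information rather than the old file being patched in place.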
Step 506: associating and integrating the updated tag information, the timestamp, and the tag auxiliary information with the video data of the original video to generate new composite video material carrying the updated tag information.
Step 507: storing the updated composite video material or sharing it on a sharing platform.
In some embodiments, the user may store the composite video material in a local storage medium, at the source of the original video, or on a third-party server as needed; or share the updated composite video material on a sharing platform; or store the updated composite video material and share it at the same time.
FIG. 6 is a flowchart of still another video material production method provided by an embodiment of the present disclosure. As shown in FIG. 6, the method includes:
Step 601: screening video resources based on video usage information to obtain the original video.
Here, the video usage information includes one or more of a video's play count, replay rate, user comments, and number of likes.
A video material production platform, or a user who wishes to obtain study material from the network, can have a back-end big-data analysis module analyze the video usage information of video materials and select valuable original videos based on the analysis results, thereby reducing unnecessary waste of resources.
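A simple screening rule over the usage signals named in step 601 might combine them into a score and keep the high scorers. The weights, scaling factors, and threshold below are invented purely for illustration; the disclosure leaves the analysis method open.

```python
def select_valuable_videos(videos, min_score=1.0):
    """Keep videos whose weighted usage score meets `min_score`.

    Each video is a dict with 'plays', 'replay_rate', 'comments', 'likes'.
    The weights are illustrative, not prescribed by the disclosure.
    """
    def score(v):
        return (0.4 * v["plays"] / 1000      # play count, scaled
                + 0.3 * v["replay_rate"]     # fraction of repeat views
                + 0.2 * v["comments"] / 100
                + 0.1 * v["likes"] / 100)
    return [v for v in videos if score(v) >= min_score]
```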
Step 602: in response to a tag-insertion operation triggered by the user, obtaining the timestamp, in the original video, of the video image currently being played.
Here, the video image is a video frame of the original video. The currently played video image is the image shown on the display screen while the original video is playing.
The timestamp refers to the time node of the video image in the original video. For example, if the terminal is playing the video courseware of Chapter X, Section X of a mathematics class, and the image currently shown on the terminal's screen is the one at 9:30 (9 minutes 30 seconds) of that courseware, then the timestamp of that video image is 9:30.
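The conversion between the playback position in seconds and the `m:ss` time-node notation used in the example (9:30 corresponding to 570 seconds) is straightforward; this is a general sketch, as the disclosure does not mandate a timestamp encoding:

```python
def to_timestamp(seconds):
    """Offset in seconds -> 'm:ss' time-node notation (570 -> '9:30')."""
    m, s = divmod(seconds, 60)
    return f"{m}:{s:02d}"

def to_seconds(timestamp):
    """Inverse conversion: '9:30' -> 570."""
    m, s = timestamp.split(":")
    return int(m) * 60 + int(s)
```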
Here, a tag may be a marker, study notes, viewing reflections, and the like. For further description of markers, see step 101 of the above embodiment, which is not repeated here for brevity.
In some embodiments, the user may trigger the tag-insertion operation on the playback page through a trigger button, a trigger gesture, or a voice command. In addition, the tag-insertion operation may be triggered in different ways on different terminals. For example, when the terminal is a computer, the operation may be triggered with a mouse or keyboard; when the terminal is a mobile phone, it may be triggered by touch.
Step 603: acquiring tag information added by the user for the video image.
Here, the tag information includes markers, study notes, viewing reflections, and the like. A marker is equivalent to a bookmark and merely indicates that the video image is relatively important. A study note is an annotation added by the user for the video image; the annotation may be an explanation of, or a question about, the content of a video image, or a summary. Alternatively, the annotation may be a summary of or commentary on the video image together with the video segment preceding it.
In some embodiments, the user may add a tag to the video image by invoking a tag entry module. The tag entry module may be a tag control embedded in the player program. For example, after the user operates an activation button, the tag entry module is activated, a tag editing page is displayed on the terminal's screen, and the user can input and edit content on that page.
Step 604: acquiring tag auxiliary information.
Here, the tag auxiliary information describes the tag and defines usage permissions. For example, the tag auxiliary information includes at least one of user information, user configuration information, and an identifier of the original video.
In some embodiments, the user information includes a user account and/or an identifier of the terminal device used by the user. The user account distinguishes users who watch the original video, or users who add tag information. The user account may be the account of the user of the player, the account used to log in to the server that stores the original video, or the account used to log in to the terminal. The identifier of the terminal device likewise serves to distinguish the user who adds the tag: when a terminal device corresponds to a user, the device identifier can be used to distinguish that user.
Step 605: associating and integrating the tag information, the timestamp, the video data of the original video, and the tag auxiliary information to generate composite video material carrying the tag information.
In some embodiments, the composite video material includes the tag information, the timestamp, the video data of the original video, and the tag auxiliary information, with the tag information, the timestamp, and the tag auxiliary information associated with the video data of the original video.
Step 606: sharing the composite video material to a sharing platform so that other users on the platform can obtain the composite video material.
In some embodiments, the user shares the composite video material to a sharing platform and, through it, with friends or other people. The sharing platform may be the platform the user is currently logged in to, or a third-party platform different from the current one.
Step 607: when playing the composite video material, parsing it to obtain all the tags and tag information in the composite video material data.
In this embodiment, when the composite video material contains tags, the composite video material is parsed to obtain all the tags and tag information it contains.
Step 608: displaying all the tags on the playback page.
Here, the playback page is all or part of the terminal's display page. For example, when the terminal's display page shows multiple application programs, the playback page may be part of the display page; when the display page shows only the player, the playback page may be the whole display page. Alternatively, even when the display page shows only the player, the playback page may still occupy only part of the display page.
Step 609: displaying, based on a tag selected by the user, the tag information corresponding to that tag.
In some embodiments, the user may select, by touch, the tag whose tag information is to be shown further. For example, when the user taps a tag icon, the tag information corresponding to that icon is displayed on the display page.
Step 610: receiving modification information from the user for a tag, and updating the tag information based on the modification information.
In some embodiments, if the user needs to modify tag information, the user can tap a modify button to enter the tag entry module and make the modification. In other embodiments, when the user taps a tag icon, the tag information is shown directly in the tag entry module, so the user can modify and update the tag information there.
Step 611: associating and integrating the updated tag information, the timestamp, and the tag auxiliary information with the video data of the original video to generate new composite video material carrying the updated tag information.
Step 612: storing the updated composite video material or sharing it on a sharing platform.
In the video material production method provided by this embodiment, in response to a tag-insertion operation triggered by the user, the timestamp, in the original video, of the video image currently being played is obtained; the tag information added by the user for the video image is acquired; and the tag information, the timestamp, and the video data of the original video are associated and integrated to generate composite video material carrying the tag information. Because the composite video material is a single file, it is easy to store and share, and it can be quickly loaded and buffered during playback. Moreover, the user can add tag information directly into the data of the original video while watching it, which is convenient to operate and preserves the integrity of the original video; when the composite video material is watched again later, the tagged positions can be located quickly and accurately, reducing search time and improving learning efficiency, thereby improving the user experience.
In a second aspect, an embodiment of the present disclosure provides a video material production apparatus. FIG. 7 is a schematic block diagram of a video material production apparatus according to an embodiment of the present disclosure. As shown in FIG. 7, the video material production apparatus includes:
a trigger module 701, configured to trigger a tag-insertion operation in response to a trigger instruction from the user.
In some embodiments, while the original video is playing, the user may trigger the tag-insertion operation on the playback page through a trigger button, a trigger gesture, or a voice command. When the terminal is a computer, the trigger operation may be performed with a mouse, a keyboard, or the like. For example, with a mouse, the user clicks a preset operation button, where the click may be a single click or a double click. As another example, with a keyboard, the user presses a preset shortcut key, which may be any single key or a combination of keys. The specific configuration and the type of the shortcut key are not limited here.
a first acquisition module 702, configured to obtain the timestamp, in the original video, of the video image currently being played.
Here, the timestamp refers to the time node of the video image in the original video, and the currently played video image is the image shown on the terminal's display screen at the current moment. For example, if the terminal is playing the video courseware of Chapter X, Section X of a mathematics class, and the image currently shown on the terminal's screen is the one at 9:30 (9 minutes 30 seconds) of that courseware, then the timestamp of that video image is 9:30.
a second acquisition module 703, configured to acquire tag information added by the user for the video image.
Here, the tag information includes markers, study notes, viewing reflections, and the like. A marker is equivalent to a bookmark and merely indicates that the video image is relatively important. A study note is an annotation added by the user for the video image; the annotation may be an explanation of, or a question about, the content of a video image, or a summary. Alternatively, the annotation may be a summary of or commentary on the video image together with the video segment preceding it.
In some embodiments, the user may add a tag to the video image by invoking a tag entry module. The tag entry module may be a tag control embedded in the player program. For example, after the user operates an activation button, the tag entry module is activated, a tag editing page is displayed on the terminal's screen, and the user can input and edit content on that page.
In some embodiments, the second acquisition module 703 is the tag entry module. The tag entry module may be an application program installed on the terminal and associated with the player, such as a notepad or sticky-note application. When the user touches the activation button, the application installed on the terminal is invoked and the display screen shows its interface. For example, when a notepad application is associated with the player and the user slides the activation button, the notepad is invoked, the display screen shows its interface, and the user can edit the tag content in the notepad. When editing is finished, the user can tap a done button, and the tag content is automatically associated with the timestamp and the video data of the original video.
In some embodiments, when the user activates the tag-insertion operation, the activated tag entry module or the invoked editable application may occupy the whole display page or only part of it.
an association module 704, configured to associate and integrate the tag information, the timestamp, and the video data of the original video to generate composite video material carrying the tag information.
Here, the composite video material contains not only the video data of the original video but also the tag information and the timestamp; the timestamp is associated with the tag information, and the tag information and timestamp are in turn associated with the video data of the original video.
In this embodiment, the tag information, the timestamp, and the video data of the original video are integrated into the composite video material through a data model, and the composite video material can be regarded as the original video carrying additional information, i.e., a single file. When the composite video material is played, the player can parse it directly and, according to the timestamps, display all the time nodes at which tags were added; the user only needs to click the corresponding time node to view the tag information. The data model may be any model capable of associating and integrating the tag information, the timestamp, and the video data of the original video, which is not limited in this embodiment.
In some embodiments, the player may display the tagged time nodes with preset icons. A preset icon may be a cartoon figure, an animal figure, a pointer figure, or a time figure. For example, a time figure shows the time in hours, minutes, and seconds. In some embodiments, if the composite video material is shorter than one hour, the time figure shows only minutes and seconds; if the composite video material exceeds one hour, the time figure shows hours, minutes, and seconds.
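The duration-dependent time figure just described can be sketched as follows; the zero-padded formatting is an assumption, while the under-one-hour versus over-one-hour distinction follows the embodiment.

```python
def time_icon_label(node_seconds, total_seconds):
    """Label for a tagged time node: minutes and seconds when the composite
    video material runs under one hour, hours:minutes:seconds otherwise."""
    h, rem = divmod(node_seconds, 3600)
    m, s = divmod(rem, 60)
    if total_seconds < 3600:
        return f"{h * 60 + m:02d}:{s:02d}"
    return f"{h:02d}:{m:02d}:{s:02d}"
```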
In the video material production apparatus provided by this embodiment, the trigger module responds to a tag-insertion operation triggered by the user; the first acquisition module obtains the timestamp, in the original video, of the video image currently being played; the second acquisition module acquires the tag information added by the user for the video image; and the association module associates and integrates the tag information, the timestamp, and the video data of the original video to generate composite video material carrying the tag information. Because the composite video material is a single file, it is easy to store and share, and it can be quickly loaded and buffered during playback. Moreover, the user can add tag information directly into the data of the original video while watching it, which is convenient to operate and preserves the integrity of the original video; when the composite video material is watched again later, the tagged positions can be located quickly and accurately, reducing search time and improving learning efficiency, thereby improving the user experience.
In a third aspect, referring to FIG. 8, an embodiment of the present disclosure provides an electronic device, including:
one or more processors 801;
a memory 802 storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any of the video material production methods described above; and
one or more I/O interfaces 803, connected between the processors and the memory and configured to enable information interaction between the processors and the memory.
Here, the processor 801 is a device with data processing capability, including but not limited to a central processing unit (CPU); the memory 802 is a device with data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, and the like), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O (read/write) interface 803 is connected between the processor 801 and the memory 802 to enable information interaction between them, and includes but is not limited to a data bus.
In some embodiments, the processor 801, the memory 802, and the I/O interface 803 are interconnected by a bus and thereby connected to the other components of the computing device.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium storing a computer program which, when executed by a processor, implements any of the video material production methods described above.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and apparatus, may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all physical components may be implemented as software executed by a processor such as a central processing unit, digital signal processor, or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, as would be apparent to one skilled in the art, features, characteristics, and/or elements described in connection with a particular embodiment may be used alone, or in combination with features, characteristics, and/or elements described in connection with other embodiments, unless expressly indicated otherwise. Accordingly, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the scope of the present disclosure as set forth in the appended claims.

Claims (13)

  1. A video material production method, comprising:
    in response to a tag-insertion operation triggered by a user, obtaining a timestamp, in an original video, of a video image currently being played;
    acquiring tag information added by the user for the video image; and
    associating and integrating the tag information, the timestamp, and video data of the original video to generate composite video material carrying the tag information.
  2. The method according to claim 1, wherein the acquiring the tag information added by the user for the video image comprises:
    obtaining, through a tag entry module, the tag information added by the user for the video image.
  3. The method according to claim 1 or 2, wherein before the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the composite video material carrying the tag information, the method further comprises:
    acquiring tag auxiliary information, wherein the tag auxiliary information describes the tag and defines usage permissions; and
    the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the composite video material carrying the tag information comprises:
    associating and integrating the tag information, the timestamp, the tag auxiliary information, and the video data of the original video to generate the composite video material carrying the tag information.
  4. The method according to claim 3, wherein the tag auxiliary information comprises at least one of user information, user configuration information, and an identifier of the original video.
  5. The method according to claim 4, wherein the user information comprises a user account and/or an identifier of a terminal device used by the user, and the user configuration information comprises user permission information.
  6. The method according to any one of claims 1-5, wherein after the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the composite video material carrying the tag information, the method further comprises:
    in response to a playback instruction from the user, parsing the composite video material to obtain all the tags and tag information in the composite video material data;
    displaying all the tags on a playback page; and
    displaying, based on a tag selected by the user, the tag information corresponding to the tag.
  7. The method according to claim 6, wherein after the displaying, based on the tag selected by the user, the tag information corresponding to the tag, the method further comprises:
    receiving modification information from the user for the tag, and updating the tag information based on the modification information; and
    associating and integrating the updated tag information, the timestamp, and the tag auxiliary information with the video data of the original video to generate new composite video material.
  8. The method according to any one of claims 1-5, wherein after the associating and integrating the tag information, the timestamp, and the video data of the original video to generate the composite video material carrying the tag information, the method further comprises:
    sharing the composite video material to a sharing platform so that other users on the sharing platform can obtain the composite video material.
  9. The method according to any one of claims 1-5, wherein before the obtaining, in response to the tag-insertion operation triggered by the user, the timestamp, in the original video, of the video image currently being played, the method further comprises:
    screening video resources based on video usage information to obtain the original video, wherein the video usage information comprises one or more of a play count, a replay rate, user comments, and a number of likes of a video.
  10. The method according to any one of claims 1-5, wherein the tag information comprises a marker and/or a note.
  11. A video material production apparatus, comprising:
    a trigger module, configured to trigger a tag-insertion operation in response to a trigger instruction from a user;
    a first acquisition module, configured to obtain a timestamp, in an original video, of a video image currently being played;
    a second acquisition module, configured to acquire tag information added by the user for the video image; and
    an association module, configured to associate and integrate the tag information, the timestamp, and video data of the original video to generate composite video material carrying the tag information.
  12. An electronic device, comprising:
    one or more processors;
    a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-10; and
    one or more I/O interfaces, connected between the processors and the storage device and configured to enable information interaction between the processors and the storage device.
  13. A computer-readable medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1-10.
PCT/CN2020/134291 2020-06-24 2020-12-07 Video material production method and apparatus, electronic device, and computer-readable medium WO2021258655A1 (zh)
