CN109286850B - Video annotation method and terminal based on bullet screen - Google Patents


Info

Publication number
CN109286850B
CN109286850B (application CN201710605238.0A)
Authority
CN
China
Prior art keywords
video
segment
user
file
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710605238.0A
Other languages
Chinese (zh)
Other versions
CN109286850A (en)
Inventor
邓益群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd filed Critical TCL Technology Group Co Ltd
Priority to CN201710605238.0A priority Critical patent/CN109286850B/en
Publication of CN109286850A publication Critical patent/CN109286850A/en
Application granted granted Critical
Publication of CN109286850B publication Critical patent/CN109286850B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H — Electricity; H04 — Electric communication technique; H04N — Pictorial communication, e.g. television; H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/4756 — End-user interface for inputting end-user data for rating content, e.g. scoring a recommended movie
    • H04N 21/232 — Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • H04N 21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/462 — Content or additional data management, e.g. controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N 21/4667 — Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • H04N 21/4668 — Learning process for intelligent management, e.g. learning user preferences for recommending content such as movies
    • H04N 21/4788 — Supplemental services communicating with other users, e.g. chatting

Abstract

The invention discloses a bullet-screen-based video annotation method and terminal, wherein the method comprises the following steps: acquiring bullet screen comment data of a video file; analyzing the bullet screen comment data, extracting a set of candidate segments from the video file according to the analysis result, and acquiring the segment markup file corresponding to each candidate segment; acquiring the viewing preference information of a user; finding, among the segment markup files corresponding to the candidate segments, the segment markup files that match the user's viewing preference information; and annotating video segments in the video file opened by the user according to the matched segment markup files. The invention enables the terminal to provide personalized video annotation according to each user's preferences, without manual participation, thereby improving the user's viewing experience.

Description

Video annotation method and terminal based on bullet screen
Technical Field
The invention relates to the technical field of computers, in particular to a video annotation method and a terminal based on a barrage.
Background
With the growing demand for user interaction, bullet screen (danmaku) technology has gradually become a common feature of video playback. While watching a video, users can communicate with one another through a real-time bullet screen system. The real-time nature of the bullet screen provides a new means of presenting information at the relevant moment.
At present, watching videos on a terminal is a common form of leisure. With the rapid development of internet technology, production costs have dropped sharply, and the number and total duration of videos on the internet have grown exponentially, so that people hardly have enough time to keep up with the massive supply of video programs. A way to quickly grasp the content of a film therefore becomes important. For the user, a good video clip makes it possible to judge whether an as-yet-unwatched film or television program is interesting, and also allows classic scenes to be revisited through the clip.
The traditional method of manual editing is time-consuming and labor-intensive, and in effect creates additional video sources, so the points that really interest users cannot be captured well. The existing method of dotting and labeling key video segments avoids the problem of additional video sources and reduces the complexity of switching between different video sources: the user can quickly browse key segments by dragging the mouse to an annotation point. However, the video must still be annotated manually, personalized video annotation cannot be provided according to each user's preferences, and the user's viewing experience suffers.
Disclosure of Invention
The invention provides a bullet-screen-based video annotation method and terminal, which enable the terminal to provide personalized video annotation according to the different preferences of each user, without manual participation, and improve the user's viewing experience.
In a first aspect, the present invention provides a video annotation method based on a bullet screen, including:
acquiring barrage comment data of a video file;
analyzing the barrage comment data, extracting a candidate segment set from the video file according to an analysis result, and acquiring segment marking files corresponding to the candidate segments;
acquiring film watching preference information of a user;
finding out a segment label file corresponding to the viewing preference information of the user from the segment label files corresponding to the candidate segments;
and according to the segment marking file corresponding to the watching preference information of the user, carrying out video segment marking on the video file opened by the user.
In a second aspect, the present invention provides a terminal, comprising:
the barrage data acquisition unit is used for acquiring barrage comment data of the video file;
the barrage data analysis unit is used for analyzing the barrage comment data, extracting a candidate segment set from the video file according to an analysis result, and acquiring segment marking files corresponding to the candidate segments;
a user preference acquiring unit for acquiring the viewing preference information of the user;
a markup file matching unit, configured to find a fragment markup file corresponding to the viewing preference information of the user from the fragment markup files corresponding to the candidate fragments;
and the video annotation execution unit is used for carrying out video segment annotation on the video file opened by the user according to the segment annotation file corresponding to the watching preference information of the user.
In a third aspect, the present invention provides another terminal, including a processor, an input device, an output device, and a memory, which are connected to one another, where the memory is used to store application program instructions that support the terminal in executing the method, and the processor is configured to call the application program instructions to execute the bullet-screen-based video annotation method of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the bullet screen based video annotation method of the first aspect described above.
In the invention, bullet screen comment data of a video file is acquired; the bullet screen comment data is analyzed, a set of candidate segments is extracted from the video file according to the analysis result, and the segment markup file corresponding to each candidate segment is acquired; the user's viewing preference information is acquired, the segment annotations matching that preference information are queried from the server, and the video watched by the user is annotated according to the queried segment annotations. The terminal can thus provide personalized video annotation according to the different preferences of each user, without manual participation, improving the user's viewing experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a video annotation method based on a bullet screen according to a first embodiment of the present invention;
fig. 2 is a flowchart illustrating a specific implementation of step S101 in a video annotation method based on a bullet screen according to a first embodiment of the present invention;
fig. 3 is a flowchart illustrating a specific implementation of step S201 in a video annotation method based on a bullet screen according to a first embodiment of the present invention;
fig. 4 is a schematic flowchart of a video annotation method based on a bullet screen according to a second embodiment of the present invention;
fig. 5 is a schematic block diagram of a terminal according to a third embodiment of the present invention;
fig. 6 is a schematic block diagram of a terminal according to a fourth embodiment of the present invention;
fig. 7 is a schematic block diagram of a terminal according to a fifth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In particular implementations, the terminals described in embodiments of the invention include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the device is not a portable communication device, but a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or touchpad).
In the discussion that follows, a terminal that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
Referring to fig. 1, a schematic flow chart of a video annotation method based on a bullet screen according to a first embodiment of the present invention is provided, and as shown in fig. 1, the method may include:
step S101, acquiring barrage comment data of the video file.
The barrage comment data is video comments made by users watching the video file, and includes but is not limited to comment time and comment content.
And step S102, analyzing the barrage comment data, extracting a candidate segment set from the video file according to an analysis result, and acquiring segment marking files corresponding to the candidate segments.
Referring to fig. 2, in an embodiment, step S102 specifically includes:
step S201, analyzing the barrage comment data to obtain a segment theme related to the video file. Further, referring to fig. 3, step S201 specifically includes:
step S301, performing semantic analysis on the barrage comment data.
The semantic analysis includes but is not limited to word segmentation, part-of-speech tagging, entity word extraction, grammar parsing, TF-IDF analysis, word frequency statistics and the like of comment contents in the barrage comment data.
And step S302, filtering the barrage comment data according to a semantic analysis result.
Wherein filtering the bullet screen comment data according to the semantic analysis result comprises:
filtering out bullet screen comment data that does not conform to grammar, according to the grammar parsing result; and filtering out bullet screen comment data with low word frequency, according to the word frequency statistics.
And step S303, performing theme analysis on the filtered barrage comment data, and extracting a segment theme related to the video file.
The filtered bullet screen comment data can be sorted according to the word frequency statistics, and the keywords in the top-ranked comment data are extracted as the segment topics related to the video segments.
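The word-frequency filtering and topic extraction described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; it assumes whitespace-tokenizable comment text, whereas a real pipeline would apply the word segmentation and part-of-speech tagging of step S301.

```python
from collections import Counter

def extract_segment_topics(comments, min_freq=3, top_k=5):
    """Count keyword frequencies across bullet-screen comments, drop
    low-frequency words (the filtering step), and return the top-ranked
    keywords as candidate segment topics."""
    counts = Counter()
    for text in comments:
        # Stand-in tokenizer; a real system would use Chinese word
        # segmentation and entity-word extraction here.
        counts.update(text.lower().split())
    frequent = Counter({w: c for w, c in counts.items() if c >= min_freq})
    return [w for w, _ in frequent.most_common(top_k)]
```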
Step S202, extracting the video clip corresponding to the clip theme from the video file.
After the segment topics related to the video are extracted from the bullet screen comment data, video segments within a preset time range before and after the comment time can be clipped from the video file, according to the comment time of the bullet screen comment data, as the video segments corresponding to each segment topic. In a specific application, the preset time range can be 1-5 s.
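As a sketch, clipping a window around a comment's timestamp can be expressed as below; the function and parameter names are illustrative, with `margin` standing in for the preset 1-5 s range.

```python
def clip_window(comment_time, video_length, margin=5.0):
    """Return the (start, end) of a clip spanning `margin` seconds
    before and after a comment's timestamp, clamped to the bounds
    of the video file."""
    start = max(0.0, comment_time - margin)
    end = min(video_length, comment_time + margin)
    return start, end
```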
Step S203, counting the playing time and the bullet screen density of the video clip corresponding to each clip theme;
and step S204, selecting a candidate segment set meeting preset conditions from all the video segments according to the playing time and the bullet screen density, wherein the preset conditions are that the playing time is greater than a preset time threshold and/or the bullet screen density is greater than a preset bullet screen density threshold.
In a specific application, the longer a video segment's accumulated play time and the greater its bullet screen density, the more attractive the segment is to the audience. Therefore, video segments whose play time exceeds a preset time threshold and/or whose bullet screen density exceeds a preset bullet screen density threshold are selected, from the video segments corresponding to the respective segment topics, as candidate segments, which ensures that the candidate segments are highlights popular with the audience.
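The selection in step S204 reduces to a simple filter. A minimal sketch, assuming each segment record carries its accumulated play time and bullet screen density (field names are assumptions, not the patent's data format):

```python
def select_candidates(segments, time_threshold=60.0, density_threshold=0.5):
    """Keep segments whose play time and/or bullet screen density
    exceed the preset thresholds (the and/or condition of step S204,
    shown here as 'or')."""
    return [s for s in segments
            if s["play_time"] > time_threshold
            or s["density"] > density_threshold]
```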
Step S205, obtaining a segment markup file corresponding to each candidate segment in the candidate segment set.
The segment annotation file of the candidate segment includes, but is not limited to, a segment topic, a segment length, and a segment start time. In a particular application, the clip markup file can be named by the clip theme of the video clip.
Step S103, acquiring the viewing preference information of the user. Further, step S103 specifically includes:
and acquiring basic attribute information and viewing behavior information of the user, and analyzing and acquiring the viewing preference information of the user according to the basic attribute information and the viewing behavior information.
The basic attribute information of the user includes but is not limited to the age, the sex and other information of the user; the viewing behavior information of the user comprises the name, the type and other information of videos historically watched by the user.
In a specific application, a user can register a login account for the client before watching videos on it, filling in basic attribute information during registration. After the user logs in, video-watching behavior is recorded under the account, and the terminal can read the user's basic attribute information and viewing behavior information through the logged-in account.
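As one simple illustration, inferring viewing preference from viewing-behavior records might rank the genres of historically watched videos; the `genre` field is a hypothetical record layout, not something the patent specifies.

```python
from collections import Counter

def infer_viewing_preference(watch_history, top_k=3):
    """Rank the genres appearing in the account's viewing-behavior
    records; the most frequent genres serve as the user's viewing
    preference information."""
    genres = Counter(v["genre"] for v in watch_history)
    return [g for g, _ in genres.most_common(top_k)]
```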
Step S104, finding out the segment label file corresponding to the user' S viewing preference information from the segment label file corresponding to each candidate segment.
Wherein the user's viewing preference information includes, but is not limited to, the types of video and the plots the user likes to watch. After acquiring the user's viewing preference information, the terminal can query the server for the segment markup files matching that preference. For example, if the user's viewing preference is for martial arts videos, all segment markup files related to martial arts can be queried from the server according to this preference information and returned to the terminal as the segment markup files matching the user's preference.
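The matching of step S104 could be sketched as a lookup of markup files whose segment topic falls within the user's preferred types; the `topic` field name is an assumption for illustration.

```python
def match_markup_files(markup_files, preferences):
    """Return the segment markup files whose topic matches one of the
    user's preferred video types or plots (case-insensitive)."""
    prefs = {p.lower() for p in preferences}
    return [m for m in markup_files if m["topic"].lower() in prefs]
```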
Step S105, according to the segment marking file corresponding to the watching preference information of the user, carrying out video segment marking on the video file opened by the user.
After receiving the segment markup files corresponding to the user's viewing preference information returned by the server, the terminal can annotate video segments in the video file opened by the user according to those files. In a specific application, the acquired segment markup files corresponding to the user's viewing preference information may cover a plurality of segments. Since a segment markup file includes, but is not limited to, the segment topic, segment duration, and segment start time, the terminal can add segment annotations at the corresponding positions of the video file according to this information.
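Turning the matched markup files into on-screen segment annotations amounts to placing markers by start time. A sketch under the assumption, stated above, that each file carries the segment topic, duration, and start time:

```python
def build_markers(markup_files):
    """Convert segment markup files into progress-bar markers
    (position, end time, label), sorted by position."""
    markers = [{"position": m["start"],
                "end": m["start"] + m["duration"],
                "label": m["topic"]} for m in markup_files]
    return sorted(markers, key=lambda x: x["position"])
```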
As can be seen from the above, the bullet-screen-based video annotation method provided by this embodiment acquires bullet screen comment data of a video file; analyzes the bullet screen comment data, extracts a set of candidate segments from the video file according to the analysis result, and acquires the segment markup file corresponding to each candidate segment; acquires the user's viewing preference information, queries the segment annotations matching that preference information among the segment markup files of the candidate segments, and annotates the video watched by the user according to the queried annotations. The terminal can thus provide personalized video annotation according to the different preferences of each user, without manual participation, improving the user's viewing experience.
Referring to fig. 4, it is a schematic flowchart of a video annotation method based on bullet screens according to a second embodiment of the present invention. As shown in fig. 4, relative to the previous embodiment, the bullet-screen-based video annotation method provided by this embodiment further includes:
step S406, if the video playing is in the video segment playing mode, the video is opened and then the segment is played in a fast way according to the video segment marks.
In a specific application, video playback includes a video segment playing mode. After the user switches playback into this mode, the terminal automatically plays the previously annotated video segments in chronological order, so that the video's highlights can be browsed quickly.
Step S407, if the video playing is in the full video watching mode, the full video playing is performed after the video is opened.
Preferably, referring to fig. 4, relative to the previous embodiment, the bullet-screen-based video annotation method provided in this embodiment further includes:
step S408, in the process of playing the full video, if a fast forward instruction input by a user is received, jumping to the next marked video segment according to the marking of the video segment to play the video;
step S409, in the process of playing the full video, if a fast-backward instruction input by a user is received, jumping to the last marked video segment according to the marking of the video segment to play the video.
In a specific application, when a user watches a video in full video mode and does not want to watch the currently playing segment, a fast-forward or fast-rewind instruction can be input by pressing a preset fast-forward or fast-rewind key. On receiving the instruction, the terminal jumps directly to the next annotated video segment adjacent to the current one for a fast-forward instruction, or to the previous annotated video segment for a fast-rewind instruction. The user does not need to drag the progress bar on the video display interface to search for the desired segment, and can fast-forward or fast-rewind precisely to a favorite video segment.
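The jump logic of steps S408/S409 can be sketched with a binary search over the sorted start times of the annotated segments; this is an illustrative sketch, not the patent's implementation.

```python
import bisect

def jump_target(starts, current, forward=True):
    """Return the start time of the next annotated segment (fast
    forward) or the previous one (fast rewind) relative to the
    current playback position; None if no such segment exists."""
    if forward:
        i = bisect.bisect_right(starts, current)
        return starts[i] if i < len(starts) else None
    i = bisect.bisect_left(starts, current)
    return starts[i - 1] if i > 0 else None
```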
It should be noted that, since the implementation manners of step S401 to step S405 in this embodiment are completely the same as the implementation manners of step S101 to step S105 in the previous embodiment, detailed descriptions thereof are omitted here.
As can be seen from the above, the bullet-screen-based video annotation method provided by this embodiment likewise enables the terminal to provide personalized video annotation according to the different preferences of each user, without manual participation, improving the user's viewing experience. Compared with the previous embodiment, this embodiment can also quickly play the annotated highlight segments to the user when the terminal is in video segment playing mode; and in full video playing mode, the terminal jumps directly to a video segment of interest according to a fast-forward or fast-rewind instruction input by the user, further improving the viewing experience.
Fig. 5 is a schematic block diagram of a terminal according to a third embodiment of the present invention. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 5, the present embodiment provides a terminal 100, including:
the barrage data acquiring unit 11 is used for acquiring barrage comment data of the video file;
the barrage data analysis unit 12 is configured to analyze the barrage comment data, extract a candidate segment set from the video file according to an analysis result, and acquire a segment label file corresponding to each candidate segment;
a user preference acquiring unit 13 for acquiring viewing preference information of a user;
a markup document matching unit 14, configured to find a fragment markup document corresponding to the viewing preference information of the user from the fragment markup documents corresponding to the candidate fragments;
and the video annotation executing unit 15 is configured to perform video segment annotation on the video file opened by the user according to the segment annotation file corresponding to the viewing preference information of the user.
Optionally, the bullet screen data analysis unit 12 is specifically configured to:
analyze the bullet screen comment data to obtain segment topics related to the video file;
extract from the video file the video segment corresponding to each segment topic;
count the playing duration and the bullet screen density of the video segment corresponding to each segment topic;
select, from all the video segments, a candidate segment set meeting a preset condition according to the playing duration and the bullet screen density, where the preset condition is that the playing duration is greater than a preset duration threshold and/or the bullet screen density is greater than a preset bullet screen density threshold;
and acquire the segment annotation file corresponding to each candidate segment in the candidate segment set.
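The duration/density selection rule above might look like the following sketch. The per-topic segment dictionaries and the disjunctive reading of "and/or" are assumptions made for illustration.

```python
def select_candidates(segments, min_duration, min_density):
    """Keep every segment whose playing duration exceeds min_duration
    or whose bullet-screen density (comments per second) exceeds
    min_density. The patent's "and/or" also allows a conjunctive
    variant, which would replace `or` with `and` below."""
    kept = []
    for seg in segments:
        duration = seg["end"] - seg["start"]
        density = seg["comments"] / duration if duration > 0 else 0.0
        if duration > min_duration or density > min_density:
            kept.append(seg)
    return kept


kept = select_candidates(
    [{"start": 0, "end": 30, "comments": 300},  # 30 s long, 10 comments/s
     {"start": 0, "end": 5, "comments": 2}],    # 5 s long, 0.4 comments/s
    min_duration=10, min_density=5)
```

Only the first segment survives: it exceeds both thresholds, while the second exceeds neither.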
Optionally, referring to fig. 6, in a fourth embodiment, the terminal 100 further includes:
and a video quick-view playing unit 16, configured to, when video playing is in the video segment playing mode, perform quick-view playing of the annotated segments according to the video segment annotations after the video is opened.
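In segment playing mode, quick-view playing amounts to playing only the annotated ranges back to back. A minimal sketch, assuming the annotations are plain `(start_sec, end_sec)` ranges (the patent does not specify their representation):

```python
def quick_view_playlist(annotated_segments):
    """Sort the annotated (start_sec, end_sec) ranges and merge any
    overlapping ones, yielding the ordered ranges a player would step
    through back to back in segment playing mode."""
    merged = []
    for start, end in sorted(annotated_segments):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous range: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


playlist = quick_view_playlist([(300, 340), (60, 95), (90, 120)])
```

Merging overlaps prevents the player from replaying frames shared by two annotated segments.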
Optionally, referring to fig. 6, in another embodiment, the terminal 100 further includes:
a full video playing unit 17, configured to perform full video playing after the video is opened, when video playing is in the full video viewing mode;
a video fast-forward processing unit 18, configured to, during full video playing, jump to the next annotated video segment according to the video segment annotations and play it, when a fast-forward instruction input by the user is received;
and a video fast-rewind processing unit 19, configured to, during full video playing, jump to the previous annotated video segment according to the video segment annotations and play it, when a fast-rewind instruction input by the user is received.
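The fast-forward and fast-rewind jumps of units 18 and 19 reduce to finding the next or previous annotated start time relative to the current playback position. A sketch under the assumption that the annotations are kept as a sorted list of segment start times:

```python
import bisect


def seek_target(marked_starts, position, forward):
    """Return the start time of the next (fast-forward) or previous
    (fast-rewind) annotated segment relative to the current playback
    position, or None when no segment exists in that direction.
    `marked_starts` must be sorted in ascending order."""
    if forward:
        i = bisect.bisect_right(marked_starts, position)
        return marked_starts[i] if i < len(marked_starts) else None
    i = bisect.bisect_left(marked_starts, position)
    return marked_starts[i - 1] if i > 0 else None


starts = [60, 300, 700]                          # assumed annotated start times
nxt = seek_target(starts, 100, forward=True)     # fast-forward target: 300
prev = seek_target(starts, 100, forward=False)   # fast-rewind target: 60
```

Using binary search keeps each jump O(log n) even when a long video carries many annotated segments.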
It should be noted that, since each unit in the terminal provided in this embodiment of the present invention is based on the same concept as the method embodiments of the present invention, it brings the same technical effects as the method embodiments; for details, reference may be made to the description in the method embodiments, which is not repeated here.
Therefore, the terminal provided by the embodiment of the present invention can provide personalized video annotation according to each user's preferences, without manual involvement, thereby improving the user's viewing experience.
Referring to fig. 7, a schematic block diagram of a terminal according to a fifth embodiment of the present invention is shown. The terminal in this embodiment as shown in the figure may include: one or more processors 701; one or more input devices 702, one or more output devices 703, and memory 704. The processor 701, the input device 702, the output device 703, and the memory 704 are connected by a bus 705. The memory 704 is used to store application program instructions and the processor 701 is used to execute the application program instructions stored by the memory 704. Wherein the processor 701 is configured to:
acquiring bullet screen comment data of a video file;
analyzing the bullet screen comment data, extracting a candidate segment set from the video file according to the analysis result, and acquiring the segment annotation file corresponding to each candidate segment;
acquiring viewing preference information of a user;
finding, from the segment annotation files corresponding to the candidate segments, the segment annotation file corresponding to the user's viewing preference information;
and annotating video segments in the video file opened by the user according to the segment annotation file corresponding to the user's viewing preference information.
Optionally, the processor 701 is further configured to:
if video playing is in the video segment playing mode, performing quick-view playing of the annotated segments according to the video segment annotations after the video is opened;
and if video playing is in the full video viewing mode, performing full video playing after the video is opened.
Optionally, the processor 701 is further configured to:
during full video playing, if a fast-forward instruction input by the user is received, jumping to the next annotated video segment according to the video segment annotations and playing it;
during full video playing, if a fast-rewind instruction input by the user is received, jumping to the previous annotated video segment according to the video segment annotations and playing it.
Optionally, the processor 701 is further configured to:
analyzing the bullet screen comment data to obtain segment topics related to the video file;
extracting from the video file the video segment corresponding to each segment topic;
counting the playing duration and the bullet screen density of the video segment corresponding to each segment topic;
selecting, from all the video segments, a candidate segment set meeting a preset condition according to the playing duration and the bullet screen density, where the preset condition is that the playing duration is greater than a preset duration threshold and/or the bullet screen density is greater than a preset bullet screen density threshold;
and acquiring the segment annotation file corresponding to each candidate segment in the candidate segment set.
It should be understood that, in this embodiment of the present invention, the processor 701 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The input device 702 may include a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output device 703 may include a display (LCD, etc.), a speaker, etc.
The memory 704 may include both read-only memory and random-access memory, and provides instructions and data to the processor 701. A portion of the memory 704 may also include non-volatile random access memory. For example, the memory 704 may also store device type information.
In a specific implementation, the processor 701, the input device 702, and the output device 703 described in this embodiment of the present invention may carry out the implementations described in the bullet-screen-based video annotation method provided by the embodiments of the present invention, and may also carry out the implementations of the terminal described in the embodiments of the present invention, which are not repeated here.
In another embodiment of the invention, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements:
acquiring bullet screen comment data of a video file;
analyzing the bullet screen comment data, extracting a candidate segment set from the video file according to the analysis result, and acquiring the segment annotation file corresponding to each candidate segment;
acquiring viewing preference information of a user;
finding, from the segment annotation files corresponding to the candidate segments, the segment annotation file corresponding to the user's viewing preference information;
and annotating video segments in the video file opened by the user according to the segment annotation file corresponding to the user's viewing preference information.
Optionally, the computer program when executed by the processor implements:
if video playing is in the video segment playing mode, performing quick-view playing of the annotated segments according to the video segment annotations after the video is opened;
and if video playing is in the full video viewing mode, performing full video playing after the video is opened.
Optionally, the computer program when executed by the processor implements:
during full video playing, if a fast-forward instruction input by the user is received, jumping to the next annotated video segment according to the video segment annotations and playing it;
during full video playing, if a fast-rewind instruction input by the user is received, jumping to the previous annotated video segment according to the video segment annotations and playing it.
Optionally, the computer program when executed by the processor implements:
analyzing the bullet screen comment data to obtain segment topics related to the video file;
extracting from the video file the video segment corresponding to each segment topic;
counting the playing duration and the bullet screen density of the video segment corresponding to each segment topic;
selecting, from all the video segments, a candidate segment set meeting a preset condition according to the playing duration and the bullet screen density, where the preset condition is that the playing duration is greater than a preset duration threshold and/or the bullet screen density is greater than a preset bullet screen density threshold;
and acquiring the segment annotation file corresponding to each candidate segment in the candidate segment set.
The computer readable storage medium may be an internal storage unit of the terminal according to any of the foregoing embodiments, for example, a hard disk or a memory of the terminal. The computer readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the terminal. The computer-readable storage medium is used for storing the computer program and other programs and data required by the terminal. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the foregoing description has described the components and steps of each example generally in terms of their functions. Whether these functions are implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be considered to depart from the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and the unit described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A video annotation method based on bullet screens is characterized by comprising the following steps:
acquiring bullet screen comment data of a video file, wherein the bullet screen comment data comprises comment times and comment content;
analyzing the bullet screen comment data, extracting a candidate segment set from the video file according to the analysis result, and acquiring a segment annotation file corresponding to each candidate segment;
acquiring viewing preference information of a user;
finding, from the segment annotation files corresponding to the candidate segments, the segment annotation file corresponding to the user's viewing preference information;
and annotating video segments in the video file opened by the user according to the segment annotation file corresponding to the user's viewing preference information.
2. The bullet screen based video annotation method of claim 1, wherein after the video segment annotation of the video file opened by the user, the method further comprises:
if video playing is in a video segment playing mode, performing quick-view playing of the annotated segments according to the video segment annotations after the video is opened;
and if video playing is in a full video viewing mode, performing full video playing after the video is opened.
3. The bullet screen based video annotation method of claim 2, further comprising:
during full video playing, if a fast-forward instruction input by the user is received, jumping to the next annotated video segment according to the video segment annotations and playing it;
during full video playing, if a fast-rewind instruction input by the user is received, jumping to the previous annotated video segment according to the video segment annotations and playing it.
4. The bullet screen based video annotation method according to claim 1, wherein the analyzing the bullet screen comment data, extracting a candidate segment set from the video file according to the analysis result, and acquiring the segment annotation file corresponding to each candidate segment comprises:
analyzing the bullet screen comment data to obtain segment topics related to the video file;
extracting from the video file the video segment corresponding to each segment topic;
counting the playing duration and the bullet screen density of the video segment corresponding to each segment topic;
selecting, from all the video segments, a candidate segment set meeting a preset condition according to the playing duration and the bullet screen density, wherein the preset condition is that the playing duration is greater than a preset duration threshold and/or the bullet screen density is greater than a preset bullet screen density threshold;
and acquiring the segment annotation file corresponding to each candidate segment in the candidate segment set.
5. A terminal, comprising:
a bullet screen data acquiring unit, configured to acquire bullet screen comment data of a video file, the bullet screen comment data comprising comment times and comment content;
a bullet screen data analysis unit, configured to analyze the bullet screen comment data, extract a candidate segment set from the video file according to the analysis result, and acquire a segment annotation file corresponding to each candidate segment;
a user preference acquiring unit, configured to acquire viewing preference information of a user;
an annotation file matching unit, configured to find, from the segment annotation files corresponding to the candidate segments, the segment annotation file corresponding to the user's viewing preference information;
and a video annotation executing unit, configured to annotate video segments in the video file opened by the user according to the segment annotation file corresponding to the user's viewing preference information.
6. The terminal of claim 5, further comprising:
a video quick-view playing unit, configured to perform quick-view playing of the annotated segments according to the video segment annotations after the video is opened, if video playing is in a video segment playing mode;
and a full video playing unit, configured to perform full video playing after the video is opened, if video playing is in a full video viewing mode.
7. The terminal of claim 6, further comprising:
a video fast-forward processing unit, configured to jump to the next annotated video segment according to the video segment annotations and play it, if a fast-forward instruction input by the user is received during full video playing;
and a video fast-rewind processing unit, configured to jump to the previous annotated video segment according to the video segment annotations and play it, if a fast-rewind instruction input by the user is received during full video playing.
8. The terminal according to claim 5, wherein the bullet screen data analysis unit is specifically configured to:
analyze the bullet screen comment data to obtain segment topics related to the video file;
extract from the video file the video segment corresponding to each segment topic;
count the playing duration and the bullet screen density of the video segment corresponding to each segment topic;
select, from all the video segments, a candidate segment set meeting a preset condition according to the playing duration and the bullet screen density, wherein the preset condition is that the playing duration is greater than a preset duration threshold and/or the bullet screen density is greater than a preset bullet screen density threshold;
and acquire the segment annotation file corresponding to each candidate segment in the candidate segment set.
9. A terminal comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, wherein the memory is configured to store application program instructions, and the processor is configured to invoke the application program instructions to execute the bullet screen based video annotation method according to any one of claims 1-4.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the bullet screen based video annotation method according to any one of claims 1-4.
CN201710605238.0A 2017-07-21 2017-07-21 Video annotation method and terminal based on bullet screen Active CN109286850B (en)

Publications (2)

Publication Number Publication Date
CN109286850A (en) 2019-01-29
CN109286850B (en) 2020-11-13





