CN113055741A - Video abstract generation method, electronic equipment and computer readable storage medium - Google Patents

Video abstract generation method, electronic equipment and computer readable storage medium

Info

Publication number
CN113055741A
Authority
CN
China
Prior art keywords: video, bullet screen, barrage, unit, video clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011622336.3A
Other languages
Chinese (zh)
Other versions
CN113055741B (en)
Inventor
詹长静
周维
陈志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202011622336.3A priority Critical patent/CN113055741B/en
Publication of CN113055741A publication Critical patent/CN113055741A/en
Application granted granted Critical
Publication of CN113055741B publication Critical patent/CN113055741B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/488: Data services, e.g. news ticker
    • H04N21/4884: Data services, e.g. news ticker for displaying subtitles
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85: Assembly of content; Generation of multimedia applications
    • H04N21/854: Content authoring
    • H04N21/8549: Creating video summaries, e.g. movie trailer
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a video summary generation method, an electronic device, and a computer-readable storage medium. The video summary generation method includes: acquiring a source video and dividing the source video into a plurality of unit video clips; screening out several unit video clips from the plurality of unit video clips as key video clips according to the bullet screen (barrage) information corresponding to each unit video clip; and splicing all the key video clips in time order to generate a video summary corresponding to the source video. With this scheme, a personalized video summary can be generated.

Description

Video abstract generation method, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method for generating a video summary, an electronic device, and a computer-readable storage medium.
Background
With the development of internet and multimedia technology, digital video is everywhere: news, advertisements, television, movies, live webcasts, and so on. Whether for study and work or for social entertainment, users are surrounded by massive amounts of video, and it is not easy to quickly find the videos they are interested in among them. Video summarization was born out of this need. As the name suggests, a video summary is a brief representation of video content, intended to help users quickly understand the content and decide whether to watch it in detail, and also to support indexing and querying of video databases, among other uses.
Broadly speaking, video summaries fall into two types. One directly extracts key frames from the video and combines them into a new video, similar to a movie trailer. The other is video condensation (synopsis), which is more complex than the former and involves the design and implementation of a series of algorithms, such as moving-object detection, object-trajectory extraction, trajectory optimization, and generation of the condensed video.
One existing video summary generation method samples the video at fixed time points, that is, a frame or a segment is extracted at a certain time interval. Another method combines visual information in the video (such as color, shape, and motion direction) with other multimedia information (such as audio and subtitles), applies video- and image-processing techniques together with pattern-recognition methods, and finally generates a key-frame sequence or a condensed video. However, these methods ignore the requirements of the user, lack the interaction between the user and the video, and cannot reflect the video content the user actually cares about.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a video summary generation method, an electronic device, and a computer-readable storage medium that can generate a personalized video summary.
In order to solve the above problem, a first aspect of the present application provides a method for generating a video summary, where the method includes: acquiring a source video, and dividing the source video into a plurality of unit video segments; screening out several unit video clips from the plurality of unit video clips as key video clips according to the bullet screen information corresponding to each unit video clip; and splicing all the key video clips in time order to generate a video summary corresponding to the source video.
In order to solve the above problem, a second aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, where the processor is configured to execute program instructions stored in the memory to implement the video summary generation method of the first aspect.
In order to solve the above problem, a third aspect of the present application provides a computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the method for generating a video summary of the first aspect.
The invention has the following beneficial effects. Unlike the prior art, after the source video is acquired it can be divided into a plurality of unit video clips; several of these are then screened out as key video clips according to the barrage information corresponding to each unit video clip, and all key video clips are spliced in time order to generate the video summary corresponding to the source video. Because the key video clips are screened from the unit video clips according to barrage information, the interaction between users and the video is taken into account and the clips users are interested in can be captured more accurately, so the generated video summary can reflect the content users actually focus on; that is, a personalized video summary can be generated.
Drawings
FIG. 1 is a schematic flowchart of a first embodiment of the video summary generation method of the present application;
FIG. 2 is a schematic flowchart of an embodiment of step S12 in FIG. 1;
FIG. 3 is a schematic flowchart of an embodiment of step S122 in FIG. 2;
FIG. 4 is a schematic flowchart of a second embodiment of the video summary generation method of the present application;
FIG. 5 is a schematic flowchart of an embodiment of step S45 in FIG. 4;
FIG. 6 is a schematic flowchart of a third embodiment of the video summary generation method of the present application;
FIG. 7 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it. Further, the term "plurality" herein means two or more.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the video summary generation method of the present application. Specifically, the method of this embodiment may include the following steps:
step S11: a source video is acquired, and the source video is divided into a plurality of unit video clips.
A video summary is a brief representation of video content, intended to help a user quickly understand the content and decide whether to watch it in detail, so some video frames can be screened out of the source video to form the video summary corresponding to the source video. Specifically, after the source video is acquired, it may be divided into a plurality of unit video segments based on the time sequence of the source video, and the unit video segments that can be used to form the video summary are then screened out from them. In this embodiment, the source video may be divided into unit video segments according to a preset time length, where the preset time length may be set according to actual needs, for example 1 second or 2 seconds.
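Purely as an illustration (the function name and the use of durations in seconds are assumptions, not part of the disclosure), the division step could be sketched in Python as follows:

def divide_into_unit_clips(duration_s, unit_len_s=1.0):
    """Divide a source-video timeline into unit clips of a preset
    length; returns (start, end) pairs in seconds, with the last clip
    possibly shorter than unit_len_s."""
    clips = []
    start = 0.0
    while start < duration_s:
        end = min(start + unit_len_s, duration_s)
        clips.append((start, end))
        start = end
    return clips

For example, a 5.5-second video with a 1-second preset length yields (0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0), (4.0, 5.0), (5.0, 5.5).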
Step S12: and screening a plurality of unit video clips from the plurality of unit video clips as key video clips according to the bullet screen information corresponding to each unit video clip.
A barrage (bullet screen) is a comment caption that pops up while a user watches a video. The object of the comment is often a fleeting fragment of the video; the form is short and casual, and the content ranges from admiration, exclamation, joy, and confusion to teasing and complaints, recording the user's instantaneous reaction and emotion and reflecting the user's preference for the video content. The barrage can thus be regarded as an interaction between users and the video content itself: users watching the same video can send barrages whenever and wherever they watch, and discuss the video content or converse through barrages, for example "the earlier part is …, no, I think this is …", expressing disagreement with others' opinions alongside one's own view. Therefore, although the actual sending times of barrages may differ, barrages sent at the same position in the playing timeline of the source video tend to share the same theme or characteristics. Accordingly, all barrages sent within the playing time of a certain unit video clip are the barrages corresponding to that clip, and together they can reflect whether users pay attention to or are interested in the clip; if the users' degree of attention to or interest in a unit video clip is high, the clip can serve as a key video clip. Thus several unit video clips can be screened out of the plurality of unit video clips as key video clips according to the barrage information corresponding to each unit video clip.
Step S13: and splicing all the key video clips according to the time sequence to generate a video abstract corresponding to the source video.
It can be understood that after all key video segments are obtained, they can be spliced according to the time order of each key video segment in the source video, thereby generating the video summary corresponding to the source video.
With the above scheme, after the source video is acquired it can be divided into a plurality of unit video clips; several of them are then screened out as key video clips according to the barrage information corresponding to each unit video clip, and all key video clips are spliced in time order to generate the video summary corresponding to the source video. Because the key video clips are screened from the unit video clips according to barrage information, the interaction between users and the video is taken into account and the clips users are interested in can be captured more accurately, so the generated video summary can reflect the content users focus on; that is, a personalized video summary can be generated.
Further, referring to FIG. 2, FIG. 2 is a schematic flowchart of an embodiment of step S12 in FIG. 1. In an embodiment, step S12 may specifically include:
step S121: and acquiring bullet screen information corresponding to each unit video clip, and performing type division on each bullet screen information.
It can be understood that, in order to determine through all the barrages corresponding to a unit video segment whether users pay attention to or are interested in it, the instantaneous reactions and emotions recorded in each barrage need to be analyzed. Therefore, after the barrage information corresponding to each unit video segment is acquired, each piece of barrage information is classified by type, and the users' degree of attention to or interest in the content of the corresponding unit video segment can then be analyzed from the classification results.
In one embodiment, the barrage information includes the user group of the barrage, the type of the barrage, and the emotional tendency of the barrage. Step S121 may specifically include: acquiring the barrage texts of all barrage information corresponding to the unit video clip, dividing all the barrage texts according to the user group, the barrage type, and the emotional tendency of each barrage, and counting the number of barrages of each user group, of each barrage type, and of each emotional tendency.
The barrage information of a barrage can reflect the user group it comes from. On many video platforms, the color of a barrage is a prominent way to distinguish user groups; taking Bilibili as an example, a white barrage represents an ordinary user, and when an ordinary user reaches a certain level and becomes a high-level user, colored barrages can be sent. The barrage information can also reflect the barrage type; common types include, but are not limited to: comment barrages, star-chasing barrages, science-popularization barrages, translation barrages, spoiler barrages, symbol barrages, random or abusive barrages, and the like. The barrage information can further reflect the emotional tendency of the barrage, which is generally positive, negative, or neutral. Therefore, after the barrage texts of all barrage information corresponding to a unit video clip are obtained, all the barrage texts can be divided according to user group, barrage type, and emotional tendency, and the number of barrages of each user group, each barrage type, and each emotional tendency can then be counted.
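As a minimal sketch of this counting step (the field names "group", "type", and "sentiment" are illustrative assumptions for the output of upstream classifiers, not part of the disclosure):

from collections import Counter

def count_barrages(barrages):
    """Count the barrages of one unit clip per user group, per barrage
    type and per emotional tendency."""
    group_counts = Counter(b["group"] for b in barrages)
    type_counts = Counter(b["type"] for b in barrages)
    sentiment_counts = Counter(b["sentiment"] for b in barrages)
    return group_counts, type_counts, sentiment_counts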
Further, the step of acquiring all the barrage texts corresponding to the unit video clip in step S121 may specifically include: acquiring all barrage texts whose publication time falls in a second time period according to the first time period corresponding to the unit video clip, and taking the acquired barrage texts as all barrage texts corresponding to the unit video clip; the starting times of the first and second time periods differ by a preset time length, and the two periods have the same duration.
It can be understood that, considering the delay in sending a barrage, a barrage text published in the second time period is actually the user's comment on the video in the first time period, and the preset time difference between the two periods can be set according to the actual situation, for example taking into account the length of the barrage text or the state of the network. For example, a barrage at time t can be traced back 2 seconds, i.e. the barrage at time t corresponds to the video at time t-2; in addition, since a spoiler barrage describes video content at a future time, a spoiler barrage at time t can be associated with the video at time t+2. Therefore, by acquiring all barrage texts whose publication time falls in the second time period according to the first time period corresponding to a unit video clip, the barrage information corresponding to that clip can be obtained accurately.
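A sketch of this alignment, assuming each barrage carries its publication time and type, and using the 2-second lag from the example above (field names are assumptions):

def barrages_for_clip(barrages, clip_start, clip_end, lag_s=2.0):
    """Collect the barrages whose aligned video time falls inside one
    unit clip: an ordinary barrage posted at t reacts to the video at
    t - lag_s, while a spoiler barrage describes the video at t + lag_s."""
    selected = []
    for b in barrages:
        if b.get("type") == "spoiler":
            video_time = b["time"] + lag_s  # spoilers describe future content
        else:
            video_time = b["time"] - lag_s  # ordinary barrages react with a delay
        if clip_start <= video_time < clip_end:
            selected.append(b)
    return selected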
Step S122: and carrying out weighted summation on the bullet screen information based on the type of each piece of bullet screen information and the weighted coefficient of each type to obtain the key degree of the unit video clip.
It can be understood that barrage information of different types may reflect different degrees of user attention to or interest in the video, so different weighting coefficients can be set for the different types of barrage information. For a unit video clip, its criticality is obtained by a weighted summation over all its barrage information, based on the type of each piece of barrage information and the weighting coefficient of each type.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of an embodiment of step S122 in FIG. 2. In an embodiment, step S122 may specifically include:
step S1221: and calculating to obtain the user group score of the unit video clip according to the number of the barrages of the various user groups and the preset weight of the various user groups.
It will be appreciated that, besides being more visually eye-catching, colored barrages have a higher threshold than white ones, and the high-level users behind them are also the preferred audience a video creator wants to attract. Therefore, different weights can be set for barrages of different colors, that is, preset weights can be set for the different user groups; for example, when the preferred audience of a source video is high-level users, their preset weight can be set higher. Accordingly, from the number of barrages of each user group in a unit video clip and the preset weights of the user groups, the user-group score of the clip can be calculated. The user-group score characterizes the criticality of the unit video clip across user groups; the higher the score, the higher the criticality.
Step S1222: and calculating to obtain the bullet screen type score of the unit video clip according to the bullet screen quantity of each bullet screen type and the preset weight of each bullet screen type.
Common types of video content include, but are not limited to: television dramas, web series, movies, variety shows, sporting events, animation, documentaries, news, music videos, game videos, funny videos, lifestyle videos, travel videos, short videos, and the like. Common barrage types include, but are not limited to: comment barrages, star-chasing barrages, science-popularization barrages, translation barrages, spoiler barrages, symbol barrages, random or abusive barrages, and the like. For different types of videos, the occurrence of each barrage type differs slightly, as shown in the following table:
[Table: descriptions of the various barrage types and their occurrence in different video types; rendered as images in the original document]
The table above describes the various barrage types, and it can be seen that different types of videos exhibit different distributions of barrage types; the preset weights of the barrage types therefore need to be set according to the type of the source video. For example, for movie and documentary source videos, the weights of the barrage types in the table above could be set as follows:
film Recording sheet
Comment bullet screen 0.4 0.4
Star-pursuing bullet screen 0.2 0.05
Science popularization bullet screen 0.05 0.2
Translation bullet screen 0.05 0.05
Perspective bullet screen 0.1 0.1
Bullet screen for symbols 0.1 0.1
Others 0.1 0.1
It can be understood that when the source video is a movie, users mainly express comments on the film and on the starring actors through barrages, so higher weights can be set for comment barrages and star-chasing barrages; when the source video is a documentary, users mainly express comments on the documentary and popularize related knowledge, so higher weights can be set for comment barrages and science-popularization barrages. In addition, abusive barrages are generally unrelated to the video and can therefore largely be discarded. Accordingly, from the number of barrages of each barrage type in a unit video clip and the preset weights of the barrage types, the barrage-type score of the clip can be calculated. The barrage-type score characterizes the criticality of the unit video clip across barrage types; the higher the score, the higher the criticality.
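Purely as an illustration, the example weights above could be encoded as a configuration keyed by source-video type (the dictionary layout and key names are assumptions, not part of the disclosure):

BARRAGE_TYPE_WEIGHTS = {
    "movie": {
        "comment": 0.4, "star_chasing": 0.2, "science": 0.05,
        "translation": 0.05, "spoiler": 0.1, "symbol": 0.1, "other": 0.1,
    },
    "documentary": {
        "comment": 0.4, "star_chasing": 0.05, "science": 0.2,
        "translation": 0.05, "spoiler": 0.1, "symbol": 0.1, "other": 0.1,
    },
}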
Step S1223: and calculating the emotional tendency score of the unit video clip according to the number of the barrage of the various emotional tendencies and the preset weight of the various emotional tendencies.
It can be understood that a barrage is a comment sent immediately while the user watches the video and carries the user's current emotion, which may be approving or disapproving. Therefore, different weights can be set for barrages with different emotional tendencies. For example, a sentiment classifier can be trained on barrage text and used to judge the emotional tendency of the user who sent a given barrage, with the weight of a barrage with positive emotional tendency set to 1, negative to -1, and neutral to 0. Accordingly, from the number of barrages of each emotional tendency in a unit video clip and the preset weights of the tendencies, the emotional-tendency score of the clip can be calculated. The emotional-tendency score characterizes the criticality of the unit video clip across emotional tendencies; the higher the score, the higher the criticality.
Step S1224: and summing the user group score, the barrage type score and the emotional tendency score to obtain the key degree of the unit video clip.
Specifically, the source video is divided into a plurality of unit video segments with the preset time length as the unit; the barrages on each unit video segment are acquired, and the numbers of barrages of each color and of each barrage type are counted, giving the barrage counts of the various user groups and barrage types, which are weighted and summed with the preset weights of the user groups and of the barrage types to obtain the user-group score and the barrage-type score of the unit video segment. Meanwhile, a sentiment classifier is used to judge the emotional tendency of each barrage, the numbers of barrages of the various emotional tendencies are counted, and a weighted sum with the preset weights of the tendencies gives the emotional-tendency score of the unit video segment. The user-group score, the barrage-type score, and the emotional-tendency score are then added to obtain the total score S_t of the source video in each unit time:

S_t = Σ_i w_i·c_i + Σ_j w_j·e_j + Σ_k w_k·h_k

where S_t is the score of the video at time t; c_i is the number of barrages of the i-th color and w_i ∈ (0,1) is the weight of the i-th barrage color; e_j is the number of barrages of the j-th barrage type and w_j ∈ (0,1) is the weight of the j-th barrage type; h_k is the number of barrages of the k-th emotional tendency and w_k ∈ {1, 0, -1} is the weight of the k-th emotional tendency. From this formula the total score of each unit video clip of the source video is obtained, and the total score reflects the criticality of each unit video clip.
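As an illustration only, the score computation could be sketched in Python as follows; the dictionary-based representation of the counts and weights is an assumption, not part of the disclosure:

def clip_score(group_counts, type_counts, sentiment_counts,
               group_weights, type_weights, sentiment_weights):
    """Total score S_t of one unit clip: the weighted sums over
    user-group counts (c_i), barrage-type counts (e_j) and
    emotional-tendency counts (h_k), as in the formula above."""
    score = sum(group_weights.get(g, 0.0) * n for g, n in group_counts.items())
    score += sum(type_weights.get(t, 0.0) * n for t, n in type_counts.items())
    score += sum(sentiment_weights.get(s, 0) * n for s, n in sentiment_counts.items())
    return score

# e.g. sentiment_weights = {"positive": 1, "neutral": 0, "negative": -1}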
Step S123: selecting a number of the unit video clips with the highest criticality as the key video clips.
It can be understood that the length of the video summary to be generated can be set according to actual requirements, for example set actively by the creator of the source video or derived from the duration of the source video. In one embodiment, the length of the video summary is determined from the duration of the source video to be N seconds, such as 60s or 300s, and each unit video clip is 1 second long. The N unit video clips with the highest criticality can then be selected from all unit video clips as the key video clips, i.e. the video clips users are most interested in, so that the total duration of the N key video clips equals the length of the summary. The N key video clips are then spliced according to their time order in the source video to generate the video summary corresponding to the source video.
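A minimal sketch of this selection-and-splicing step (the list-of-scores representation is an assumption):

def select_key_clips(clips, scores, n):
    """Pick the n unit clips with the highest criticality and return
    them in their original time order, ready to be spliced into the
    video summary."""
    ranked = sorted(range(len(clips)), key=lambda i: scores[i], reverse=True)[:n]
    return [clips[i] for i in sorted(ranked)]  # restore chronological order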
Because the user-group score of each unit video clip is taken into account when selecting the key video clips, a video summary matching the preferences of a specific user group can be generated. Because the emotional-tendency score of each unit video clip is also taken into account, the video creator can learn user preferences from the sentiment analysis of the barrages.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a second embodiment of the video summary generation method of the present application. Specifically, the method of this embodiment may include the following steps:
step S41: a source video is acquired, and the source video is divided into a plurality of unit video clips.
Step S42: and screening a plurality of unit video clips from the plurality of unit video clips as key video clips according to the bullet screen information corresponding to each unit video clip.
Step S43: and splicing all the key video clips according to the time sequence to generate a video abstract corresponding to the source video.
In this implementation scenario, steps S41 to S43 provided in this embodiment are substantially similar to steps S11 to S13 in the previous embodiment, and are not repeated here.
Step S44: and taking the key video clips with the corresponding time periods as a video clip group.
It can be understood that after the N unit video segments with the highest criticality are obtained as the key video segments, the N key video segments can be divided according to whether their corresponding time periods are continuous, and the key video segments with continuous time periods form a video clip group. For example, if the time periods corresponding to the N key video segments in the source video are t1, t2, t3, t10, t11, t20, t26, t27, t35, t36, …, tN, then [t1, t2, t3] forms one video clip group, [t10, t11] another video clip group, and so on.
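A sketch of this grouping step, operating on the timeline indices of the key clips (the index representation is an assumption):

def group_contiguous(key_indices):
    """Group key-clip indices into runs of consecutive values,
    e.g. [1, 2, 3, 10, 11, 20] -> [[1, 2, 3], [10, 11], [20]]."""
    groups = []
    for i in sorted(key_indices):
        if groups and i == groups[-1][-1] + 1:
            groups[-1].append(i)
        else:
            groups.append([i])
    return groups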
Step S45: and obtaining candidate keywords of the video clip group according to all the bullet screen texts corresponding to the video clip group.
Each video clip group consists of unit video clips that are continuous in time, so the barrage information corresponding to all unit video clips in a group is highly correlated. Candidate keywords of the video clip group can therefore be screened out of all the barrage texts corresponding to the group, and the label of the group can then be obtained from the candidate keywords.
Further, referring to FIG. 5, FIG. 5 is a schematic flowchart of an embodiment of step S45 in FIG. 4. In an embodiment, step S45 may specifically include:
step S451: and acquiring all bullet screen texts corresponding to the video clip group, and eliminating invalid bullet screen texts to obtain valid bullet screen texts corresponding to the video clip group.
Step S452: and performing word segmentation processing on the effective bullet screen text corresponding to the video clip group to obtain a word set after word segmentation processing.
Step S453: and screening the word set subjected to word segmentation according to a preset disabled word library and a preset keyword library to obtain the candidate keywords.
It can be understood that after all the barrage texts corresponding to each video clip group are obtained, invalid barrages such as pure-symbol barrages are first removed, giving the valid barrage texts corresponding to the group. Word segmentation is then performed on all valid barrage texts of the group to obtain a segmented word set, and the word set is screened against a preset stop-word library and a preset keyword library so that stop words are removed and keywords are retained; the candidate keywords obtained this way are those from which the label of the video clip group is taken. Note that different stop-word libraries and keyword libraries need to be preset for different types of source video, since the occurrence of each barrage type differs across video types: a stop-word library generally contains words that do not occur in, or that users are not interested in for, the current type of source video, while a keyword library generally contains words users are interested in for that type.
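One possible realization of steps S451 to S453, assuming Chinese barrage text and the open-source jieba tokenizer (the word lists are placeholders, not part of the disclosure):

import jieba  # third-party Chinese word-segmentation library

def candidate_keywords(valid_barrage_texts, stop_words, keyword_lexicon):
    """Tokenize the valid barrage texts of one clip group, drop stop
    words, and always keep tokens found in the keyword lexicon."""
    candidates = []
    for text in valid_barrage_texts:
        for token in jieba.lcut(text):
            token = token.strip()
            if not token:
                continue
            if token in keyword_lexicon or token not in stop_words:
                candidates.append(token)
    return candidates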
Step S46: and calculating the importance degree of each candidate keyword by adopting a preset statistical analysis method.
Step S47: and selecting the candidate keyword with the highest importance degree as a label of the video segment group, and displaying the label of the video segment group on a progress bar of the video abstract.
A preset statistical-analysis method can be used to evaluate the importance of a word or phrase to a corpus. The method may be the TF-IDF (term frequency-inverse document frequency) algorithm, a weighting technique commonly used in information retrieval and data mining. Following the TF-IDF principle, the importance of each candidate keyword can be calculated: if a word or phrase occurs frequently within one video clip group but rarely in the other groups, it is considered to have good discriminating power and is suitable as the label of that group. Each candidate keyword is therefore scored with the TF-IDF algorithm; the higher the score, the higher the importance of the candidate keyword, so the highest-scoring candidate can be selected as the label of the video clip group and displayed on the progress bar of the video summary.
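As an illustration of the TF-IDF scoring, treating the concatenated (pre-tokenized) barrage text of each clip group as one document; scikit-learn is an assumed choice of library, not named in the disclosure:

from sklearn.feature_extraction.text import TfidfVectorizer

def label_per_group(group_texts):
    """Score words with TF-IDF across clip groups and return the
    top-scoring word of each group as its label; each element of
    group_texts is assumed to be whitespace-joined tokens."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(group_texts)  # shape: (groups, vocabulary)
    vocab = vectorizer.get_feature_names_out()
    return [vocab[row.argmax()] for row in matrix.toarray()]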
Referring to FIG. 6, FIG. 6 is a schematic flowchart of a third embodiment of the video summary generation method of the present application. Specifically, the method of this embodiment may include the following steps:
step S61: a source video is acquired, and the source video is divided into a plurality of unit video clips.
Step S62: and screening a plurality of unit video clips from the plurality of unit video clips as key video clips according to the bullet screen information corresponding to each unit video clip.
Step S63: and splicing all the key video clips according to the time sequence to generate a video abstract corresponding to the source video.
In this implementation scenario, steps S61 to S63 provided in this embodiment are substantially similar to steps S11 to S13 in the above embodiment, and are not repeated here.
Step S64: and obtaining effective barrage texts corresponding to all the key video clips, and performing duplication elimination processing to obtain the residual barrage texts.
Step S65: and performing theme clustering on the residual bullet screen texts by adopting a preset clustering algorithm, and selecting a theme containing the largest number of bullet screens as a candidate theme.
It can be understood that after all the barrage texts corresponding to each key video segment are obtained, invalid barrages such as pure-symbol barrages are removed first, giving the valid barrage texts corresponding to all key video segments. Among all barrages, the same user may send multiple barrages on the same theme, so the valid barrage texts are deduplicated to obtain the remaining barrage texts. A preset clustering algorithm is then used to cluster the remaining barrage texts by topic, yielding several topics; the number of barrages contained in each topic is counted, and the topic containing the most barrages is selected as the candidate topic. Specifically, the preset clustering algorithm may be the K-means clustering algorithm or another clustering algorithm.
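A sketch of the topic-clustering step with K-means on TF-IDF vectors (scikit-learn and the choice k=5 are assumptions; any clustering algorithm could be substituted):

from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def candidate_topic(texts, k=5):
    """Cluster the deduplicated barrage texts into k topics and return
    the indices of the barrages in the largest cluster; texts are
    assumed to be whitespace-joined tokens, with len(texts) >= k."""
    vectors = TfidfVectorizer().fit_transform(texts)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
    biggest = Counter(labels).most_common(1)[0][0]
    return [i for i, label in enumerate(labels) if label == biggest]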
Step S66: and selecting the bullet screens meeting preset conditions from the bullet screens corresponding to the candidate topics as the titles of the video abstract.
It can be understood that many barrages correspond to the candidate topic, so a suitable one must be selected from them as the title of the generated video summary; because the number of barrages under the candidate topic is large, a preset condition is needed to screen out a suitable barrage. In one embodiment, the preset condition is that the barrage has the earliest publication time and that the length of its text satisfies a preset length. Barrage texts under the same candidate topic are highly similar; a barrage selected as the title that is too short cannot cover the full semantics, while one that is too long is redundant. Therefore, the barrage with the earliest publication time and a suitable length (for example, the length can be set to 15 characters) can be selected from the barrages corresponding to the candidate topic as the title of the video summary. The video summary and the title can then be combined to generate a titled video summary.
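A minimal sketch of the title selection under the preset condition (field names and the 15-character bound follow the example above and are assumptions):

def pick_title(topic_barrages, max_len=15):
    """From the barrages of the candidate topic, return the text of the
    earliest-published barrage whose length does not exceed max_len."""
    suitable = [b for b in topic_barrages if len(b["text"]) <= max_len]
    if not suitable:
        return None
    return min(suitable, key=lambda b: b["time"])["text"]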
It can be understood that, with labels and a title extracted from barrage content, the generated video summary can reflect users' points of attention more accurately and lets users quickly grasp the video content, thereby attracting more user attention.
In addition, the video summary generated by this method can be updated synchronously as the barrages update: users' points of attention on the same source video may change over time, so the barrages they send allow their evolving interests to be tracked in real time, and a video summary matching the users' current needs can then be generated.
Referring to FIG. 7, FIG. 7 is a block diagram of an embodiment of an electronic device of the present application. The electronic device 70 includes a memory 701 and a processor 702 coupled to each other, and the processor 702 is configured to execute program instructions stored in the memory 701 to implement the steps of any of the embodiments of the video summary generation method described above. In one specific implementation scenario, the electronic device 70 may include, but is not limited to: a microcomputer or a server.
Specifically, the processor 702 is configured to control itself and the memory 701 to implement the steps of any of the above-described embodiments of the video summary generation method. The processor 702 may also be referred to as a CPU (Central Processing Unit). The processor 702 may be an integrated circuit chip having signal-processing capabilities. The processor 702 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 702 may be implemented jointly by a plurality of integrated circuit chips.
With the above scheme, after the source video is acquired, the processor 702 can divide it into a plurality of unit video clips, screen out several of them as key video clips according to the barrage information corresponding to each unit video clip, and splice all key video clips in time order to generate the video summary corresponding to the source video. Because the key video clips are screened from the unit video clips according to barrage information, the interaction between users and the video is taken into account and the clips users are interested in can be captured more accurately, so the generated summary reflects the content users focus on; that is, a personalized video summary can be generated. Because the user-group score of each unit video clip is considered when selecting key video clips, a summary matching the preferences of a specific user group can be generated; because the emotional-tendency score is also considered, the video creator can learn user preferences from sentiment analysis of the barrages. Moreover, since the labels and the title of the summary are extracted from barrage content, the generated summary reflects users' points of attention more accurately and lets users grasp the video content quickly, attracting more user attention. Finally, the summary can be updated synchronously as the barrages update: users' points of attention on the same source video may change over time, so the barrages they send allow their evolving interests to be tracked in real time and a summary matching current user needs to be generated.
Referring to FIG. 8, FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application. The computer-readable storage medium 80 stores program instructions 800 capable of being executed by a processor, the program instructions 800 being configured to implement the steps of any of the above-described embodiments of the video summary generation method.
In the embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for example, units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.

Claims (13)

1. A method for generating a video summary, the method comprising:
acquiring a source video, and dividing the source video into a plurality of unit video clips;
screening out several unit video clips from the plurality of unit video clips as key video clips according to the bullet screen information corresponding to each unit video clip;
and splicing all the key video clips in time order to generate a video summary corresponding to the source video.
2. The method according to claim 1, wherein the screening out several unit video clips as key video clips according to the bullet screen information corresponding to each unit video clip comprises:
acquiring bullet screen information corresponding to each unit video clip, and performing type division on each piece of bullet screen information;
carrying out weighted summation on the bullet screen information based on the type of each piece of bullet screen information and the weighting coefficient of each type to obtain the key degree of the unit video clip;
selecting a number of the unit video clips with the highest criticality as the key video clips.
3. The generation method of claim 2, wherein the barrage information includes a user group of the barrage, a type of the barrage, and emotional tendencies of the barrage;
the acquiring of the bullet screen information corresponding to each unit video clip, and the type division of each bullet screen information include:
and acquiring the bullet screen texts of all bullet screen information corresponding to the unit video clip, dividing all the bullet screen texts according to the user groups of the bullet screens, the bullet screen types of the bullet screens and the emotional tendencies of the bullet screens, and counting the bullet screen quantity of all the user groups, the bullet screen quantity of all the bullet screen types and the bullet screen quantity of all the emotional tendencies.
4. The method according to claim 3, wherein the obtaining all bullet screen texts corresponding to the unit video segment includes:
acquiring all barrage texts with publication time in a second time period according to a first time period corresponding to the unit video clip, and taking the acquired barrage texts as all barrage texts corresponding to the unit video clip; the starting time of the first time period and the starting time of the second time period are different by a preset time length, and the time length of the first time period is the same as that of the second time period.
5. The generation method according to claim 3, wherein the obtaining the criticality of the unit video segment by performing weighted summation on the barrage information based on the type of each piece of barrage information and the weighting coefficient of each type comprises:
calculating to obtain the user group score of the unit video clip according to the number of the barrage of each user group and the preset weight of each user group;
calculating to obtain a bullet screen type score of the unit video clip according to the bullet screen quantity of each bullet screen type and the preset weight of each bullet screen type;
calculating and obtaining the emotional tendency score of the unit video clip according to the number of the barrage of each emotional tendency and the preset weight of each emotional tendency;
and summing the user group score, the barrage type score and the emotional tendency score to obtain the key degree of the unit video clip.
6. The generation method according to claim 5, wherein the preset weight of each type of barrage is set according to the type of the source video.
7. The generation method according to claim 1, characterized in that the generation method further comprises:
taking the key video clips with continuous corresponding time periods as a video clip group;
obtaining candidate keywords of the video clip group according to all bullet screen texts corresponding to the video clip group;
calculating the importance degree of each candidate keyword by adopting a preset statistical analysis method;
and selecting the candidate keyword with the highest importance degree as a label of the video segment group, and displaying the label of the video segment group on a progress bar of the video abstract.
8. The method according to claim 7, wherein obtaining the candidate keywords of the video segment group according to all the bullet screen texts corresponding to the video segment group comprises:
acquiring all bullet screen texts corresponding to the video clip group, and eliminating invalid bullet screen texts to obtain valid bullet screen texts corresponding to the video clip group;
performing word segmentation processing on the effective bullet screen text corresponding to the video clip group to obtain a word set after word segmentation processing;
and screening the word set obtained by word segmentation according to a preset stop-word library and a preset keyword library to obtain the candidate keywords.
9. The generation method according to claim 8, wherein the preset stop-word library and keyword library are set according to the type of the source video.
10. The generation method according to claim 1, characterized in that the generation method further comprises:
obtaining effective barrage texts corresponding to all the key video clips, and performing duplication elimination processing to obtain residual barrage texts;
performing theme clustering on the residual bullet screen texts by adopting a preset clustering algorithm, and selecting a theme containing the largest number of bullet screens as a candidate theme;
and selecting the bullet screens meeting preset conditions from the bullet screens corresponding to the candidate topics as the titles of the video abstract.
11. The generation method of claim 10, wherein the preset condition includes that the publication time of the bullet screen is the earliest and the length of the bullet screen text satisfies a preset length.
12. An electronic device, comprising a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the method for generating a video summary according to any one of claims 1 to 11.
13. A computer-readable storage medium having stored thereon program instructions, which when executed by a processor, implement the method of generating a video summary according to any one of claims 1 to 11.
CN202011622336.3A 2020-12-31 2020-12-31 Video abstract generation method, electronic equipment and computer readable storage medium Active CN113055741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011622336.3A CN113055741B (en) 2020-12-31 2020-12-31 Video abstract generation method, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011622336.3A CN113055741B (en) 2020-12-31 2020-12-31 Video abstract generation method, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113055741A true CN113055741A (en) 2021-06-29
CN113055741B CN113055741B (en) 2023-05-30

Family

ID=76508922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011622336.3A Active CN113055741B (en) 2020-12-31 2020-12-31 Video abstract generation method, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113055741B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992973A (en) * 2021-09-22 2022-01-28 阿里巴巴达摩院(杭州)科技有限公司 Video abstract generation method and device, electronic equipment and storage medium
CN113987264A (en) * 2021-10-28 2022-01-28 北京中科闻歌科技股份有限公司 Video abstract generation method, device, equipment, system and medium
CN115171014A (en) * 2022-06-30 2022-10-11 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN115767204A (en) * 2022-11-10 2023-03-07 北京奇艺世纪科技有限公司 Video processing method, electronic equipment and storage medium
CN116033207A (en) * 2022-12-09 2023-04-28 北京奇艺世纪科技有限公司 Video title generation method and device, electronic equipment and readable storage medium
CN116896654A (en) * 2023-09-11 2023-10-17 腾讯科技(深圳)有限公司 Video processing method and related device

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1894964A (en) * 2003-12-18 2007-01-10 皇家飞利浦电子股份有限公司 Method and circuit for creating a multimedia summary of a stream of audiovisual data
JP2011041164A (en) * 2009-08-18 2011-02-24 Nippon Telegr & Teleph Corp <Ntt> Method and program for video summarization
US20150046371A1 (en) * 2011-04-29 2015-02-12 Cbs Interactive Inc. System and method for determining sentiment from text content
US8984405B1 (en) * 2013-06-26 2015-03-17 R3 Collaboratives, Inc. Categorized and tagged video annotation
CN104469508A (en) * 2013-09-13 2015-03-25 中国电信股份有限公司 Method, server and system for performing video positioning based on bullet screen information content
US20170052964A1 (en) * 2015-08-19 2017-02-23 International Business Machines Corporation Video clips generation system
US20170070779A1 (en) * 2015-09-08 2017-03-09 Naver Corporation Method, system, apparatus, and non-transitory computer readable recording medium for extracting and providing highlight image of video content
CN106210902A (en) * 2016-07-06 2016-12-07 华东师范大学 A kind of cameo shot clipping method based on barrage comment data
US20180077440A1 (en) * 2016-09-09 2018-03-15 Cayke, Inc. System and method of creating, analyzing, and categorizing media
CN107105318A (en) * 2017-03-21 2017-08-29 华为技术有限公司 A kind of video hotspot fragment extracting method, user equipment and server
CN107071587A (en) * 2017-04-25 2017-08-18 腾讯科技(深圳)有限公司 The acquisition methods and device of video segment
CN107197368A (en) * 2017-05-05 2017-09-22 中广热点云科技有限公司 Determine method and system of the user to multimedia content degree of concern
CN109729435A (en) * 2017-10-27 2019-05-07 优酷网络技术(北京)有限公司 The extracting method and device of video clip
CN108537139A (en) * 2018-03-20 2018-09-14 校宝在线(杭州)科技股份有限公司 A kind of Online Video wonderful analysis method based on barrage information
CN109089127A (en) * 2018-07-10 2018-12-25 武汉斗鱼网络科技有限公司 A kind of video-splicing method, apparatus, equipment and medium
CN109104642A (en) * 2018-09-26 2018-12-28 北京搜狗科技发展有限公司 A kind of video generation method and device
CN110427897A (en) * 2019-08-07 2019-11-08 北京奇艺世纪科技有限公司 Analysis method, device and the server of video highlight degree
CN112115707A (en) * 2020-09-08 2020-12-22 九江学院 Emotion dictionary construction method for bullet screen emotion analysis and based on expressions and tone

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Sun Shan et al.: "Movie summarization using bullet screen comments", Multimedia Tools and Applications *
Xian Y. et al.: "Video highlight shot extraction with time-sync comment"
Hong Qing; Wang Siyao; Zhao Qinpei; Li Jiangfeng; Rao Weixiong: "Video user group classification based on bullet screen sentiment analysis and clustering algorithms"
Deng Yang; Zhang Chenxi; Li Jiangfeng: "A video clip recommendation model based on bullet screen sentiment analysis"
Gao Xu: "Detection and analysis of video climaxes based on bullet screens"

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113992973A (en) * 2021-09-22 2022-01-28 阿里巴巴达摩院(杭州)科技有限公司 Video abstract generation method and device, electronic equipment and storage medium
CN113987264A (en) * 2021-10-28 2022-01-28 北京中科闻歌科技股份有限公司 Video abstract generation method, device, equipment, system and medium
CN115171014A (en) * 2022-06-30 2022-10-11 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN115171014B (en) * 2022-06-30 2024-02-13 腾讯科技(深圳)有限公司 Video processing method, video processing device, electronic equipment and computer readable storage medium
CN115767204A (en) * 2022-11-10 2023-03-07 北京奇艺世纪科技有限公司 Video processing method, electronic equipment and storage medium
CN116033207A (en) * 2022-12-09 2023-04-28 北京奇艺世纪科技有限公司 Video title generation method and device, electronic equipment and readable storage medium
CN116896654A (en) * 2023-09-11 2023-10-17 腾讯科技(深圳)有限公司 Video processing method and related device
CN116896654B (en) * 2023-09-11 2024-01-30 腾讯科技(深圳)有限公司 Video processing method and related device

Also Published As

Publication number Publication date
CN113055741B (en) 2023-05-30

Similar Documents

Publication Title
CN113055741B (en) Video abstract generation method, electronic equipment and computer readable storage medium
US10567329B2 (en) Methods and apparatus for inserting content into conversations in on-line and digital environments
Shardanand Social information filtering for music recommendation
US8521818B2 (en) Methods and apparatus for recognizing and acting upon user intentions expressed in on-line conversations and similar environments
KR102112973B1 (en) Estimating and displaying social interest in time-based media
Tapaswi et al. Aligning plot synopses to videos for story-based retrieval
US20110093343A1 (en) System and Method of Content Generation
Jin et al. MySpace video recommendation with map-reduce on qizmt
Irie et al. Automatic trailer generation
Christel et al. Techniques for the creation and exploration of digital video libraries
Kim et al. Toward a conceptual framework of key‐frame extraction and storyboard display for video summarization
CN111259245A (en) Work pushing method and device and storage medium
KR102183957B1 (en) Advertisemnet mediation server, and method for operating the same
Bost et al. Serial speakers: a dataset of tv series
CN112804580B (en) Video dotting method and device
CN113282789B (en) Content display method and device, electronic equipment and readable storage medium
Hong et al. Multimodal PLSA for movie genre classification
McGrady et al. Dialing for Videos: A Random Sample of YouTube
Yamamoto et al. Collaborative video scene annotation based on tag cloud
CN110309415B (en) News information generation method and device and readable storage medium of electronic equipment
CN110929035A (en) Information prediction method and system for film and television works
Over et al. Creating a web-scale video collection for research
Ariyasu et al. Message analysis algorithms and their application to social tv
EP4290394A1 (en) Enhanced natural language processing search engine for media content
KR20100111907A (en) Apparatus and method for providing advertisement using user's participating information

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant