CN115209210A - Method and device for generating information based on bullet screen - Google Patents

Method and device for generating information based on bullet screen

Info

Publication number
CN115209210A
Authority
CN
China
Prior art keywords
bullet screen; information; emotion; speed; cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210849124.1A
Other languages
Chinese (zh)
Inventor
张若凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Original Assignee
Douyin Vision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Douyin Vision Co Ltd
Priority to CN202210849124.1A
Publication of CN115209210A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N 21/2387 Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/4662 Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms
    • H04N 21/4665 Learning process for intelligent management, e.g. learning user preferences for recommending movies characterized by learning algorithms involving classification methods, e.g. Decision trees
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/4667 Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

Embodiments of this specification provide a method and a device for generating information based on a bullet screen. One embodiment of the method comprises: acquiring multiple pieces of bullet screen information for a target video, where each piece of bullet screen information includes a bullet screen sending time and bullet screen content; determining emotion information of the bullet screen content included in the pieces of bullet screen information; and generating speed control information for the target video according to the bullet screen sending times and the emotion information of the bullet screen content. This enables automatic double-speed playback of the target video that better matches the emotional pattern of users watching it, improving video playback control efficiency and user experience.

Description

Method and device for generating information based on bullet screen
Technical Field
The embodiment of the specification relates to the technical field of video playing, in particular to a method and a device for generating information based on a bullet screen.
Background
Video content is now extremely abundant, and users watch large numbers of videos. While watching, a user can send a 'bullet screen' (danmaku), a comment overlaid on the video that expresses the user's reaction at that moment. A video typically contains both highlight and non-highlight segments: highlights may prompt users to send bullet screens expressing praise, while non-highlights may prompt bullet screens expressing dissatisfaction. In short, a bullet screen reflects, to some extent, the emotion of the user watching the video at the current moment. Meanwhile, faced with massive amounts of video, users sometimes use a double-speed playback function to save time, especially when they are dissatisfied with the currently played content. At present, double-speed playback is mainly adjusted manually: during playback, the user has to repeatedly adjust the speed-up interval, the speed value, and so on by hand, which is inefficient and results in a poor user experience.
Disclosure of Invention
The embodiment of the specification describes a method and a device for generating information based on a bullet screen.
According to a first aspect, there is provided a method for generating information based on a bullet screen, including: acquiring multiple pieces of bullet screen information for a target video, where each piece of bullet screen information includes a bullet screen sending time and bullet screen content; determining emotion information of the bullet screen content included in the pieces of bullet screen information; and generating double-speed control information for the target video according to the bullet screen sending times of the pieces of bullet screen information and the emotion information of the bullet screen content.
According to a second aspect, there is provided an apparatus for generating information based on a bullet screen, including: an acquisition unit configured to acquire multiple pieces of bullet screen information for a target video, where each piece of bullet screen information includes a bullet screen sending time and bullet screen content; a determining unit configured to determine emotion information of the bullet screen content included in the pieces of bullet screen information; and a generating unit configured to generate double-speed control information for the target video according to the bullet screen sending times of the pieces of bullet screen information and the emotion information of the bullet screen content.
According to a third aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, performs the method of the first aspect.
According to a fourth aspect, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fifth aspect, there is provided an electronic device comprising a memory and a processor, where the memory stores executable code, and the processor executes the executable code to implement the method of the first aspect.
According to the method and device for generating information based on a bullet screen, multiple pieces of bullet screen information for the target video are first acquired, each including a bullet screen sending time and bullet screen content. Emotion information of the bullet screen content included in the pieces of bullet screen information is then determined. Finally, double-speed control information for the target video is generated according to the bullet screen sending times of the pieces of bullet screen information and the emotion information of the bullet screen content. Because a bullet screen reflects, to some extent, the emotion of the user watching the video at the moment the bullet screen is sent, generating speed control information for the target video from many pieces of bullet screen information enables automatic double-speed playback that better matches the emotional pattern of users watching the target video, improving video playback control efficiency and user experience.
Drawings
FIG. 1 shows a schematic diagram of one application scenario in which embodiments of the present description may be applied;
FIG. 2 illustrates a flow diagram of a method of generating information based on a bullet screen, according to one embodiment;
FIG. 3 shows a flow diagram of a method of generating information based on a bullet screen, according to another embodiment;
fig. 4A shows a schematic diagram of mapping bullet screen transmission times of pieces of bullet screen information and emotion values of bullet screen contents to points on a two-dimensional coordinate system in one example;
FIG. 4B is a schematic diagram illustrating a process for clustering the points in FIG. 4A using a density-based clustering algorithm;
FIG. 4C is a schematic diagram showing the results of clustering the points in FIG. 4A using a density-based clustering algorithm;
FIG. 4D is a schematic diagram showing the results of refining the cluster class of FIG. 4C;
FIG. 5 shows a schematic block diagram of an apparatus for generating information based on a barrage, in accordance with one embodiment;
fig. 6 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present application.
Detailed Description
It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the requested operation will require acquiring and using the user's personal information. The user can then autonomously decide, based on the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent, for example, via a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control through which the user chooses 'agree' or 'disagree' to decide whether to provide personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
The technical solutions provided in the present specification are described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. It should be noted that the embodiments and features of the embodiments in the present specification may be combined with each other without conflict.
As described above, double-speed playback currently relies mainly on manual adjustment by the user, which is inefficient and yields a poor user experience. The embodiments of this specification therefore provide a method for generating information based on a bullet screen, so as to implement automatic double-speed playback of a video. Fig. 1 shows a schematic diagram of one application scenario in which the embodiments of this specification may be applied. As shown in fig. 1, the server 101 may first obtain, from a plurality of clients 102, multiple pieces of bullet screen information historically input by different users for video A, where each piece of bullet screen information includes a bullet screen sending time and bullet screen content. Note that the bullet screen sending time is a time relative to the video timeline; for example, the sending time of a certain bullet screen may be the 15th second of video playback. The server 101 may then determine emotion information of the bullet screen content included in each piece of bullet screen information, and finally generate double-speed control information for video A according to the bullet screen sending times and the emotion information of the bullet screen content. After the double-speed control information for video A has been generated, when a client 102 requests video A from the server 101, the server 101 may send video A together with its double-speed control information to the client 102. In this way, the client 102 can use the double-speed control information to automatically control double-speed playback of video A.
With continued reference to fig. 2, fig. 2 illustrates a flowchart of a method for generating information based on a bullet screen, according to an embodiment. The method can be applied to a server. Here, the server may be a background server that provides support for video played on a terminal device used by the user. As shown in fig. 2, the method for generating information based on a bullet screen may include the following steps:
step 201, acquiring a plurality of pieces of bullet screen information aiming at a target video.
In this embodiment, the server may acquire bullet screen information input by a plurality of users for the target video history from clients installed in terminal devices of the plurality of users. Wherein, the bullet screen information can include bullet screen sending time and bullet screen content.
Generally, a user plays a video through a client installed on a terminal device and, during playback, can input bullet screen content about the currently played content. For example, if the user finds the currently played content very exciting, the user may send bullet screen content expressing praise, such as 'great', 'really good', or 'like'. If the user finds the currently played content poor, the user may send bullet screen content such as 'total garbage' or 'unwatchable' to express dissatisfaction. On this basis, the client may send the bullet screen information to the server in association with the video identifier videoID of the target video. In this way, using the videoID, the server can acquire multiple pieces of historical bullet screen information sent by many users for the target video. The videoID may be, for example, the name or number of the target video.
Here, the barrage information input by the user for the target video may be obtained based on the authorization of the user, for example, the method for obtaining and using the barrage information of the user may be referred to the descriptions of paragraphs 1 to 3 of the detailed embodiment section of the present disclosure.
Step 202, determining emotion information of the bullet screen content included in the plurality of pieces of bullet screen information.
In this embodiment, for each piece of acquired bullet screen information, the server may determine emotion information of bullet screen content included in the bullet screen information in various ways. For example, the server may determine emotional information of the barrage content using a pre-trained machine learning model. As an example, the emotional information of the barrage content may include a negative emotion and a positive emotion. At this time, the machine learning model may be a classification model for classifying the inputted bullet screen contents into a negative emotion and a positive emotion.
In some optional implementations, before step 202, the method for generating information based on a bullet screen may further include the following steps. First, a number of keywords are extracted from multiple pieces of bullet screen information of multiple videos acquired in advance. Then, the keywords are ranked by occurrence frequency from high to low, and the top preset number of keywords are selected as matching keywords. Finally, emotion information set for the matching keywords may be received.
In this implementation, multiple pieces of bullet screen content historically input by many users for multiple videos may be acquired, and keywords may be identified from them using any of various keyword extraction methods. The occurrence frequency of each keyword across the bullet screen contents is then counted, the keywords are ranked by frequency from high to low, and the top preset number of keywords (for example, the top 100) are selected as matching keywords. The matching keywords may then be sent to a terminal device used by a technician, so that the technician can set corresponding emotion information for each matching keyword displayed on the device. In this way, the matching keywords are determined from bullet screen content historically input by users for multiple videos, and emotion information is set for them, so that emotion information can subsequently be determined for bullet screen content based on the matching keywords.
Optionally, based on the foregoing implementation, the foregoing step 202 may specifically be performed as follows: and matching words contained in the bullet screen content of each bullet screen information with the matching keywords, and determining emotion information of the bullet screen content of each bullet screen information according to the matching result.
For example, for the bullet screen content of each piece of bullet screen information, a word segmenter may be used to split the content into words, and the resulting words are then matched against the matching keywords. Finally, the emotion information of the bullet screen content is determined from the matching result; for example, the emotion information corresponding to a matching keyword that matches one of the segmented words may be taken as the emotion information of the bullet screen content.
And step 203, generating double-speed control information aiming at the target video according to the bullet screen sending time of the plurality of pieces of bullet screen information and the emotion information of bullet screen contents.
In this embodiment, the server may generate double-speed control information for the target video according to the bullet screen transmission time of the plurality of pieces of bullet screen information and the emotion information of the bullet screen content.
Practical analysis shows that bullet screens sent by many users for a given video tend to concentrate in one or a few video sub-segments rather than being evenly distributed across the whole video, and that poor content in a sub-segment often triggers users to send bullet screens with negative emotion. On this basis, the server can generate double-speed control information for the video according to the bullet screen sending times of the pieces of bullet screen information and the emotion information of the bullet screen content. For example, one or more target video sub-segments may first be determined from the bullet screen sending times; for instance, a video sub-segment in which the number of bullet screens per unit time exceeds a certain threshold may be taken as a target video sub-segment. The emotion information of the bullet screens corresponding to each target video sub-segment is then examined, and the speed value for that sub-segment is determined accordingly. For example, when the emotion information of the vast majority (e.g., more than 90%) of the bullet screens corresponding to a target video sub-segment is negative, most users are dissatisfied with that sub-segment; in this case, the sub-segment can be played at an increased speed, for example with a speed value of 2x or 3x. The target video sub-segment and the speed value can form one piece of double-speed control information, which controls the client to play that sub-segment at the given speed value.
In some optional implementations, the method for generating information based on a bullet screen may further include the following step: in response to a video acquisition request sent by a client for the target video, sending the target video and the speed control information to the client, so that the client can control playback of the target video using the speed control information. This implementation enables automatic speed control of the target video.
Referring back to the above process, in this embodiment of the specification, multiple pieces of bullet screen information for a target video are first acquired, each including a bullet screen sending time and bullet screen content. Emotion information of the bullet screen content included in the pieces of bullet screen information is then determined. Finally, double-speed control information for the target video is generated according to the bullet screen sending times and the emotion information of the bullet screen content. Because a bullet screen reflects, to some extent, the emotion of the user watching the video at the moment it is sent, generating speed control information from many pieces of bullet screen information enables automatic double-speed playback that better matches the emotional pattern of users watching the target video, improving video playback control efficiency and user experience.
With further reference to fig. 3, fig. 3 shows a flow diagram of a method of generating information based on a bullet screen, according to another embodiment. The flow of the method for generating information based on the bullet screen comprises the following steps:
step 301, acquiring multiple pieces of barrage information for a target video.
In this embodiment, the server may acquire bullet screen information input by a plurality of users for the target video history from clients installed in terminal devices of the plurality of users. The bullet screen information may include bullet screen sending time and bullet screen content.
Step 302, determining emotion information of the bullet screen content included in the plurality of pieces of bullet screen information.
In this embodiment, for each piece of acquired bullet screen information, the server may determine emotion information of bullet screen content included in the bullet screen information in various ways.
For example, the server may first extract keywords from multiple pieces of bullet screen information of multiple videos acquired in advance, rank them by occurrence frequency from high to low, and select the top preset number of keywords as matching keywords. The server may also receive emotion information set by a technician for the matching keywords.
In this example, the emotion information of the bullet screen content may include negative emotions and positive emotions. Furthermore, negative emotion can be divided into at least one negative emotion value according to emotional intensity, and positive emotion into at least one positive emotion value; that is, the emotion information includes at least one negative emotion value and at least one positive emotion value. For example, the emotion information may be divided into 6 levels of emotion values: -3, -2, -1, 1, 2, and 3, where -3 means very negative, -2 relatively negative, -1 mildly negative, 1 mildly positive, 2 relatively positive, and 3 very positive. For example, very positive keywords such as 'keep going', 'you can do it', and 'we are with you' may be assigned an emotion value of 3, while very negative keywords such as 'boring', 'rotten', 'garbage', and 'filler' may be assigned -3. It should be understood that the number of levels, the meaning of each emotion value, and so on in this example are merely illustrative; in practice, different numbers of levels and different emotion values may be used according to actual needs, and no limitation is intended here.
Therefore, the server can match words contained in the bullet screen content of each piece of bullet screen information with the keywords for matching, and the emotion value of the bullet screen content of each piece of bullet screen information is determined according to the matching result.
And 303, clustering the plurality of bullet screen information according to the bullet screen sending time of the plurality of bullet screen information and the emotion information of the bullet screen content to obtain at least one cluster.
In this embodiment, the server may use any of various clustering algorithms, for example partition-based, hierarchy-based, or density-based clustering algorithms, to cluster the pieces of bullet screen information according to their bullet screen sending times and the emotion information of their content, obtaining at least one cluster. Each cluster may include at least one piece of bullet screen information. A cluster produced by clustering is a collection of data objects (here, pieces of bullet screen information) that are similar to the other objects in the same cluster and dissimilar to objects in other clusters. As noted above, different pieces of bullet screen information may differ in sending time and in the emotion value of their content; by clustering them, the resulting clusters more accurately reflect how large numbers of users send bullet screens for the target video, including the start and end times of such bursts of bullet screen activity.
In some optional implementations, the step 303 may be implemented as follows:
s1, clustering points of a two-dimensional coordinate system mapped by bullet screen sending time of a plurality of pieces of bullet screen information and emotion information of bullet screen contents by using a preset radius and a density-based clustering algorithm to obtain at least one initial cluster.
In this implementation manner, a two-dimensional coordinate system may be established with the video duration of the target video as the abscissa and the emotion value as the ordinate, and the bullet screen sending time and the emotion value of the bullet screen content of each piece of bullet screen information may be mapped to a point on the two-dimensional coordinate system. As shown in fig. 4A, fig. 4A shows a schematic diagram in which bullet screen sending times of pieces of bullet screen information and emotion values of bullet screen contents are mapped to points on a two-dimensional coordinate system in one example. In this example, the emotion values include a plurality of values such as -3, -2, -1, 1, 2, and 3. It is to be understood that the number of points, the emotion values, and the like shown in fig. 4A are merely illustrative and do not limit the number of points, the emotion values, and the like. In practice, a different number of points, different emotion values, and the like may be set according to the actual scene.
The radius may be preset inside the server; for example, a technician may set it manually based on prior knowledge. In this way, after mapping the bullet screen sending time and the emotion value of the bullet screen content of each piece of bullet screen information to points on the two-dimensional coordinate system, the server may cluster the points with the density-based clustering algorithm using the preset radius, thereby obtaining at least one initial cluster. For example, the clustering may use a density-based clustering algorithm such as DBSCAN (Density-Based Spatial Clustering of Applications with Noise). As an example, as shown in fig. 4B and 4C, fig. 4B shows a schematic diagram of the process of clustering the points in fig. 4A using a density-based clustering algorithm, and fig. 4C shows a schematic diagram of the result of that clustering. As can be seen from fig. 4C, after the points in fig. 4A are clustered using a density-based clustering algorithm, 3 clusters can be obtained. It is understood that fig. 4B and 4C are only schematic and do not limit the clustering process, the clustering result, and the like; in practice, different clustering results may be obtained by setting different parameters of the clustering algorithm, such as the radius and the density threshold.
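A DBSCAN-style pass over (sending time, emotion value) points can be sketched with the standard library alone. This is a simplified, hypothetical sketch: the radius, the minimum-neighbor count, and the sample points are assumptions, and a production system would more likely use an existing implementation such as scikit-learn's `DBSCAN`.

```python
import math

def dbscan(points, radius, min_points=2):
    """Cluster 2-D points density-wise: a point with at least
    `min_points` neighbors within `radius` (itself included) is a core
    point; clusters grow from core points. Returns lists of indices;
    points in no cluster are noise."""
    n = len(points)
    visited = [False] * n
    labels = [None] * n  # cluster id, or None for noise

    def neighbors(i):
        return [j for j in range(n)
                if math.dist(points[i], points[j]) <= radius]

    cluster_id = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = neighbors(i)
        if len(seeds) < min_points:
            continue  # noise (may still be claimed by a later cluster)
        labels[i] = cluster_id
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                nb = neighbors(j)
                if len(nb) >= min_points:
                    queue.extend(nb)  # j is also a core point: expand
            if labels[j] is None:
                labels[j] = cluster_id
        cluster_id += 1
    return [[i for i in range(n) if labels[i] == c]
            for c in range(cluster_id)]

# Each point is (bullet screen sending time in seconds, emotion value).
points = [(10, -2), (11, -3), (12, -2), (60, 2), (61, 3), (200, 1)]
clusters = dbscan(points, radius=3)
print(clusters)  # two dense groups; the isolated point at t=200 is noise
```

Here a radius of 3 seconds yields two initial clusters (around t≈10–12 and t≈60–61), mirroring the kind of grouping shown schematically in fig. 4C.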
Optionally, the preset radius may be determined based on the time interval at which users send bullet screens. In practice, users send bullet screens with a certain regularity; for example, a user usually sends a bullet screen within the first few seconds or the last few seconds of the segment being commented on. For a highlight segment, a bullet screen such as "front high energy" may be sent in the first few seconds, and a bullet screen such as "real power" may be sent in the last few seconds. Therefore, the radius may be preset according to the time interval at which most users send bullet screens. For example, if statistics show that the average time interval at which most users send bullet screens is 3 seconds, 3 seconds may be set as the radius.
Because the result of the clustering algorithm depends on the radius, in order to reduce interference, the initial clusters obtained by the initial clustering may be refined so that the points in the resulting clusters are more compact, and the pattern with which a large number of users send bullet screens can thus be expressed more accurately.
And S2, reducing the radius according to a preset rule, clustering each initial cluster again until a preset condition is met, and stopping clustering to obtain at least one cluster.
In this implementation, a rule for reducing the radius may be preset; for example, it may be specified that the radius is reduced by a preset duration each time, or that the radius is reduced by a minimum granularity each time, for example, by 1 second each time, and so on. Each initial cluster is clustered again using the reduced radius; after the clustering is finished, whether a preset condition is met is judged, and if the preset condition is met, clustering stops and at least one cluster is obtained; if it is not met, the radius is reduced again according to the preset rule and the initial cluster is clustered again, until the preset condition is met. For example, taking the clustering result shown in fig. 4C as the initial clusters, refining the initial clusters yields the result shown in fig. 4D. FIG. 4D shows a schematic representation of the result obtained by refining the clusters of FIG. 4C. As can be seen by comparing fig. 4D with fig. 4C, the points included in each cluster obtained after refinement are more compact. It is understood that fig. 4D is only schematic and does not limit the points and the like included in the refined clusters. Through this implementation, clusters whose points are more compact can be obtained, and these clusters express more accurately the pattern with which users send bullet screens, so that the speed control information generated based on these clusters is more accurate and better meets users' needs.
Optionally, the preset condition for determining whether to continue clustering may be set according to actual needs. For example, the preset condition may be that the radius is smaller than a preset value; that is, when the reduced radius is smaller than the preset value, that radius is no longer used for re-clustering, and clustering ends. For another example, the preset condition may be that the ratio of the number of points included in the cluster obtained after re-clustering to the number of points included in the initial cluster is smaller than a preset ratio. The preset ratio can be set according to actual needs, for example, to 70%, 80%, and the like. For example, assuming that the initial cluster C1 is clustered again to obtain the cluster C2, and the ratio of the number of points included in the cluster C2 to the number of points included in the initial cluster C1 is smaller than the preset ratio, clustering stops.
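The shrink-and-re-cluster loop of step S2, together with both stopping conditions, can be sketched as below. This is an illustrative simplification along the time axis only: the greedy gap-chaining helper, the minimum radius, the retention ratio, and the step size are all assumptions, not the patent's parameters.

```python
def largest_group(times, radius):
    """Greedy 1-D grouping: chain together sorted times whose gap to
    the previous time is <= radius, then return the largest chain."""
    ordered = sorted(times)
    groups, current = [], [ordered[0]]
    for t in ordered[1:]:
        if t - current[-1] <= radius:
            current.append(t)
        else:
            groups.append(current)
            current = [t]
    groups.append(current)
    return max(groups, key=len)

def refine(times, radius, min_radius=1.0, min_ratio=0.6, step=1.0):
    """Shrink the radius by `step` per round and keep the densest
    sub-group, stopping when the radius would fall below `min_radius`
    or the refined cluster retains less than `min_ratio` of the points."""
    cluster = sorted(times)
    while radius - step >= min_radius:
        radius -= step
        refined = largest_group(cluster, radius)
        if len(refined) / len(cluster) < min_ratio:
            break  # re-clustering discarded too many points: stop
        cluster = refined
    return cluster

# Bullet screen sending times (seconds) of one initial cluster.
print(refine([10, 11, 12, 13, 17, 18], radius=4))
```

Starting from radius 4, the first shrink to 3 drops the loose tail at t=17–18 and keeps the compact core 10–13, analogous to the tightening from fig. 4C to fig. 4D.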
And 304, generating double-speed control information aiming at the target video according to the bullet screen sending time of the bullet screen information and the emotion information of the bullet screen content included in each class cluster.
In this embodiment, the server may generate double-speed control information for playing the target video according to the at least one cluster obtained by clustering. For example, since the pieces of bullet screen information included in each cluster are similar to one another, each cluster can be analyzed, so that the double-speed control information is obtained by analyzing a group of bullet screen information with similar patterns. For example, the bullet screen information included in each cluster may be statistically analyzed, and the double-speed control information generated according to the statistical analysis result. For example, double-speed values can be preset for different emotion information; thus, for each cluster obtained by clustering, the earliest and latest bullet screen sending times in the cluster can be determined, the time interval between them taken as the double-speed interval of the double-speed control information, the dominant emotion information in the cluster then determined according to the statistical analysis of the emotion information of the bullet screen content in the cluster, and the double-speed value within the double-speed interval determined according to that dominant emotion information.
In some optional implementations, the step 304 may be further implemented as follows:
step 1), determining the starting time and the ending time of the double-speed operation aiming at the target video according to the bullet screen sending time of the bullet screen information included in each class cluster.
In this implementation manner, for each cluster obtained by clustering, the start time of a double-speed operation may be determined according to the bullet screen sending time of at least one piece of bullet screen information included in the cluster. For example, the earliest bullet screen transmission time in the cluster may be determined as the start time of a double speed operation, and the latest bullet screen transmission time in the cluster may be determined as the end time of the double speed operation. Thus, the start time and the end time of one double-speed operation may be determined for each cluster, and the start time and the end time of the double-speed operation may constitute a double-speed section of the double-speed operation.
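Deriving a double-speed interval from a cluster is a simple min/max over its sending times; a hypothetical helper (the function and variable names are illustrative) could look like:

```python
def speed_interval(cluster_times):
    """Given the bullet screen sending times (seconds) of one cluster,
    return (start_time, end_time) of the double-speed operation:
    earliest time as the start, latest time as the end."""
    return min(cluster_times), max(cluster_times)

# Sending times of the bullet screens in one cluster.
print(speed_interval([42, 37, 40, 45]))  # prints (37, 45)
```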
And 2) determining a double-speed value of double-speed operation aiming at the target video based on the emotion information of the bullet screen content of the bullet screen information contained in each class cluster.
Practice shows that bullet screen content reflects, to a certain extent, users' emotion when watching the video, and whether to play the video at double speed can be judged based on that emotion. For example, when the emotion information of the bullet screen content corresponding to a certain video sub-segment is mostly negative (e.g., more than 90%), most users are dissatisfied with the video sub-segment. In this case, the video sub-segment can be played at double speed, for example, with a double-speed value of 3×, 2×, and the like. For another example, when the emotion information of the bullet screen content corresponding to a certain video sub-segment is mostly positive, most users are satisfied with the video sub-segment, and in this case the video sub-segment may not be played at double speed, that is, it is played at normal speed (1× speed), or the double-speed value is set to 1.
Based on the above, the emotion information of the bullet screen content of the bullet screen information contained in each cluster can be counted, and the speed doubling value of the speed doubling operation corresponding to the cluster is set according to the counting result. The speed-doubling interval and the speed-doubling value of the speed-doubling operation corresponding to the cluster can form speed-doubling control information aiming at the target video.
Optionally, the emotion information of the bullet screen content may include at least one negative emotion value and at least one positive emotion value. For example, the emotion information may be divided into 6 levels of emotion values, which are -3, -2, -1, 1, 2, and 3, respectively. Here, -3 means very negative, -2 means relatively negative, -1 means generally negative, 1 means generally positive, 2 means relatively positive, and 3 means very positive. In this case, the above step 2) can be implemented as follows:
for example, it may be determined whether a ratio N1/N2 of the number N1 of bullet screen information including a negative emotion value to the number N2 of bullet screen information including a positive emotion value in one cluster is greater than a preset value. The preset value can be set according to actual needs, for example, to 2, 3, and the like. If the ratio N1/N2 is larger than the preset value, the fact that most users are not satisfied with the video subsegment corresponding to the cluster is indicated, and at the moment, the speed doubling value of the speed doubling operation corresponding to the cluster can be determined according to the negative emotion value of the bullet screen information in the cluster. Generally, the less satisfactory a user is to a piece of video, the more desirable the speed at which the piece of video is played is. Based on the above, the occupation ratio of various negative emotion values in the cluster can be counted, and the speed doubling value of the speed doubling operation can be determined according to the occupation ratio of the various negative emotion values. For example, when the negative emotion value of the cluster is the largest, i.e., the ratio of-3 is the largest, which indicates that many users are very negative to the video sub-segment corresponding to the cluster, the speed value for the speed doubling operation may be set to be larger, e.g., to be 3 speed. When the negative emotion value of the cluster is the largest, the negative emotion value-2 is the largest, which indicates that many users are relatively negative to the video subsections corresponding to the cluster, and at this time, the speed value of the speed doubling operation can be set to be relatively large, for example, to be 2 speed. 
When the negative emotion value of the cluster is the largest, which means that many users generally have a negative impact on the video sub-segment corresponding to the cluster, the speed value of the speed doubling operation may be set to be slightly larger, for example, to be 1.5 times speed.
For another example, it may also be determined whether a ratio N2/N1 of the number N2 of pieces of bullet screen information containing a positive emotion value to the number N1 of pieces containing a negative emotion value in one cluster is greater than the preset value. If the ratio N2/N1 is greater than the preset value, most users are satisfied with the video sub-segment corresponding to the cluster, and it is not necessary to play the video sub-segment at double speed. However, the playing speed of the bullet screen content within the double-speed interval of the double-speed operation can be determined according to the positive emotion values of the bullet screen information in the cluster; for example, the playing speed of the bullet screen content can be appropriately slowed down so that users can read it clearly. For example, when 3 accounts for the largest proportion of the positive emotion values in the cluster, indicating that many users are very positive toward the corresponding video sub-segment, the playing speed of the bullet screen content within the double-speed interval may be set to a small value, for example, 0.3 times the original speed. When 2 accounts for the largest proportion, indicating that many users are relatively positive toward the corresponding video sub-segment, the playing speed may be set to a relatively small value, for example, 0.5× speed.
When 1 accounts for the largest proportion, indicating that many users are generally positive toward the corresponding video sub-segment, the playing speed may be set to a slightly smaller value, for example, 0.75× speed.
It can be understood that, when the number N1 of pieces of bullet screen information containing a negative emotion value in a cluster does not differ much from the number N2 of pieces containing a positive emotion value, users' emotion toward the video sub-segment corresponding to the cluster is not clear-cut: some are satisfied with the video segment, some are not, and the two groups do not differ significantly in size. In this case, no double-speed operation is performed on the video sub-segment corresponding to the cluster, so as to avoid annoying users.
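The decision rules above can be consolidated into one sketch. The speed tables and the ratio threshold follow the examples in the text, but this is a hedged illustration under assumed names and values, not a definitive implementation of the patent's step 2).

```python
from collections import Counter

# Illustrative mappings taken from the examples above.
NEG_SPEED = {-3: 3.0, -2: 2.0, -1: 1.5}   # dominant negative level -> video speed
POS_BARRAGE = {3: 0.3, 2: 0.5, 1: 0.75}   # dominant positive level -> bullet screen speed

def speed_control(emotions, ratio_threshold=2.0):
    """Given the emotion values of one cluster's bullet screens, return
    ('video', speed) when negatives dominate, ('barrage', speed) when
    positives dominate, or ('none', 1.0) when emotion is mixed."""
    neg = [e for e in emotions if e < 0]
    pos = [e for e in emotions if e > 0]
    counts = Counter(emotions)
    if len(neg) > ratio_threshold * len(pos):
        # Mostly negative: speed up the video sub-segment.
        dominant = max(NEG_SPEED, key=lambda v: counts[v])
        return ("video", NEG_SPEED[dominant])
    if len(pos) > ratio_threshold * len(neg):
        # Mostly positive: keep the video at 1x, slow the bullet screens.
        dominant = max(POS_BARRAGE, key=lambda v: counts[v])
        return ("barrage", POS_BARRAGE[dominant])
    return ("none", 1.0)  # emotion unclear: leave playback untouched

print(speed_control([-3, -3, -2, -3, -1, 1, 2]))  # prints ('video', 3.0)
```

With five negative values against two positive ones, N1/N2 exceeds the threshold of 2, and -3 is the dominant negative level, so the sub-segment gets a 3× video speed.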
According to an embodiment of another aspect, an apparatus for generating information based on a bullet screen is provided. The apparatus for generating information based on the bullet screen may be deployed in a server.
Fig. 5 shows a schematic block diagram of an apparatus for generating information based on a bullet screen according to an embodiment. As shown in fig. 5, the apparatus 500 for generating information based on a bullet screen includes: an obtaining unit 501 configured to obtain multiple pieces of bullet screen information for a target video, where the bullet screen information includes bullet screen sending time and bullet screen content; a determining unit 502 configured to determine emotion information of the bullet screen content included in the plurality of pieces of bullet screen information; a generating unit 503 configured to generate double-speed control information for the target video according to the bullet screen transmission time of the plurality of pieces of bullet screen information and emotion information of bullet screen content.
In some optional implementations of this embodiment, the generating unit 503 includes: a clustering unit (not shown in the figure), configured to cluster the plurality of pieces of bullet screen information according to the bullet screen sending time of the plurality of pieces of bullet screen information and emotion information of bullet screen content, so as to obtain at least one cluster; and an information generating unit (not shown in the figure) configured to generate double-speed control information for the target video according to the bullet screen transmission time of the bullet screen information and the emotion information of the bullet screen content included in each of the above-mentioned clusters.
In some optional implementation manners of this embodiment, the clustering unit is further configured to cluster, by using a density-based clustering algorithm, points of a two-dimensional coordinate system mapped by bullet screen sending times of the multiple pieces of bullet screen information and emotion information of bullet screen content, with a preset radius, so as to obtain at least one initial cluster; and reducing the radius according to a preset rule, clustering each initial cluster again until a preset condition is met, and stopping clustering to obtain at least one cluster.
In some optional implementations of this embodiment, the preset radius is determined based on a time interval for the user to send the bullet screen.
In some optional implementations of the embodiment, the preset condition includes one of: the radius is smaller than a preset value; and the ratio of the number of the points in the cluster obtained after clustering again to the number of the points in the initial cluster is smaller than the preset ratio.
In some optional implementations of this embodiment, the information generating unit includes: a time determining module configured to determine a start time and an end time of a double speed operation for the target video according to a bullet screen transmission time of bullet screen information included in each of the class clusters; and a double-speed value determining module configured to determine a double-speed value of a double-speed operation for the target video based on emotion information of the bullet screen content of the bullet screen information contained in each of the clusters.
In some optional implementations of this embodiment, the sentiment information includes at least one negative sentiment value and at least one positive sentiment value; and the double speed value determination module is further configured to: in response to the fact that the ratio of the number of the bullet screen information containing the negative emotion value to the number of the bullet screen information containing the positive emotion value in the cluster is larger than a preset value, determining a speed doubling value of speed doubling operation according to the negative emotion value of the bullet screen information in the cluster; and in response to the fact that the ratio of the number of the bullet screen information containing the positive emotion value to the number of the bullet screen information containing the negative emotion value in the cluster is larger than the preset value, determining the playing speed of the bullet screen content in a double-speed interval of the double-speed operation according to the positive emotion value of the bullet screen information in the cluster, wherein the double-speed interval of the double-speed operation is an interval formed by the starting time and the ending time of the double-speed operation.
In some optional implementations of this embodiment, the apparatus 500 further includes: an extracting unit (not shown in the figure) configured to extract a plurality of keywords from a plurality of pieces of bullet screen information of a plurality of videos acquired in advance; a sorting unit (not shown in the figure) configured to sort the plurality of keywords in an order from high to low in occurrence frequency, and select the keyword ranked in the top preset position as a matching keyword; a receiving unit (not shown in the figure) configured to receive emotion information set for the above-described keyword for matching.
In some optional implementation manners of this embodiment, the determining unit 502 is further configured to match words included in the bullet screen content of each piece of bullet screen information with the matching keywords, and determine emotion information of the bullet screen content of each piece of bullet screen information according to a matching result.
In some optional implementations of this embodiment, the apparatus 500 further includes: a sending unit (not shown in the figure) configured to send the target video and the multiple speed control information to the client in response to a video acquisition request sent by the client for the target video, so that the client can control the playing of the target video by using the multiple speed control information.
The above device embodiments correspond to the method embodiments; for specific descriptions, reference may be made to the descriptions of the method embodiments, which are not repeated here. The device embodiments are obtained based on the corresponding method embodiments and have the same technical effects as those method embodiments.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in fig. 2.
According to another embodiment of the present invention, there is also provided an electronic device, including a memory and a processor, wherein the memory stores executable codes, and the processor executes the executable codes to implement the method described in fig. 2.
The foregoing describes certain embodiments of the present specification, and other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily have to be in the particular order shown or in sequential order to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Referring now to FIG. 6, a block diagram of an electronic device (e.g., the server in FIG. 1) 600 suitable for implementing embodiments of the present application is shown. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, etc.; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. When executed by the processing device 601, the computer program performs the above-described functions defined in the methods of the embodiments of the present application.
The embodiments of the present specification also provide a computer-readable storage medium on which a computer program is stored, which, when executed in a computer, causes the computer to perform the method provided in the specification.
It should be noted that the computer readable medium described in the embodiments of the present specification may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the present description, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present description, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the server, cause the electronic device to: acquiring a plurality of pieces of bullet screen information aiming at a target video, wherein the bullet screen information comprises bullet screen sending time and bullet screen contents; determining emotion information of the bullet screen content included in the bullet screen information; and generating double-speed control information aiming at the target video according to the bullet screen sending time of the plurality of pieces of bullet screen information and the emotion information of bullet screen contents.
Computer program code for carrying out operations for embodiments of the present specification may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the storage medium and the computing device embodiments, since they are substantially similar to the method embodiments, they are described relatively simply, and reference may be made to some descriptions of the method embodiments for relevant points.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in connection with the embodiments of the invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments further describe the objects, technical solutions and advantages of the embodiments of the present invention in detail. It should be understood that the above description is only exemplary of the embodiments of the present invention, and is not intended to limit the scope of the present invention, and any modification, equivalent replacement, or improvement made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (13)

1. A method for generating information based on a bullet screen, comprising:
acquiring a plurality of pieces of bullet screen information for a target video, wherein each piece of bullet screen information comprises a bullet screen sending time and bullet screen content;
determining emotion information of the bullet screen content included in the plurality of pieces of bullet screen information; and
generating double-speed control information for the target video according to the bullet screen sending times of the plurality of pieces of bullet screen information and the emotion information of the bullet screen content.
2. The method of claim 1, wherein generating the double-speed control information for the target video according to the bullet screen sending times of the plurality of pieces of bullet screen information and the emotion information of the bullet screen content comprises:
clustering the plurality of pieces of bullet screen information according to their bullet screen sending times and the emotion information of their bullet screen content to obtain at least one cluster; and
generating the double-speed control information for the target video according to the bullet screen sending times and the emotion information of the bullet screen content of the bullet screen information included in each cluster.
3. The method of claim 2, wherein clustering the plurality of pieces of bullet screen information according to their bullet screen sending times and the emotion information of their bullet screen content to obtain at least one cluster comprises:
clustering, with a preset radius and a density-based clustering algorithm, the points in a two-dimensional coordinate system to which the bullet screen sending times and the emotion information of the bullet screen content of the plurality of pieces of bullet screen information are mapped, to obtain at least one initial cluster; and
reducing the radius according to a preset rule and clustering each initial cluster again, stopping when a preset condition is met, to obtain the at least one cluster.
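The density-based pass of claim 3 can be sketched as follows. This is a minimal, hand-rolled clustering in the spirit of DBSCAN, not the patented implementation; the point coordinates, `eps` (the preset radius), and `min_pts` values are illustrative assumptions, and the claim's radius-reduction step would re-run the same pass on each initial cluster with a smaller `eps` until the preset condition holds:

```python
def _neighbors(points, i, eps):
    # Indices of all points within distance eps of points[i] (including i).
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

def dbscan(points, eps, min_pts=2):
    """Label each (send_time, emotion_value) point with a cluster id,
    or -1 for noise. A minimal DBSCAN-style density clustering pass."""
    labels = [None] * len(points)          # None = not yet visited
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = _neighbors(points, i, eps)
        if len(seeds) < min_pts:
            labels[i] = -1                 # too sparse: noise
            continue
        labels[i] = cluster_id
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:            # noise becomes a border point
                labels[j] = cluster_id
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            nb = _neighbors(points, j, eps)
            if len(nb) >= min_pts:         # core point: keep expanding
                seeds.extend(nb)
        cluster_id += 1
    return labels

# Three early barrages and two late ones separate into two clusters at eps = 1.0.
pts = [(1.0, 0.2), (1.5, 0.1), (2.0, 0.3), (10.0, -0.4), (10.5, -0.5)]
print(dbscan(pts, eps=1.0))  # [0, 0, 0, 1, 1]
```

Mapping send time and emotion into one coordinate system lets a single radius capture bursts of barrages that are close both in time and in sentiment.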
4. The method of claim 3, wherein the preset radius is determined based on the time interval between bullet screens sent by a user.
5. The method of claim 2, wherein generating the double-speed control information for the target video according to the bullet screen sending times and the emotion information of the bullet screen content of the bullet screen information included in each cluster comprises:
determining a start time and an end time of a double-speed operation for the target video according to the bullet screen sending times of the bullet screen information included in each cluster; and
determining a double-speed value of the double-speed operation for the target video based on the emotion information of the bullet screen content of the bullet screen information included in each cluster.
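For the interval step of claim 5, the double-speed operation for a cluster is naturally bounded by the earliest and latest send times it contains. A sketch, where the `send_time` field name is a hypothetical representation of a piece of bullet screen information:

```python
def speed_interval(cluster):
    # Start and end time of the double-speed operation for one cluster:
    # the earliest and latest bullet screen sending times it contains.
    times = [b["send_time"] for b in cluster]
    return min(times), max(times)

# Barrages sent at 12 s, 30 s, and 18 s of the video give the interval (12, 30).
print(speed_interval([{"send_time": 12}, {"send_time": 30}, {"send_time": 18}]))
```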
6. The method of claim 5, wherein the emotion information comprises at least one negative emotion value and at least one positive emotion value; and
determining the double-speed value of the double-speed operation for the target video based on the emotion information of the bullet screen content of the bullet screen information included in each cluster comprises:
in response to determining that the ratio of the number of pieces of bullet screen information containing a negative emotion value to the number of pieces containing a positive emotion value in the cluster is greater than a preset value, determining the double-speed value of the double-speed operation according to the negative emotion values of the bullet screen information in the cluster; and
in response to determining that the ratio of the number of pieces of bullet screen information containing a positive emotion value to the number of pieces containing a negative emotion value in the cluster is greater than the preset value, determining, according to the positive emotion values of the bullet screen information in the cluster, the playing speed of the bullet screen content in the double-speed interval of the double-speed operation, wherein the double-speed interval is the interval formed by the start time and the end time of the double-speed operation.
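The branching of claim 6 can be sketched as follows. The concrete mappings from emotion values to playback speed (negative sentiment fast-forwards up to 2x, positive sentiment slows down to no less than 0.5x), the `emotion` field name, and the default ratio threshold are illustrative assumptions, not taken from the patent:

```python
def speed_value(cluster, ratio_threshold=2.0):
    """Pick a playback speed for one cluster, following claim 6's branches."""
    neg = [b["emotion"] for b in cluster if b["emotion"] < 0]
    pos = [b["emotion"] for b in cluster if b["emotion"] > 0]
    if neg and len(neg) > ratio_threshold * max(len(pos), 1):
        # Predominantly negative barrage: speed up; stronger negativity -> faster.
        return round(1.0 + min(-sum(neg) / len(neg), 1.0), 2)
    if pos and len(pos) > ratio_threshold * max(len(neg), 1):
        # Predominantly positive barrage: slow down so highlights are not skipped.
        return round(max(1.0 - sum(pos) / len(pos), 0.5), 2)
    return 1.0  # mixed sentiment: keep normal speed

print(speed_value([{"emotion": -0.5}, {"emotion": -0.6}, {"emotion": -0.4}]))  # 1.5
```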
7. The method of claim 1, wherein, before determining the emotion information of the bullet screen content included in the plurality of pieces of bullet screen information, the method further comprises:
extracting a plurality of keywords from a plurality of pieces of bullet screen information of a plurality of videos acquired in advance;
sorting the keywords in descending order of occurrence frequency, and selecting the keywords ranked within a preset top position as matching keywords; and
receiving emotion information set for the matching keywords.
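The keyword-extraction step of claim 7 can be sketched with a frequency count. The word tokenisation is a naive assumption (the patent does not specify one), and `top_n` stands in for the "preset top position":

```python
import re
from collections import Counter

def top_keywords(barrage_texts, top_n=2):
    # Count word frequencies across historical barrage text and keep the
    # top_n most frequent words as the matching keywords of claim 7.
    counts = Counter(w for text in barrage_texts
                     for w in re.findall(r"\w+", text.lower()))
    return [word for word, _ in counts.most_common(top_n)]

history = ["wow great scene", "great fight", "great wow"]
print(top_keywords(history))  # 'great' appears 3 times, 'wow' twice
```

An operator would then attach an emotion value to each selected keyword, which is the "emotion information set for the matching keywords" that the claim receives.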
8. The method of claim 7, wherein determining the emotion information of the bullet screen content included in the plurality of pieces of bullet screen information comprises:
matching the words contained in the bullet screen content of each piece of bullet screen information against the matching keywords, and determining the emotion information of the bullet screen content of each piece of bullet screen information according to the matching result.
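The matching step of claim 8 amounts to a lexicon lookup. Averaging the matched emotion values is an illustrative aggregation choice; the claim only says the emotion information follows from the matching result:

```python
def barrage_emotion(content, keyword_emotions):
    # Look up each word of the bullet screen content in the
    # matching-keyword -> emotion table and average the hits;
    # 0.0 (neutral) when no keyword matches.
    hits = [keyword_emotions[w] for w in content.lower().split()
            if w in keyword_emotions]
    return sum(hits) / len(hits) if hits else 0.0

lexicon = {"great": 0.8, "boring": -0.7}
print(barrage_emotion("This is great", lexicon))  # 0.8
print(barrage_emotion("just ok", lexicon))        # 0.0 (no keyword matched)
```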
9. The method of claim 1, wherein the method further comprises:
in response to a video acquisition request sent by a client for the target video, sending the target video and the double-speed control information to the client, so that the client controls playback of the target video using the double-speed control information.
10. An apparatus for generating information based on a bullet screen, comprising:
an acquisition unit configured to acquire a plurality of pieces of bullet screen information for a target video, wherein each piece of bullet screen information comprises a bullet screen sending time and bullet screen content;
a determination unit configured to determine emotion information of the bullet screen content included in the plurality of pieces of bullet screen information; and
a generation unit configured to generate double-speed control information for the target video according to the bullet screen sending times of the plurality of pieces of bullet screen information and the emotion information of the bullet screen content.
11. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-9.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any one of claims 1-9.
13. An electronic device comprising a memory having stored therein executable code and a processor that, when executing the executable code, implements the method of any of claims 1-9.
CN202210849124.1A 2022-07-19 2022-07-19 Method and device for generating information based on bullet screen Pending CN115209210A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210849124.1A CN115209210A (en) 2022-07-19 2022-07-19 Method and device for generating information based on bullet screen

Publications (1)

Publication Number Publication Date
CN115209210A true CN115209210A (en) 2022-10-18

Family

ID=83582751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210849124.1A Pending CN115209210A (en) 2022-07-19 2022-07-19 Method and device for generating information based on bullet screen

Country Status (1)

Country Link
CN (1) CN115209210A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016197577A1 (en) * 2015-06-12 2016-12-15 百度在线网络技术(北京)有限公司 Method and apparatus for labelling comment information and computer device
CN107888948A (en) * 2017-11-07 2018-04-06 北京小米移动软件有限公司 Determine method and device, the electronic equipment of video file broadcasting speed
CN108495149A (en) * 2018-03-16 2018-09-04 优酷网络技术(北京)有限公司 Multimedia content playback method and device
CN108509033A (en) * 2018-03-13 2018-09-07 广东欧珀移动通信有限公司 Information processing method and related product
CN109309880A (en) * 2018-10-08 2019-02-05 腾讯科技(深圳)有限公司 Video broadcasting method, device, computer equipment and storage medium
CN110427897A (en) * 2019-08-07 2019-11-08 北京奇艺世纪科技有限公司 Analysis method, device and the server of video highlight degree
CN112969100A (en) * 2021-03-24 2021-06-15 西安闻泰信息技术有限公司 Video playing control method, device, equipment and medium
CN113033584A (en) * 2019-12-09 2021-06-25 Oppo广东移动通信有限公司 Data processing method and related equipment
CN113596520A (en) * 2021-02-08 2021-11-02 腾讯科技(深圳)有限公司 Video playing control method and device and electronic equipment
CN114550157A (en) * 2022-02-21 2022-05-27 上海哔哩哔哩科技有限公司 Bullet screen gathering identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination