CN109660871B - Bullet screen role information determination method, device and equipment - Google Patents
Bullet screen role information determination method, device and equipment
- Publication number
- CN109660871B (application number CN201811540890.XA)
- Authority
- CN
- China
- Prior art keywords
- bullet screen
- information
- video
- role
- role information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/4667—Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Television Signal Processing For Recording (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the invention provide a bullet screen role information determination method, device and equipment. The method includes: acquiring each video frame of a video to be analyzed for which bullet screen role information is to be determined; for each video frame, detecting, through image recognition, an identification head portrait in the video frame and an identification name corresponding to the identification head portrait; and taking the identification head portraits and identification names respectively corresponding to the video frames as the initial bullet screen role information corresponding to the video to be analyzed. With this method, device and equipment, bullet screen role information does not need to be configured manually, so the human resources consumed in determining bullet screen role information are reduced, and the bullet screen role information can be determined efficiently.
Description
Technical Field
The invention relates to the technical field of internet, in particular to a bullet screen role information determining method, device and equipment.
Background
With the popularization of bullet screen (danmaku) culture, plain text-only bullet screens can no longer satisfy users, so character bullet screens that carry the head portrait icon of a character such as the protagonist have become increasingly popular. Users can publish bullet screens in the voice of a character in a drama, which lets them comment more aptly on specific scenes, for example with teasing remarks. The character head portrait and name carried by such a character bullet screen may be referred to as bullet screen role information.
Currently, the bullet screen role information included in a character bullet screen is configured manually by operation staff. For example, an operator watches a video, selects some of the characters in the video as bullet screen characters, and configures the head portrait icons and names of those characters as the bullet screen role information.
However, in the process of implementing the invention, the inventor found that the prior art has at least the following problem:
a large number of online videos all require bullet screen role information to be configured manually, which occupies excessive human resources.
Disclosure of Invention
The embodiment of the invention aims to provide a bullet screen role information determination method, a bullet screen role information determination device and bullet screen role information determination equipment, so that the consumption of manpower resources in the bullet screen role information determination process is reduced. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for determining bullet screen role information, including:
acquiring each video frame in a video to be analyzed of the bullet screen role information to be determined;
detecting an identification head portrait in each video frame and an identification name corresponding to the identification head portrait through image recognition aiming at each video frame; and determining the initial bullet screen role information corresponding to the video to be analyzed according to the identification head portrait and the identification name respectively corresponding to each video frame.
Optionally, the method further includes:
acquiring a plurality of pieces of bullet screen information corresponding to the video to be analyzed;
determining the role name in the video to be analyzed and description information for describing the role name according to the part of speech and the word frequency of the words included in the plurality of pieces of bullet screen information;
and determining the personalized bullet screen role information according to the role name and the description information.
Optionally, after the obtaining of the plurality of pieces of bullet screen information corresponding to the video to be analyzed, the method further includes:
determining the bullet screen heat degree of the bullet screen information according to the attribute information of each bullet screen information and the bullet screen quantity of the bullet screen information;
the determining, according to parts of speech and word frequency of words included in the pieces of bullet screen information, a role name in the video to be analyzed and description information describing the role name includes:
and when the bullet screen heat exceeds a preset heat threshold, determining the role name in the video to be analyzed and description information for describing the role name according to the part of speech and the word frequency of words included in the multiple pieces of bullet screen information.
Optionally, the identification head portrait in each video frame and the identification name corresponding to the identification head portrait are detected through image recognition; and after determining the initial bullet screen role information corresponding to the video to be analyzed according to the identification head portrait and the identification name respectively corresponding to each video frame, the method further comprises the following steps:
distributing the initial bullet screen role information to a Content Distribution Network (CDN) so that a user terminal can obtain the initial bullet screen role information from the CDN;
after the determining the personalized bullet screen character information according to the character name and the description information, the method further includes:
and distributing the personalized bullet screen role information to the CDN so that the user terminal can obtain the personalized bullet screen role information from the CDN.
Optionally, after determining the initial bullet screen role information corresponding to the video to be analyzed according to the identification avatar and the identification name respectively corresponding to each video frame, the method further includes:
and persistently storing the initial bullet screen role information to a database.
In a second aspect, an embodiment of the present invention provides a bullet screen role information determining apparatus, including:
the first acquisition module is used for acquiring each video frame in the video to be analyzed of the bullet screen role information to be determined;
the detection module is used for detecting the identification head portrait in each video frame and the identification name corresponding to the identification head portrait through image recognition aiming at each video frame; and determining the initial bullet screen role information corresponding to the video to be analyzed according to the identification head portrait and the identification name respectively corresponding to each video frame.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a plurality of pieces of bullet screen information corresponding to the video to be analyzed;
the first determining module is used for determining the role name in the video to be analyzed and description information for describing the role name according to the part of speech and the word frequency of the words included in the plurality of pieces of bullet screen information; and determining the personalized bullet screen role information according to the role name and the description information.
Optionally, the apparatus further comprises:
the second determining module is used for determining the bullet screen heat degree of the multiple pieces of bullet screen information according to the attribute information of each piece of bullet screen information and the bullet screen quantity of the multiple pieces of bullet screen information after the multiple pieces of bullet screen information corresponding to the video to be analyzed are obtained;
the first determining module is specifically configured to determine, when the barrage popularity exceeds a preset popularity threshold, the role name in the video to be analyzed and the description information describing the role name according to the part of speech and the word frequency of the words included in the pieces of barrage information.
Optionally, the apparatus further comprises:
the distribution module is used for detecting the identification head portrait in each video frame and the identification name corresponding to the identification head portrait through image recognition aiming at each video frame; after initial bullet screen role information corresponding to the video to be analyzed is determined according to the identification head portrait and the identification name respectively corresponding to each video frame, the initial bullet screen role information is distributed to a Content Distribution Network (CDN), so that a user terminal can obtain the initial bullet screen role information from the CDN; after the personalized bullet screen role information is determined according to the role name and the description information, the personalized bullet screen role information is distributed to the CDN, so that the user terminal can obtain the personalized bullet screen role information from the CDN.
Optionally, the apparatus further comprises:
and the storage module is used for persistently storing the initial bullet screen role information to a database after determining the initial bullet screen role information corresponding to the video to be analyzed according to the identification head portrait and the identification name respectively corresponding to each video frame.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method steps of the first aspect when executing the program stored in the memory.
In yet another aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method steps of the first aspect.
In yet another aspect of the present invention, the present invention also provides a computer program product containing instructions which, when executed on a computer, cause the computer to perform the method steps of the first aspect.
With the bullet screen role information determination method, device and equipment provided by the embodiments of the invention, each video frame of a video to be analyzed for which bullet screen role information is to be determined can be acquired; for each video frame, an identification head portrait in the video frame and an identification name corresponding to the identification head portrait are detected through image recognition; and the identification head portraits and identification names respectively corresponding to the video frames are taken as the initial bullet screen role information corresponding to the video to be analyzed. Bullet screen role information therefore does not need to be configured manually, the human resources consumed in determining bullet screen role information are reduced, and the bullet screen role information can be determined efficiently. Of course, it is not necessary for any product or method of practicing the invention to achieve all of the above advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of a bullet screen role information determining method according to an embodiment of the present invention;
fig. 2 is another flowchart of a bullet screen role information determining method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an embodiment provided by the present invention;
fig. 4 is a schematic structural diagram of a bullet screen role information determining apparatus according to an embodiment of the present invention;
fig. 5 is another schematic structural diagram of a bullet screen role information determination apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiment of the present invention will be described below with reference to the drawings in the embodiment of the present invention.
With the popularization of bullet screen (danmaku) culture, plain text-only bullet screens can no longer satisfy users, so character bullet screens that carry the head portrait icon of a character such as the protagonist have become increasingly popular. Users can publish bullet screens in the voice of a character in a drama, which lets them comment more aptly on specific scenes, for example with teasing remarks. The character head portrait and name carried by such a character bullet screen may be referred to as bullet screen role information.
Currently, the bullet screen role information included in a character bullet screen is configured manually by operation staff. For example, an operator watches a video, selects some of the characters in the video as bullet screen characters, and configures the head portrait icons and names of those characters as the bullet screen role information.
A large number of online videos all require bullet screen role information to be configured manually, which occupies excessive human resources.
In addition, when many videos go online at the same time, manual configuration of bullet screen role information cannot cover them in time; that is, corresponding bullet screen role information cannot be determined quickly for every video, which affects the efficiency of determining bullet screen role information. Moreover, the bullet screen role information configured manually by operation staff usually uses a single fixed head portrait and name for a character in the video, which cannot richly express the character's state in different scenario scenes and dampens users' enthusiasm.
To address the problem that, with a large number of new videos going online every day, operation staff would have to configure bullet screen role information manually and excessive human resources would be occupied, while still ensuring that every video has bullet screen role information configured in time for users to choose from, an embodiment of the invention provides a bullet screen role information determination method. The method includes: acquiring each video frame of a video to be analyzed for which bullet screen role information is to be determined; for each video frame, detecting, through image recognition, an identification head portrait in the video frame and an identification name corresponding to the identification head portrait; and taking the identification head portraits and identification names respectively corresponding to the video frames as the initial bullet screen role information corresponding to the video to be analyzed. Bullet screen role information therefore does not need to be configured manually, the human resources consumed in determining bullet screen role information are reduced, and the bullet screen role information can be determined efficiently.
Meanwhile, to avoid all users being limited to the same character bullet screen head portraits and names, an embodiment of the invention can also determine personalized bullet screen role information for a video from the bullet screen information generated for that video. This makes character bullet screens more interesting, improves user experience, and encourages users to participate in generating and publishing bullet screens. Member-exclusive options and the like can also be provided to raise members' awareness of their benefits and improve the membership experience.
The following describes in detail the bullet screen role information determination method provided by the embodiment of the present invention.
The bullet screen role information determination method provided by the embodiments of the invention can be applied to a server, for example the back-end server of a video player or of a video website that supports a bullet screen function and a character bullet screen function.
An embodiment of the present invention provides a method for determining bullet screen role information, as shown in fig. 1, the method may include:
s101, obtaining each video frame in the video to be analyzed of the bullet screen role information to be determined.
The video to be analyzed may be any video for which bullet screen role information is to be determined, for example a video on which bullet screens can be published.
The server may obtain each video frame in the video to be analyzed from a storage location where each video frame in the video to be analyzed is stored.
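As an illustration of step S101, the sketch below extracts sampled frames from a locally stored copy of the video to be analyzed; the use of OpenCV, the file path and the sampling interval are assumptions for illustration only and are not specified by the embodiment.

```python
# Minimal frame-extraction sketch for step S101 (OpenCV and the sampling
# interval are illustrative assumptions, not part of the embodiment).
import cv2

def extract_frames(video_path: str, every_n_frames: int = 25):
    """Yield (frame_index, frame) pairs sampled from the video to be analyzed."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:  # sampling keeps the recognition workload manageable
            yield index, frame
        index += 1
    capture.release()
```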
S102, aiming at each video frame, detecting an identification head portrait in the video frame and an identification name corresponding to the identification head portrait through image recognition; and taking the identification head portrait and the identification name corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed.
The identification head portrait may include a person head portrait, an animal head portrait and the like appearing in the video frame. For example, if the video to be analyzed is a television drama, the identification head portrait may be the head portrait of a leading character in an episode of the drama.
The identification name may include a person name, an animal name, and the like, and specifically may be a character name and the like.
Specifically, the identification avatar and the identification name corresponding to the identification avatar in the video frame are detected through image recognition, for example, the person avatar in the video frame can be detected through face recognition.
In an optional embodiment, for each video frame, detecting, by image recognition, an identification avatar in the video frame and an identification name corresponding to the identification avatar may include:
and inputting the video frame to a pre-trained image recognition model aiming at each video frame to obtain an identification head portrait in the video frame and an identification name corresponding to the identification head portrait.
The image recognition model is obtained by training based on a plurality of training samples and is a pre-trained image recognition model for detecting the identification head portrait and the identification name in the video frame.
Specifically, a plurality of training samples are obtained, and the identification head portrait and identification name in each sample are marked (for example, manually) to obtain a plurality of labeled training samples.
And inputting a plurality of labeled training samples into a preset network model, and training the preset network model to obtain a trained image recognition model.
Specifically, the preset network model may include parameters to be determined. The plurality of labeled training samples are input into the preset network model, and the parameters are adjusted so that the output of the model approaches the pre-labeled identification head portraits and identification names. When the cost function between the model output and the labeled identification head portraits and identification names converges, the parameters are fixed, and the preset network model containing the determined parameters is the trained image recognition model.
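A hedged sketch of this training procedure follows, using PyTorch only as an example framework; the embodiment does not name a framework, and the network architecture, loss function and data loader are placeholders for whatever preset network model and labeled samples are actually used.

```python
# Illustrative training loop: adjust the parameters to be determined until the
# cost function between the model output and the labeled avatars/names converges.
import torch

def train_image_recognition_model(model, data_loader, epochs: int = 10, lr: float = 1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()  # stand-in for the embodiment's "cost function"
    for _ in range(epochs):
        for frames, labels in data_loader:   # labels: pre-marked identification names/avatars
            optimizer.zero_grad()
            loss = loss_fn(model(frames), labels)
            loss.backward()                  # adjust the parameters to be determined
            optimizer.step()
    return model  # preset network model with determined parameters = trained model
```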
And inputting the video frame into an image recognition model aiming at each video frame to obtain an identification head portrait in the video frame and an identification name corresponding to the identification head portrait.
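Building on the previous sketch, the following shows one way the per-frame recognition results could be collected into initial bullet screen role information; `recognize` stands for a hypothetical wrapper around the trained image recognition model and is not defined by the embodiment.

```python
# Collect per-frame recognition results as the initial bullet screen role information.
def build_initial_role_info(frames, recognize):
    role_info = []
    for index, frame in frames:
        # `recognize` is assumed to return (avatar, name) pairs for one frame
        for avatar, name in recognize(frame):
            role_info.append({
                "frame_index": index,
                "avatar": avatar,   # identification head portrait
                "name": name,       # identification name corresponding to the head portrait
            })
    return role_info  # initial bullet screen role information for the video to be analyzed
```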
In the embodiment of the invention, through image recognition, the identification head portrait in the video frame and the identification name corresponding to the identification head portrait are detected; and taking the identification head portrait and the identification name corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed. Therefore, the bullet screen role information does not need to be configured manually, the consumption of manpower resources in the bullet screen role information determination process can be reduced, the efficiency of bullet screen role information determination can be improved, and the bullet screen role information can be determined efficiently.
Compared with the prior art, in which the bullet screen role information configured manually by operation staff usually uses a single fixed head portrait and name for a character in the video and therefore cannot richly express the character's state in different scenario scenes, the bullet screen role information determination method provided by the embodiment of the invention can detect all identification head portraits and identification names included in the video to be analyzed and take the identification head portraits and identification names corresponding to the video frames as the initial bullet screen role information corresponding to the video. This makes the bullet screen role information richer, allows users to express their views fully in different scenario scenes, improves users' enthusiasm for publishing bullet screens, and, by providing initial bullet screen role information, avoids the situation in which the video to be analyzed has no bullet screen role information at all.
On the basis of the above embodiment, in an alternative embodiment of the present invention, in step 102: detecting an identification head portrait and an identification name corresponding to the identification head portrait in each video frame through image identification aiming at each video frame; and after the identification head portrait and the identification name respectively corresponding to each video frame are used as the initial bullet screen role information corresponding to the video to be analyzed, the method further comprises the following steps:
and persistently storing the initial bullet screen role information to a database.
Specifically, the initial bullet screen role information may be saved to a database or the like. Therefore, persistence of the initial bullet screen role information is achieved. Persistent storage may also be understood as permanent storage.
Because the initial bullet screen role information corresponding to the video to be analyzed is determined from the content of the video itself, namely the identification head portrait in each video frame and the identification name corresponding to it, and is persisted to the database, the video to be analyzed is guaranteed to have corresponding bullet screen role information.
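A minimal persistence sketch follows; SQLite and the table layout are illustrative assumptions, since the embodiment only states that the information is stored persistently in a database (avatars are assumed here to be stored as URLs or file paths so that the record is JSON-serializable).

```python
# Persist the initial bullet screen role information to a database (SQLite assumed).
import json
import sqlite3

def persist_initial_role_info(db_path: str, video_id: str, role_info: list) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS initial_role_info (video_id TEXT, info TEXT)"
    )
    conn.execute(
        "INSERT INTO initial_role_info (video_id, info) VALUES (?, ?)",
        (video_id, json.dumps(role_info, ensure_ascii=False)),
    )
    conn.commit()
    conn.close()
```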
On the basis of the above embodiment, an optional embodiment of the present invention may further include a process of auditing the initial bullet screen role information.
Because the identification head portraits and identification names that serve as the initial bullet screen role information are generated automatically, the embodiment of the invention can audit the initial bullet screen role information to prevent it from containing non-compliant content.
Specifically, the method may be performed by an automatic review method, or may be performed by a manual review method.
Specifically, automatic auditing may be performed by the electronic device according to rules. A rule may specify the content that the initial bullet screen role information is allowed to contain; in that case, it is verified whether the initial bullet screen role information conforms to the rule, and if it does, the information is determined to pass the audit. A rule may also specify content that the initial bullet screen role information is not allowed to contain; in that case, it is verified whether the initial bullet screen role information matches the rule, and if it does not, the information is determined to pass the audit. The rules may be determined according to video requirements, operational requirements and the like.
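The rules below (a banned-word list and a length limit) are purely illustrative assumptions for the automatic audit; the embodiment only states that the rules follow video and operational requirements.

```python
# Hedged sketch of the rule-based automatic audit of initial bullet screen role information.
BANNED_WORDS = {"placeholder_banned_word"}   # hypothetical disallowed-content rule
MAX_NAME_LENGTH = 20                         # hypothetical allowed-content rule

def audit_role_info(role_info: list) -> list:
    approved = []
    for entry in role_info:
        name = entry["name"]
        if len(name) > MAX_NAME_LENGTH:
            continue  # fails the allowed-content rule
        if any(word in name for word in BANNED_WORDS):
            continue  # matches a disallowed-content rule
        approved.append(entry)
    return approved  # only entries that pass the audit are kept
```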
In the manual review mode, a reviewer checks the initial bullet screen role information and determines whether it conforms to the rules or whether it contains content that violates the rules. Initial bullet screen role information that violates the rules can be marked so that it can subsequently be deleted or otherwise handled.
In this way, the compliance of the initial bullet screen role information can be ensured.
In addition, since the initial bullet screen character information is determined based on the video itself to be analyzed, an illegal name or avatar or the like does not appear in general.
On the basis of the above embodiment, in an alternative embodiment of the present invention, in step 102: detecting an identification head portrait and an identification name corresponding to the identification head portrait in each video frame through image identification aiming at each video frame; and after the identification head portrait and the identification name respectively corresponding to each video frame are used as the initial bullet screen role information corresponding to the video to be analyzed, the method further comprises the following steps:
and distributing the initial barrage role information to a Content Delivery Network (CDN) so that the user terminal acquires the initial barrage role information from the CDN.
The front end can obtain the initial bullet screen role information directly from the CDN, which decouples it from the back-end interface.
In this way, users obtain the required content from a nearby node, network congestion is relieved, and the response speed of user access is improved.
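As a sketch of the distribution step, the snippet below uploads the role information as a JSON object to an HTTP origin that the CDN then serves to user terminals; the URL, the object naming and the absence of authentication are assumptions, since the embodiment does not specify a CDN interface.

```python
# Publish initial (or personalized) bullet screen role information to a CDN origin.
import json
import requests

def publish_to_cdn(video_id: str, role_info: list,
                   origin_url: str = "https://cdn.example.com/role-info") -> None:
    payload = json.dumps(role_info, ensure_ascii=False).encode("utf-8")
    # The user terminal later fetches this object from a nearby CDN edge node.
    response = requests.put(
        f"{origin_url}/{video_id}.json",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    response.raise_for_status()
```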
On the basis of the above embodiment, in an alternative embodiment of the present invention, in step 102: detecting an identification head portrait and an identification name corresponding to the identification head portrait in each video frame through image identification aiming at each video frame; and after the identification head portrait and the identification name respectively corresponding to each video frame are used as the initial bullet screen role information corresponding to the video to be analyzed, the method further comprises the following steps:
and displaying the initial bullet screen role information.
The server determines initial bullet screen role information to provide to the user for use in publishing the role bullet screen.
The initial bullet screen character information can be displayed by displaying bullet screen options on a playing page, such as displaying a character head portrait option and the like.
In order to determine personalized bullet screen role information for each video, on the basis of the foregoing embodiment, in an optional embodiment of the present invention, as shown in fig. 2, the method may further include:
s201, acquiring a plurality of bullet screen information corresponding to the video to be analyzed.
As the bullet screens published for the video to be analyzed become richer, more accurate role information can be mined from the bullet screen information and fed back to users, which encourages users to actively generate bullet screen content.
Specifically, the bullet screen information already generated by users for the video to be analyzed may be acquired.
S202, determining the role name and the description information for describing the role name in the video to be analyzed according to the part of speech and the word frequency of the words included in the plurality of pieces of bullet screen information.
S203, determining personalized bullet screen role information according to the role name and the description information.
Specifically, a plurality of pieces of bullet screen information for the video to be analyzed may be collected, each piece may be segmented into words, and the part of speech of each word may then be analyzed; for example, a noun may be regarded as a role name, and an adjective adjacent to that noun may be regarded as description information describing the role name.
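A hedged sketch of steps S202 and S203 follows, using the jieba part-of-speech tagger as an example segmenter for Chinese bullet screen text; the frequency threshold and the adjacent-adjective rule are one possible reading of the description above, not a prescribed implementation.

```python
# Extract candidate role names (nouns) and descriptions (adjacent adjectives)
# from bullet screen texts by part of speech and word frequency.
from collections import Counter
import jieba.posseg as pseg

def extract_personalized_roles(danmaku_texts, min_count: int = 5):
    noun_counts = Counter()
    descriptions = {}
    for text in danmaku_texts:
        tokens = list(pseg.cut(text))            # (word, part-of-speech) pairs
        for i, token in enumerate(tokens):
            if token.flag.startswith("n"):       # nouns are candidate role names
                noun_counts[token.word] += 1
                # an adjective next to the noun is taken as its description
                if i > 0 and tokens[i - 1].flag.startswith("a"):
                    descriptions.setdefault(token.word, tokens[i - 1].word)
    return [
        {"name": name, "description": descriptions.get(name, "")}
        for name, count in noun_counts.items()
        if count >= min_count                    # word-frequency filter
    ]
```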
The server determines personalized bullet screen role information, which can be understood as secondary production of bullet screen information for the video to be analyzed.
And after the personalized bullet screen role information is determined, the personalized bullet screen role information can be audited. Specifically, the auditing process is similar to the auditing process of the initial bullet screen role information.
On the basis of the above embodiment, in an alternative embodiment of the present invention, in step S201: after acquiring the multiple pieces of bullet screen information corresponding to the video to be analyzed, the method may further include:
and determining the bullet screen heat degree of the plurality of bullet screen information according to the attribute information of each bullet screen information and the bullet screen quantity of the plurality of bullet screen information.
The attribute information may include: the length of the bullet screen text, the amount of bullet screen praise, the amount of bullet screen report, the bullet screen release time, and/or the bullet screen type, etc.
Because the number of bullet screens differs between videos, and in most videos it is not even enough to generate high-quality role names, videos need to be treated differently according to their bullet screen heat.
Specifically, the bullet screen heat may take into account the number of bullet screens for the video to be analyzed as well as several dimensions of each piece of bullet screen information, such as the length of its text, its like count, its report count, its publish time and its type. The parameters of the different dimensions can be adjusted according to the actual situation, and the different parameters can be normalized using a Gaussian formula.
In one implementation, the bullet screen heat corresponding to the video to be analyzed can be calculated with the following formulas:
bullet screen heat = number of bullet screens × single bullet screen heat;
single bullet screen heat = f1(bullet screen text length) + f2(bullet screen like count) + f3(bullet screen report count) + f4(bullet screen publish time) + f5(bullet screen type).
Here f1, f2, f3, f4 and f5 are functions whose parameters can be set according to the characteristics of the corresponding attribute, the operation policy and the like.
After the bullet screen heat is determined, it is judged whether the bullet screen heat corresponding to the video to be analyzed exceeds a preset heat threshold, and steps S202 and S203 are executed when it does. If the bullet screen heat does not exceed the preset heat threshold, the bullet screen heat is recorded for use in subsequent calculations.
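The sketch below illustrates one way to compute the bullet screen heat and apply the preset heat threshold; the scoring functions f1 to f5, the Gaussian parameters, the combination of the bullet screen count with the single-item heat, and the threshold value are all assumptions, since the embodiment leaves them to operational configuration.

```python
# Hedged sketch of bullet screen heat calculation and threshold check.
import math

def gaussian_norm(x: float, mu: float, sigma: float) -> float:
    """Normalize a raw attribute value into a comparable score (illustrative)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def single_heat(d: dict) -> float:
    # f1..f5 from the description; parameters and weights are placeholders
    return (gaussian_norm(len(d["text"]), 15, 10)          # f1: text length
            + 0.5 * gaussian_norm(d["likes"], 50, 30)       # f2: like count
            - 0.5 * min(d["reports"], 10) / 10              # f3: report count (assumed penalty)
            + 0.3 * gaussian_norm(d["age_hours"], 12, 24)   # f4: publish time
            + d.get("type_weight", 0.2))                    # f5: bullet screen type

def video_heat(danmaku: list) -> float:
    if not danmaku:
        return 0.0
    average = sum(single_heat(d) for d in danmaku) / len(danmaku)
    return len(danmaku) * average   # count x single bullet screen heat (assumed reading)

PRESET_HEAT_THRESHOLD = 100.0       # hypothetical preset heat threshold

def needs_secondary_production(danmaku: list) -> bool:
    return video_heat(danmaku) > PRESET_HEAT_THRESHOLD
```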
After the personalized bullet screen role information is determined according to the role name and the description information, the personalized bullet screen role information can be distributed to the CDN, so that the user terminal can obtain the personalized bullet screen role information from the CDN.
The front end can obtain the personalized bullet screen role information directly from the CDN, which decouples it from the back-end interface.
In this way, users obtain the required content from a nearby node, network congestion is relieved, and the response speed of user access is improved.
On the basis of the above embodiment, in an alternative embodiment of the present invention, in step S203: after the personalized bullet screen role information is determined according to the role name and the description information, the method further comprises the following steps:
and displaying the personalized bullet screen role information.
The server determines the personalized bullet screen role information and provides it to users for publishing character bullet screens, which enriches the options available to users.
Specifically, the server's determination of personalized bullet screen role information can be understood as secondary production of the bullet screen information for the video to be analyzed, and a secondary-production flag bit can be used to identify whether secondary production has been carried out for the video to be analyzed.
Similar to the displaying of the initial bullet screen character information, the displaying of the personalized bullet screen character information can be performed by displaying bullet screen options on the playing page, such as displaying a character avatar option and the like.
In an optional embodiment, the initial bullet screen role information and the personalized bullet screen role information corresponding to the video to be analyzed can be displayed at the same time.
That is, the initial bullet screen role information determined from the video frames of the video to be analyzed, namely from its video content, and the personalized bullet screen role information determined from the bullet screen information for the video can be displayed in parallel. Specifically, the bullet screen options corresponding to the initial bullet screen role information and the personalized bullet screen options corresponding to the personalized bullet screen role information can be displayed at the same time.
Displaying the traditional character bullet screen options based on the video information in parallel gives users more choices and allows more personalized selection.
The embodiment of the present invention further provides a specific embodiment, as shown in fig. 3.
S301, identifying the character head portrait and the name.
Specifically, each video frame in the video to be analyzed can be acquired, and for each video frame, an identification head portrait in the video frame and an identification name corresponding to the identification head portrait are detected through image recognition; and taking the identification head portrait and the identification name corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed.
The initial bullet screen role information can be provided for the video to be analyzed so as to determine an initial role bullet screen option and avoid the situation that the video to be analyzed does not have bullet screen role information.
And S302, persisting to a database.
This step guarantees that character bullet screens can appear in a new video, with an effect similar to manual configuration by operation staff. A new video can be understood as a video that has just gone online and for which no bullet screen information has been generated yet.
Persisting to the database may be understood as saving the initial bullet screen role information to the database.
And S303, checking.
As in the embodiments described above, the review may be automatic (by machine) or manual; for the details, reference may be made to the auditing process described above.
And S304, distributing the initial bullet screen role information to the CDN.
In this way, the front end can obtain the bullet screen role information directly from the CDN, which decouples it from the back-end interface.
S305, calculating the heat of the bullet screen.
And determining the bullet screen heat degree of the plurality of bullet screen information according to the attribute information of each bullet screen information and the bullet screen quantity of the plurality of bullet screen information.
Specifically, the calculation process has been described in detail in the above embodiments, and the calculation process of the above embodiments may be referred to.
The bullet screen heat can be calculated periodically, so that when it exceeds the preset heat threshold, secondary production can be performed on the bullet screen information corresponding to the video to be analyzed to determine personalized bullet screen role information for that video. This information is then offered to users as options when they publish bullet screens, which makes the feature more interesting, improves user experience, and attracts more users to participate in publishing bullet screens.
S306, judging whether the heat degree of the bullet screen exceeds a preset heat degree threshold value.
If the bullet screen heat exceeds the preset heat threshold, step S308 is executed; if it does not, step S307 is executed: the detection is recorded and the result is persisted to the database.
The preset heat threshold value can be determined according to actual conditions and can be divided into different grades.
S307, the detection records and the results are persisted to a database.
S308, judging whether the star name is included.
S309, determining words associated with the star names.
In the process of determining the character name and the description information, the weight of the words associated with the star name can be increased, so that the exposure of the star name is increased, and a star effect is formed.
S310, determining role names and description information according to parts of speech and the like.
Words describing the roles are calculated according to the word frequency, part of speech and the like of the words in the bullet screen information corresponding to the video to be analyzed.
If a star name is included, the star name can be used directly, and a personalized recommendation result for that star can be constructed from the bullet screen information according to sentiment analysis; for example, "the emperor" may be nicknamed "big pig's trotter". Through the star effect, the usage of such character bullet screens can accumulate quickly.
If the words are ordinary words, an adjective is combined with the original role name, while a noun can be used directly as the role name; the result is determined to be the personalized bullet screen role information of the video to be analyzed.
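A minimal sketch of the naming rule in this paragraph follows; the star-name set and the part-of-speech tags are hypothetical inputs, and the concatenation rule is one possible reading of the description.

```python
# Build a personalized role name from a candidate word according to the rule above.
def build_personalized_name(word: str, pos: str, star_names: set,
                            base_name: str = "") -> str:
    if word in star_names:
        return word                  # a star name is used directly
    if pos.startswith("a") and base_name:
        return word + base_name      # adjective is combined with the original role name
    if pos.startswith("n"):
        return word                  # a noun is used directly as the role name
    return base_name                 # otherwise keep the original role name
```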
In this way, users are encouraged to use character bullet screens for secondary creation around the corresponding part of the plot, which prolongs the time users stay on the video and further improves the open rate. Memes generated from a star's distinctive expressions can quickly be imitated by users at scale, attracting bullet screen users to use character bullet screens for secondary creation around the plot, forming a distinctive video bullet screen culture and attracting even more users to the bullet screen function.
The process from step S308 to step S310 is to determine the personalized bullet screen role information for the video to be analyzed according to the bullet screen information of the video to be analyzed, that is, to perform the secondary production process.
And S311, generating a bullet screen option.
And displaying the bullet screen role information. Before the display, the personalized bullet screen role information can be audited, and specifically, the auditing process is similar to the process of auditing the initial bullet screen role information.
After the personalized bullet screen role information passes the audit, the front end can display it according to the secondary-production flag bit, in parallel with the initial bullet screen role information based on the video information, so that users have more personalized choices. Specifically, the UGC-version character bullet screen option corresponding to the personalized bullet screen role information and the character bullet screen option corresponding to the initial bullet screen role information can both be displayed.
The embodiment of the invention can greatly relieve the pressure on operation staff, quickly cover all videos with bullet screen role information specific to each drama, provide users with different head portraits and names for the same role in the same drama, attract more users to participate in bullet screen culture, and further improve the open rate.
An embodiment of the present invention provides a bullet screen role information determining apparatus, as shown in fig. 4, including:
a first obtaining module 401, configured to obtain each video frame in a video to be analyzed of bullet screen role information to be determined;
a detecting module 402, configured to detect, through image recognition, an identification avatar and an identification name corresponding to the identification avatar in each video frame; and taking the identification head portrait and the identification name corresponding to each video frame as the initial bullet screen role information corresponding to the video to be analyzed.
In the embodiment of the invention, through image recognition, the identification head portrait in the video frame and the identification name corresponding to the identification head portrait are detected; and taking the identification head portrait and the identification name corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed. Therefore, the bullet screen role information does not need to be configured manually, and the consumption of manpower resources in the bullet screen role information determination process can be reduced. And the efficiency of determining the bullet screen role information can be improved, and the bullet screen role information can be determined efficiently.
Optionally, as shown in fig. 5, the apparatus may further include:
a second obtaining module 501, configured to obtain multiple pieces of bullet screen information corresponding to a video to be analyzed;
a first determining module 502, configured to determine, according to parts of speech and word frequencies of words included in the pieces of barrage information, a role name and description information describing the role name in the video to be analyzed; and determining personalized bullet screen role information according to the role name and the description information.
Optionally, the apparatus further comprises:
the second determining module is used for determining the bullet screen heat degree of the plurality of pieces of bullet screen information according to the attribute information of each piece of bullet screen information and the bullet screen number of the plurality of pieces of bullet screen information after the plurality of pieces of bullet screen information corresponding to the video to be analyzed are obtained;
the first determining module 502 is specifically configured to determine, when the bullet screen heat exceeds the preset heat threshold, a character name in the video to be analyzed and description information describing the character name according to parts of speech and word frequency of words included in the multiple pieces of bullet screen information.
Optionally, the apparatus further comprises:
the distribution module is used for detecting the identification head portrait and the identification name corresponding to the identification head portrait in each video frame through image recognition; after the identification head portrait and the identification name corresponding to each video frame are used as initial bullet screen role information corresponding to the video to be analyzed, the initial bullet screen role information is distributed to a Content Distribution Network (CDN) so that a user terminal can obtain the initial bullet screen role information from the CDN; after the personalized bullet screen role information is determined according to the role name and the description information, the personalized bullet screen role information is distributed to the CDN, so that the user terminal can obtain the personalized bullet screen role information from the CDN.
Optionally, the apparatus further comprises:
and the storage module is used for persistently storing the initial bullet screen role information to a database after the identification head portrait and the identification name which respectively correspond to each video frame are used as the initial bullet screen role information corresponding to the video to be analyzed.
It should be noted that the apparatus for determining bullet screen role information provided in the embodiment of the present invention is an apparatus applying the method for determining bullet screen role information, and all embodiments of the method for determining bullet screen role information are applicable to the apparatus and can achieve the same or similar beneficial effects.
An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete mutual communication through the communication bus 604.
A memory 603 for storing a computer program;
the processor 601 is configured to implement the method steps of the bullet screen character information determining method when executing the program stored in the memory 603.
In the embodiment of the invention, through image recognition, the identification head portrait in the video frame and the identification name corresponding to the identification head portrait are detected; and taking the identification head portrait and the identification name corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed. Therefore, the bullet screen role information does not need to be configured manually, and the consumption of manpower resources in the bullet screen role information determination process can be reduced. And the efficiency of determining the bullet screen role information can be improved, and the bullet screen role information can be determined efficiently.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present invention, a computer-readable storage medium is further provided, which has instructions stored therein, which when run on a computer, cause the computer to perform the method steps of the bullet screen character information determination method described above.
In the embodiment of the invention, through image recognition, the identification head portrait in the video frame and the identification name corresponding to the identification head portrait are detected; and taking the identification head portrait and the identification name corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed. Therefore, the bullet screen role information does not need to be configured manually, and the consumption of manpower resources in the bullet screen role information determination process can be reduced. And the efficiency of determining the bullet screen role information can be improved, and the bullet screen role information can be determined efficiently.
In another embodiment of the present invention, a computer program product containing instructions is also provided, which when run on a computer causes the computer to perform the method steps of the bullet screen character information determination method described above.
In the embodiment of the invention, through image recognition, the identification head portrait in the video frame and the identification name corresponding to the identification head portrait are detected; and taking the identification head portrait and the identification name corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed. Therefore, the bullet screen role information does not need to be configured manually, and the consumption of manpower resources in the bullet screen role information determination process can be reduced. And the efficiency of determining the bullet screen role information can be improved, and the bullet screen role information can be determined efficiently.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
All the embodiments in this specification are described in a related manner; for identical or similar parts, reference may be made between the embodiments, and each embodiment focuses on its differences from the others. In particular, the apparatus, device, computer-readable storage medium, and computer program product embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the corresponding parts of the description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (9)
1. A bullet screen role information determination method is characterized by comprising the following steps:
acquiring each video frame of a video to be analyzed for which bullet screen role information is to be determined;
for each video frame, detecting an identification head portrait in the video frame and an identification name corresponding to the identification head portrait through image recognition; and taking the identification head portrait and the identification name respectively corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed;
the method further comprises the following steps:
acquiring a plurality of pieces of user-generated bullet screen information corresponding to the video to be analyzed;
determining the role name in the video to be analyzed and description information for describing the role name according to the part of speech and the word frequency of the words included in the plurality of pieces of bullet screen information;
and determining personalized bullet screen role information according to the role name and the description information.
2. The method according to claim 1, wherein after the acquiring of the plurality of pieces of bullet screen information corresponding to the video to be analyzed, the method further comprises:
determining a bullet screen popularity of the plurality of pieces of bullet screen information according to attribute information of each piece of bullet screen information and the number of pieces of bullet screen information;
the determining, according to the part of speech and the word frequency of the words included in the plurality of pieces of bullet screen information, the role name in the video to be analyzed and the description information describing the role name comprises:
when the bullet screen popularity exceeds a preset popularity threshold, determining the role name in the video to be analyzed and the description information describing the role name according to the part of speech and the word frequency of the words included in the plurality of pieces of bullet screen information.
3. The method according to claim 1, wherein after the identification head portrait and the identification name respectively corresponding to each video frame are taken as the initial bullet screen role information corresponding to the video to be analyzed, the method further comprises:
distributing the initial bullet screen role information to a Content Distribution Network (CDN) so that a user terminal can obtain the initial bullet screen role information from the CDN;
after the personalized bullet screen role information is determined according to the role name and the description information, the method further comprises the following steps:
and distributing the personalized bullet screen role information to the CDN so that the user terminal can obtain the personalized bullet screen role information from the CDN.
4. The method according to any one of claims 1 to 3, wherein after determining the initial bullet screen role information corresponding to the video to be analyzed according to the identification head portrait and the identification name respectively corresponding to each video frame, the method further comprises:
and persistently storing the initial bullet screen role information to a database.
5. A bullet screen role information determination apparatus, comprising:
the first acquisition module is used for acquiring each video frame of a video to be analyzed for which bullet screen role information is to be determined;
the detection module is used for detecting, for each video frame and through image recognition, the identification head portrait in the video frame and the identification name corresponding to the identification head portrait, and for taking the identification head portrait and the identification name respectively corresponding to each video frame as initial bullet screen role information corresponding to the video to be analyzed;
the device further comprises:
the second acquisition module is used for acquiring a plurality of pieces of user-generated bullet screen information corresponding to the video to be analyzed;
the first determining module is used for determining the role name in the video to be analyzed and description information for describing the role name according to the part of speech and the word frequency of the words included in the plurality of pieces of bullet screen information; and determining personalized bullet screen role information according to the role name and the description information.
6. The apparatus of claim 5, further comprising:
the second determining module is used for determining, after the plurality of pieces of bullet screen information corresponding to the video to be analyzed are acquired, a bullet screen popularity of the plurality of pieces of bullet screen information according to attribute information of each piece of bullet screen information and the number of pieces of bullet screen information;
the first determining module is specifically configured to determine, when the bullet screen popularity exceeds a preset popularity threshold, the role name in the video to be analyzed and the description information describing the role name according to the part of speech and the word frequency of the words included in the plurality of pieces of bullet screen information.
7. The apparatus of claim 5, further comprising:
the distribution module is used for distributing the initial bullet screen role information to a Content Distribution Network (CDN) after the identification head portrait and the identification name respectively corresponding to each video frame are taken as the initial bullet screen role information corresponding to the video to be analyzed, so that a user terminal can obtain the initial bullet screen role information from the CDN; and for distributing the personalized bullet screen role information to the CDN after the personalized bullet screen role information is determined according to the role name and the description information, so that the user terminal can obtain the personalized bullet screen role information from the CDN.
8. The apparatus of any one of claims 5 to 7, further comprising:
and the storage module is used for persistently storing the initial bullet screen role information to a database after determining the initial bullet screen role information corresponding to the video to be analyzed according to the identification head portrait and the identification name respectively corresponding to each video frame.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method steps of any one of claims 1 to 4 when executing the program stored in the memory.
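Claims 1 and 2 describe the text-side path: once the bullet screen popularity of the collected comments exceeds a preset threshold, the words in those comments are tagged by part of speech and counted by frequency, high-frequency person-name words become candidate role names, and the descriptive words that co-occur with them become the description information. A hedged sketch of that idea follows; the jieba segmentation library, its part-of-speech tags ('nr' for person names, adjective tags starting with 'a'), and every threshold are illustrative assumptions, and claim 2 leaves the exact popularity formula open.

```python
# Hedged sketch only: segmentation library, POS tags and thresholds are
# assumptions chosen for illustration, not the patented implementation.
from collections import Counter
import jieba.posseg as pseg

def personalized_role_info(danmaku_texts, popularity,
                           popularity_threshold=100,
                           min_name_freq=5, top_descriptions=3):
    """Return {role_name: [description, ...]}, or {} if popularity is too low."""
    if popularity <= popularity_threshold:
        return {}
    name_counts = Counter()
    descriptions = {}                  # role name -> Counter of co-occurring adjectives
    for text in danmaku_texts:
        pairs = list(pseg.cut(text))
        names = [p.word for p in pairs if p.flag == "nr"]               # person-name words
        adjectives = [p.word for p in pairs if p.flag.startswith("a")]  # descriptive words
        for name in names:
            name_counts[name] += 1
            descriptions.setdefault(name, Counter()).update(adjectives)
    return {
        name: [w for w, _ in descriptions[name].most_common(top_descriptions)]
        for name, count in name_counts.items() if count >= min_name_freq
    }

# Hypothetical call: popularity could be, say, the comment count weighted by
# posting time, which is one possible reading of the "attribute information"
# mentioned in claim 2.
# personalized_role_info(["张三太搞笑了", "张三好可爱"], popularity=250)
```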
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811540890.XA CN109660871B (en) | 2018-12-17 | 2018-12-17 | Bullet screen role information determination method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109660871A CN109660871A (en) | 2019-04-19 |
CN109660871B (en) | 2021-06-25
Family
ID=66113701
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811540890.XA Active CN109660871B (en) | 2018-12-17 | 2018-12-17 | Bullet screen role information determination method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109660871B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110460899B (en) * | 2019-06-28 | 2021-12-07 | 咪咕视讯科技有限公司 | Bullet screen content display method, terminal equipment and computer readable storage medium |
CN113395201B (en) * | 2021-06-10 | 2024-02-23 | 广州繁星互娱信息科技有限公司 | Head portrait display method, device, terminal and server in chat session |
CN116091136B (en) * | 2023-01-28 | 2023-06-23 | 深圳市人马互动科技有限公司 | Telephone marketing method and device based on speaker |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105916057A (en) * | 2016-04-18 | 2016-08-31 | 乐视控股(北京)有限公司 | Video barrage display method and device |
CN106101747A (en) * | 2016-06-03 | 2016-11-09 | 腾讯科技(深圳)有限公司 | A kind of barrage content processing method and application server, user terminal |
CN106303745A (en) * | 2016-08-16 | 2017-01-04 | 腾讯科技(深圳)有限公司 | The treating method and apparatus of barrage |
CN107948708A (en) * | 2017-11-14 | 2018-04-20 | 优酷网络技术(北京)有限公司 | Barrage methods of exhibiting and device |
CN108235105A (en) * | 2018-01-22 | 2018-06-29 | 上海硬创投资管理有限公司 | A kind of barrage rendering method, recording medium, electronic equipment, information processing system |
CN108401175A (en) * | 2017-12-20 | 2018-08-14 | 广州虎牙信息科技有限公司 | A kind of processing method, device, storage medium and the electronic equipment of barrage message |
CN108495168A (en) * | 2018-03-06 | 2018-09-04 | 优酷网络技术(北京)有限公司 | The display methods and device of barrage information |
CN108540845A (en) * | 2018-03-30 | 2018-09-14 | 优酷网络技术(北京)有限公司 | Barrage method for information display and device |
CN108683956A (en) * | 2018-06-19 | 2018-10-19 | 广州虎牙信息科技有限公司 | Direct broadcasting room barrage special efficacy configuration method, device and storage medium, server |
CN108717464A (en) * | 2018-05-31 | 2018-10-30 | 中国联合网络通信集团有限公司 | photo processing method, device and terminal device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105357586B (en) * | 2015-09-28 | 2018-12-14 | 北京奇艺世纪科技有限公司 | Video barrage filter method and device |
CN106303730B (en) * | 2016-07-28 | 2018-05-11 | 百度在线网络技术(北京)有限公司 | A kind of method and apparatus for being used to provide combination barrage information |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10810499B2 (en) | Method and apparatus for recommending social media information | |
CN109660871B (en) | Bullet screen role information determination method, device and equipment | |
CN109218390B (en) | User screening method and device | |
US20170171336A1 (en) | Method and electronic device for information recommendation | |
CN108874832B (en) | Target comment determination method and device | |
US8463648B1 (en) | Method and apparatus for automated topic extraction used for the creation and promotion of new categories in a consultation system | |
US20220398314A1 (en) | Artificial intelligence-based explicit content blocking device | |
CN108810642B (en) | Bullet screen display method and device and electronic equipment | |
CN110941738B (en) | Recommendation method and device, electronic equipment and computer-readable storage medium | |
CN112364202A (en) | Video recommendation method and device and electronic equipment | |
CN111522724B (en) | Method and device for determining abnormal account number, server and storage medium | |
US9501580B2 (en) | Method and apparatus for automated selection of interesting content for presentation to first time visitors of a website | |
US20150262238A1 (en) | Techniques for Topic Extraction Using Targeted Message Characteristics | |
US10104429B2 (en) | Methods and systems of dynamic content analysis | |
CN104486649A (en) | Video content rating method and device | |
US11226991B2 (en) | Interest tag determining method, computer device, and storage medium | |
CN111178983B (en) | User gender prediction method, device, equipment and storage medium | |
CN109948096B (en) | Webpage activity configuration system | |
CN113688310A (en) | Content recommendation method, device, equipment and storage medium | |
CN110198490B (en) | Live video theme classification method and device and electronic equipment | |
CN111695357A (en) | Text labeling method and related product | |
CN104580100A (en) | Method, device and server for identifying malicious message | |
KR102081553B1 (en) | Big Data-Based Monitoring System of Promotional Content for Cultural Media | |
CN113221845A (en) | Advertisement auditing method, device, equipment and storage medium | |
JP6122138B2 (en) | Method and device for optimizing information diffusion between communities linked by interaction similarity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||