CN109189889B - Bullet screen recognition model establishing method, device, server and medium - Google Patents

Info

Publication number
CN109189889B
Authority
CN
China
Prior art keywords
bullet screen
sample
bullet
neural network
convolutional neural
Prior art date
Legal status
Active
Application number
CN201811052795.5A
Other languages
Chinese (zh)
Other versions
CN109189889A (en)
Inventor
王非池
Current Assignee
Wuhan Douyu Network Technology Co Ltd
Original Assignee
Wuhan Douyu Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Douyu Network Technology Co Ltd
Priority to CN201811052795.5A
Publication of CN109189889A
Application granted
Publication of CN109189889B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4314 Generation of visual interfaces involving specific graphical features for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N 21/488 Data services, e.g. news ticker
    • H04N 21/4884 Data services, e.g. news ticker for displaying subtitles

Abstract

The invention discloses a bullet screen recognition model establishing method, apparatus, server and medium. The method comprises the following steps: training a pre-constructed convolutional neural network by using bullet screen training sample pairs; and taking the trained convolutional neural network as the bullet screen recognition model. Each bullet screen training sample pair includes a bullet screen sample word vector and a bullet screen type value corresponding to that word vector, where the bullet screen type value is either a normal bullet screen output value or an abnormal bullet screen output value. The bullet screen recognition model trained with the above technical scheme can effectively filter abnormal bullet screens, improves the recognition accuracy and efficiency for abnormal bullet screens, and at the same time enables autonomous incremental learning of the bullet screen recognition model.

Description

Bullet screen recognition model establishing method, device, server and medium
Technical Field
The embodiments of the invention relate to the field of computer technology, and in particular to a bullet screen recognition model establishing method, apparatus, server and medium.
Background
The bullet screen (danmaku) is an important component of network live broadcast and an objective reflection of the popularity of a live broadcast room. Users can interact with one another through bullet screens, and the anchor can learn the viewers' thoughts directly from them, which improves the viewing experience.
With the development of live broadcast platforms, the audiences of the largest anchors have exploded, so that the number of bullet screens posted within a short time during a live broadcast can reach the order of thousands or even tens of thousands. If this massive volume of bullet screens is displayed on the live broadcast interface within a short time, on the one hand the anchor's live content is blocked because the bullet screens are too dense; on the other hand, the client software is heavily loaded and a large amount of traffic and memory is consumed.
Because such a short-time flood of bullet screens contains a large number of low-quality ones, the number of bullet screens displayed on the live broadcast interface can be effectively controlled by identifying low-quality bullet screens. In the prior art, low-quality bullet screens (i.e., abnormal bullet screens) are usually filtered by manual screening or by regular-expression keyword matching. However, these methods recognize low-quality bullet screens poorly and inefficiently. Meanwhile, an established bullet screen recognition model cannot incrementally adjust its training samples during training, so autonomous incremental learning cannot be realized in model training.
Disclosure of Invention
The embodiments of the invention provide a bullet screen recognition model establishing method, apparatus, server and medium, aiming to filter low-quality bullet screens.
In a first aspect, an embodiment of the present invention provides a bullet screen recognition model establishing method, including:
training a pre-constructed convolutional neural network by using a bullet screen training sample pair;
wherein, the bullet screen training sample pair includes: the bullet screen sample word vector and a bullet screen type value corresponding to the bullet screen sample word vector; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value;
and taking the trained convolutional neural network as the bullet screen recognition model.
In a second aspect, an embodiment of the present invention further provides a bullet screen recognition model establishing apparatus, including:
the training module is used for training the pre-constructed convolutional neural network by using the bullet screen training sample pair;
wherein, the bullet screen training sample pair includes: at least two bullet screen sample word vectors and bullet screen type values corresponding to the bullet screen sample word vectors; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value;
and the model generation module is used for taking the trained convolutional neural network as the bullet screen recognition model.
In a third aspect, an embodiment of the present invention further provides a server, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs are executed by the one or more processors, so that the one or more processors implement a bullet screen recognition model building method as provided in the embodiments of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a bullet screen recognition model building method as provided in the first aspect.
In the embodiments of the invention, a pre-constructed convolutional neural network is trained with bullet screen training sample pairs, and the trained convolutional neural network is taken as the bullet screen recognition model. Each bullet screen training sample pair includes a bullet screen sample word vector and a corresponding bullet screen type value, where the bullet screen type value is either a normal bullet screen output value or an abnormal bullet screen output value. The bullet screen recognition model trained with the above technical scheme can effectively filter abnormal bullet screens, improves the recognition accuracy and efficiency for abnormal bullet screens, and at the same time enables autonomous incremental learning of the bullet screen recognition model.
Drawings
Fig. 1 is a flowchart of a bullet screen recognition model building method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a bullet screen recognition model establishing method in the second embodiment of the present invention;
fig. 3 is a flowchart of a bullet screen recognition model establishing method in the third embodiment of the present invention;
fig. 4 is a flowchart of a bullet screen recognition model establishing method in the fourth embodiment of the present invention;
FIG. 5A is a schematic diagram of a convolutional neural network model in accordance with a fifth embodiment of the present invention;
fig. 5B is a flowchart of a bullet screen recognition model establishing method in the fifth embodiment of the present invention;
fig. 6 is a structural diagram of a bullet screen recognition model establishing apparatus in the sixth embodiment of the present invention;
fig. 7 is a schematic hardware structure diagram of a server according to a seventh embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a bullet screen recognition model establishing method in the first embodiment of the present invention. The embodiment is applicable to filtering bullet screens during live broadcast. The method can be executed by a bullet screen recognition model establishing apparatus, which is implemented in software and/or hardware and configured in a server.
The bullet screen recognition model building method shown in fig. 1 includes:
and S110, training the pre-constructed convolutional neural network by using the bullet screen training sample pair.
Wherein, the bullet screen training sample pair includes: the bullet screen sample word vector and a bullet screen type value corresponding to the bullet screen sample word vector; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value.
The bullet screen training sample pairs, each consisting of a bullet screen sample word vector and the bullet screen type value corresponding to it, can be obtained directly from a training sample library; they can also be obtained by taking original bullet screen samples from an original bullet screen library and performing encoding preprocessing on them.
It should be noted that, when the bullet screen sample word vector is obtained by encoding preprocessing of the original bullet screen sample, the preprocessing may be performed before the convolutional neural network is trained, or while the convolutional neural network is being trained.
Illustratively, the convolutional neural network comprises an input layer, a hidden layer, a fully-connected layer, and an output layer connected end to end; the hidden layer comprises at least two computing network branches; each computing network branch includes a convolutional layer, an active layer connected to the convolutional layer, a pooling layer connected to the active layer, and a folding layer connected to the pooling layer. At the input layer, the bullet screen sample word vector is obtained by performing encoding preprocessing on an original bullet screen sample.
Illustratively, the bullet screen sample word vector may be formed by encoding the acquired original bullet screen sample into a word vector.
For example, the manner of obtaining the original bullet screen sample may be: acquiring an original bullet screen sample from a bullet screen library, and identifying a normal bullet screen and an abnormal bullet screen in an abnormal keyword matching mode; or obtaining original bullet screen samples from a bullet screen library, and manually identifying normal bullet screens and abnormal bullet screens.
For example, the bullet screen type value may be determined as follows: the bullet screen type value of a normal bullet screen is set to the normal bullet screen output value, and the bullet screen type value of an abnormal bullet screen is set to the abnormal bullet screen output value, the two output values being different. For example, the normal bullet screen output value may be set to 0 and the abnormal bullet screen output value to 1.
Each normal bullet screen and its corresponding normal bullet screen output value form a normal bullet screen training sample pair, and each abnormal bullet screen and its corresponding abnormal bullet screen output value form an abnormal bullet screen training sample pair. The bullet screen training sample pairs thus include both normal and abnormal bullet screen training sample pairs.
Specifically, the normal bullet screen training sample pairs and the abnormal bullet screen training sample pairs are used as input samples to train the convolutional neural network.
S120, taking the trained convolutional neural network as the bullet screen recognition model.
Whether the convolutional neural network has been sufficiently trained can be judged by performing model evaluation on the trained network; when the model evaluation result meets the model generation condition, the trained convolutional neural network is taken as the bullet screen recognition model.
Illustratively, the model evaluation process may be: inputting evaluation bullet screen sample word vectors into the bullet screen recognition model as input samples to obtain a prediction result for each evaluation word vector; and deriving a model evaluation result from the prediction results and the actual bullet screen type values of the evaluation word vectors, so as to evaluate the bullet screen recognition model.
Illustratively, the model evaluation result may be at least one of the following: the probability that a normal bullet screen is predicted as normal, the probability that an abnormal bullet screen is predicted as abnormal, the probability that a normal bullet screen is predicted as abnormal, and the probability that an abnormal bullet screen is predicted as normal.
Correspondingly, the model generation condition may be whether the model evaluation result satisfies a corresponding set threshold, where different model evaluation results correspond to different set thresholds, and each set threshold is set as an empirical value by a technician or set by the technician according to an application requirement.
In the embodiment of the invention, a pre-constructed convolutional neural network is trained with bullet screen training sample pairs, and the trained convolutional neural network is taken as the bullet screen recognition model. Each bullet screen training sample pair includes a bullet screen sample word vector and a corresponding bullet screen type value, where the bullet screen type value is either a normal bullet screen output value or an abnormal bullet screen output value. The bullet screen recognition model trained with the above technical scheme can effectively filter abnormal bullet screens, improves the recognition accuracy and efficiency for abnormal bullet screens, and at the same time enables autonomous incremental learning of the bullet screen recognition model.
Example two
Fig. 2 is a flowchart of a bullet screen recognition model establishing method in the second embodiment of the present invention. The embodiment of the invention performs additional optimization on the basis of the technical scheme of each embodiment.
Further, before the operation of training the pre-constructed convolutional neural network with the bullet screen training sample pairs, the method additionally obtains at least two original bullet screen samples; performs one-hot encoding on each original bullet screen sample according to a preset standard word list to generate initial bullet screen sample word vectors; and performs dimension reduction on the initial bullet screen sample word vectors to generate the bullet screen sample word vectors, so as to perfect the generation of the bullet screen sample word vectors and improve the precision and stability of the model.
Further, before the operation of performing one-hot encoding on each original bullet screen sample according to the preset standard word list to generate the initial bullet screen sample word vectors, the method removes abnormal characters from each original bullet screen sample and updates the original bullet screen samples; and/or removes bullet screen samples with duplicate content from the original bullet screen samples and updates the original bullet screen samples. The original bullet screen samples are thereby preprocessed, which further improves the precision and stability of the trained model.
The bullet screen recognition model building method shown in fig. 2 includes:
and S210, obtaining an original bullet screen sample.
The original bullet screen samples are unprocessed original bullet screens sent by users. Original bullet screens include normal bullet screens and abnormal bullet screens. An abnormal bullet screen includes at least one of the following: a troll bullet screen, a vulgar bullet screen, an advertising bullet screen, and a politics-related bullet screen. A normal bullet screen is a non-abnormal bullet screen, typically one related to the content the user is watching.
Preferably, the numbers of normal and abnormal bullet screens are balanced, and the numbers of the various types of abnormal bullet screens are balanced as well.
Specifically, the original bullet screen samples may be obtained locally from the current server that establishes the bullet screen recognition model, from another server in communication connection with the current server, or from the cloud.
S220, removing abnormal characters from each original bullet screen sample and updating the original bullet screen samples; and/or removing bullet screen samples with duplicate content from the original bullet screen samples and updating the original bullet screen samples.
Abnormal characters include characters without actual meaning, such as emoticons, pictures and garbled codes, as well as unrecognizable characters. Because removing them does not affect the semantics of the original bullet screen sample, abnormal characters can be removed.
Illustratively, each character in an original bullet screen sample can be compared with the preset standard word list; if the preset standard word list does not contain a character, that character is determined to be abnormal, and the original bullet screen sample is updated by removing it. The preset standard word list contains a certain number of general-purpose characters; this number can be large, for example 20000.
Alternatively, each character of an original bullet screen sample can be compared with a preset abnormal word list; if the preset abnormal word list contains a character, that character is determined to be abnormal, and the original bullet screen sample is updated by removing it. The preset abnormal word list contains a certain number of abnormal characters, which can likewise be large, for example 20000.
When different users send the same or similar bullet screens, or the same user repeatedly sends the same bullet screen, the obtained original bullet screen samples (or the updated ones) are identical. Training the model with identical samples wastes training time and affects the stability of the model, so the original bullet screen samples need to be deduplicated.
Specifically, when several obtained or updated original bullet screen samples are identical, the repeated samples are removed, so that the original bullet screen samples are updated.
It should be noted that the abnormal character removing operation and the deduplication operation may be performed simultaneously or sequentially, and the order and the execution times of the abnormal character removing operation and the deduplication operation are not limited at all.
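A minimal Python sketch of the cleaning and deduplication steps described above (the function and variable names are illustrative assumptions, not part of the embodiment):

    def clean_barrage_samples(samples, standard_vocab):
        """Remove abnormal characters and deduplicate original bullet screen samples.

        samples: list of raw bullet screen strings; standard_vocab: set of
        general-purpose characters (e.g. 20000 entries from the standard word list).
        """
        cleaned, seen = [], set()
        for text in samples:
            # S220: drop emoticons, garbled codes and other characters
            # absent from the preset standard word list.
            text = "".join(ch for ch in text if ch in standard_vocab)
            # Deduplicate: identical samples waste training time and
            # affect model stability.
            if text and text not in seen:
                seen.add(text)
                cleaned.append(text)
        return cleaned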
S230, performing one-hot encoding on each original bullet screen sample according to the preset standard word list to generate initial bullet screen sample word vectors.
The dimension of the initial bullet screen sample word vector corresponding to one original bullet screen sample equals the number of general-purpose characters contained in the preset standard word list.
S240, performing dimension reduction processing on the initial bullet screen sample word vectors to generate the bullet screen sample word vectors.
Because the initial bullet screen sample word vector has a high dimension, training the model on it directly would involve excessive computation and long training times, and the trained model would be prone to overfitting, which affects model precision. Therefore, the initial bullet screen sample word vector is reduced in dimension to generate the bullet screen sample word vector, whose dimension is far smaller than that of the initial vector. Illustratively, the initial bullet screen sample word vector has 20000 dimensions and the bullet screen sample word vector has 200 dimensions.
Illustratively, the initial bullet screen sample word vectors may be subjected to dimension reduction processing by using a skip-gram model, or by using a feature extraction method, so as to obtain the bullet screen sample word vectors.
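A sketch of this encoding pipeline under the dimensions stated above (20000-dimensional one-hot vectors reduced to 200 dimensions); the use of gensim's Word2Vec in skip-gram mode is one possible realization and, like the sample strings, an assumption of this sketch:

    import numpy as np
    from gensim.models import Word2Vec

    def one_hot(sample, vocab_index):
        """Encode one bullet screen character by character against the standard word list.

        vocab_index maps each of the 20000 general-purpose characters to a position;
        returns a (len(sample), 20000) matrix of initial word vectors.
        """
        vectors = np.zeros((len(sample), len(vocab_index)), dtype=np.float32)
        for row, ch in enumerate(sample):
            vectors[row, vocab_index[ch]] = 1.0
        return vectors

    # Skip-gram dimension reduction: learn 200-dimensional dense vectors so that
    # each character's 20000-dimensional one-hot code maps to 200 dimensions.
    sentences = [list(s) for s in ["第一条弹幕样本", "第二条弹幕样本"]]  # cleaned samples (illustrative)
    sg_model = Word2Vec(sentences, vector_size=200, sg=1, min_count=1)
    reduced = np.stack([sg_model.wv[ch] for ch in sentences[0]])  # bullet screen sample word vectors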
S250, training the pre-constructed convolutional neural network by using the bullet screen training sample pairs.
Wherein, the bullet screen training sample pair includes: at least two bullet screen sample word vectors and bullet screen type values corresponding to the bullet screen sample word vectors; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value.
S260, taking the trained convolutional neural network as the bullet screen recognition model.
In this embodiment, original bullet screen samples are additionally obtained before model training and one-hot encoded to generate the bullet screen sample word vectors, which perfects the generation of training samples and improves model precision and stability; an abnormal character removal step and/or a duplicate sample removal step is added before generating the bullet screen sample word vectors to update the original bullet screen samples, so that the original bullet screen samples are preprocessed and the precision and stability of the trained model are further improved.
Example three
Fig. 3 is a flowchart of a bullet screen recognition model establishing method in the third embodiment of the present invention. The embodiment of the invention is optimized on the basis of the technical scheme of each embodiment.
Further, the operation of training the pre-constructed neural network with the bullet screen training sample pairs is refined into: selecting a set number of bullet screen training sample pairs; sequentially taking one bullet screen training sample pair and inputting it into the pre-constructed convolutional neural network to obtain the network's output result for the bullet screen sample word vector, and adjusting the weighting parameters in the pre-constructed convolutional neural network based on the output result; and returning to the operation of taking a bullet screen sample word vector and inputting it into the convolutional neural network until a preset training end condition is reached, so as to perfect the training process of the convolutional neural network.
As shown in fig. 3, a bullet screen recognition model building method includes:
S310, selecting a set number of bullet screen training sample pairs.
Wherein, the bullet screen training sample pair includes: the bullet screen sample word vector and a bullet screen type value corresponding to the bullet screen sample word vector; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value.
The number of bullet screen training sample pairs is set to a fixed value by a technician according to experience, or is set to an adjustable value according to the requirement of a training process.
S320, sequentially obtaining one bullet screen training sample pair and inputting it into the pre-constructed convolutional neural network to obtain the output result of the convolutional neural network for the bullet screen sample word vector.
From the selected bullet screen training sample pairs, one pair is taken as the input sample, in a set order or at random, and input into the pre-constructed convolutional neural network for training. After the forward computation of the convolutional neural network, the output result corresponding to the input sample is obtained.
Wherein the convolutional neural network comprises: the input layer, the hidden layer, the full connection layer and the output layer are connected end to end; the hidden layer comprises at least two computing network branches; the computing network branch includes a convolutional layer, an active layer connected to the convolutional layer, a pooling layer connected to the active layer, and a folding layer connected to the pooling layer.
Each layer in the convolutional neural network contains weighting parameters; when a bullet screen training sample is input into the convolutional neural network, the computation combines the weighting parameters of each layer to produce the output result. When the convolutional neural network is trained for the first time, preset weighting parameters are used; the weighting parameters of each layer are adjusted during each training pass, so that subsequent training computes with the adjusted weighting parameters.
S330, adjusting weighting parameters in the pre-constructed convolutional neural network based on the output result.
Specifically, a cross entropy function between the output result and the bullet screen type value is calculated, and a set weight updating algorithm is used to update the weighting parameters of each layer in the pre-constructed convolutional neural network, layer by layer in the reverse direction, along the direction that minimizes the cross entropy function.
S340, judging whether the bullet screen training samples for training reach a set number; if yes, go to step S350, if no, go back to step S320.
S350, taking the trained convolutional neural network as the bullet screen recognition model.
In this embodiment, the bullet screen training samples are input into the convolutional neural network one by one to obtain output results, the weighting parameters of the convolutional neural network are adjusted according to the output results and thereby continuously optimized, and the bullet screen recognition model is finally generated.
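The per-pair training loop of S310-S340 can be sketched as follows; tf.keras is one possible framework, and model, training_pairs and the set number are assumptions of this sketch:

    import tensorflow as tf

    loss_fn = tf.keras.losses.BinaryCrossentropy()   # cross entropy between output and type value
    optimizer = tf.keras.optimizers.Adam()           # one possible weight updating algorithm
    SET_NUMBER = 1000                                # set number of training pairs (assumed)

    trained = 0
    for word_vec, type_value in training_pairs:      # S310: the selected sample pairs
        with tf.GradientTape() as tape:
            # S320: forward pass of one bullet screen sample word vector.
            output = model(word_vec[None, ...], training=True)
            loss = loss_fn(tf.constant([[type_value]], dtype=tf.float32), output)
        # S330: update the weighting parameters layer by layer in the reverse
        # direction, along the direction that minimizes the cross entropy.
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        trained += 1
        if trained >= SET_NUMBER:                    # S340: preset training end condition
            break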
Example four
Fig. 4 is a flowchart of a bullet screen recognition model establishing method in the fourth embodiment of the present invention. The embodiment of the invention performs additional optimization on the basis of the technical scheme of each embodiment.
Further, after the operation of taking the trained convolutional neural network as the bullet screen recognition model, the method additionally obtains a bullet screen prediction sample as an input sample and inputs it into the bullet screen recognition model; marks the bullet screen prediction sample according to the prediction result of the bullet screen recognition model; and displays, according to the marking result, the original bullet screen corresponding to the bullet screen prediction sample. In this way, abnormal bullet screens are effectively filtered by the bullet screen recognition model and only normal bullet screens are displayed, achieving effective control over bullet screen quantity and quality.
As shown in fig. 4, a bullet screen recognition model establishing method includes:
and S410, training the pre-constructed convolutional neural network by using the bullet screen training sample pair.
Wherein, the bullet screen training sample pair includes: the bullet screen sample word vector and a bullet screen type value corresponding to the bullet screen sample word vector; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value.
S420, taking the trained convolutional neural network as the bullet screen recognition model.
S430, obtaining a bullet screen prediction sample as an input sample and inputting it into the bullet screen recognition model.
The bullet screen prediction sample comprises a bullet screen prediction sample word vector.
For example, the bullet screen prediction samples may be obtained directly from the local storage of the current server, from another server in communication connection with the current server, or from the cloud.
The bullet screen prediction sample can be an original prediction bullet screen, and can also be a bullet screen prediction sample word vector generated after the original prediction bullet screen is preprocessed.
The bullet screen prediction sample word vector is generated by preprocessing the original prediction bullet screen as follows: performing one-hot encoding on the original prediction bullet screen according to the preset standard word list to generate an initial bullet screen prediction sample word vector; and performing dimension reduction on the initial bullet screen prediction sample word vector to generate the bullet screen prediction sample word vector.
Correspondingly, the bullet screen prediction sample word vector is used as an input sample and is input into the bullet screen recognition model.
S440, marking the bullet screen prediction sample according to the prediction result of the bullet screen recognition model.
Specifically, when the output result of the bullet screen identification model is an abnormal bullet screen output value, marking a corresponding bullet screen prediction sample as an abnormal bullet screen; and when the output result of the bullet screen recognition model is a normal bullet screen output value, marking the corresponding bullet screen prediction sample as a normal bullet screen.
S450, displaying the original bullet screen corresponding to the bullet screen prediction sample according to the marking result.
Specifically, the original bullet screen input by the user corresponding to the abnormal bullet screen is removed, and the original bullet screen input by the user corresponding to the normal bullet screen is displayed on the live broadcast interface.
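A hedged sketch of this mark-and-display flow; model, preprocess and the function name are illustrative, while the abnormal/normal decision follows the output values described above:

    def filter_barrages(raw_barrages, model, preprocess, threshold=0.5):
        """Return only the bullet screens marked as normal (S430-S450).

        preprocess performs the one-hot encoding and dimension reduction of the
        original prediction bullet screen; model is the trained recognition model.
        """
        shown = []
        for raw in raw_barrages:
            word_vec = preprocess(raw)                 # bullet screen prediction sample word vector
            score = float(model(word_vec[None, ...]))  # S430: input into the recognition model
            is_abnormal = score > threshold            # S440: mark the prediction sample
            if not is_abnormal:
                shown.append(raw)                      # S450: display normal bullet screens only
        return shown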
On the basis of the technical solutions of the above embodiments, the method further includes:
and correspondingly storing the bullet screen prediction sample and the prediction result.
According to the embodiment of the invention, the bullet screen prediction samples and the prediction results are correspondingly stored, so that subsequent comparison and verification are facilitated, and data is provided for model optimization or establishment of other models.
Example five
The embodiment of the invention provides a preferable embodiment on the basis of the technical solutions of the above embodiments. Fig. 5A shows a structure diagram of a convolutional neural network model used in an embodiment of the present invention.
The convolutional neural network comprises an input layer, a hidden layer, a full connection layer and an output layer which are connected end to end; the hidden layer comprises at least two computing network branches; the computing network branch includes a convolutional layer, an active layer connected to the convolutional layer, a pooling layer connected to the active layer, and a folding layer connected to the pooling layer.
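Under this structure, the network of fig. 5A can be sketched in tf.keras (one possible framework, an assumption of this sketch). The branch count (3), 64 convolution neurons, 3 x 200 convolution window and pooling step 50 follow the embodiment below; the sequence length and the sigmoid reading of the output activation are assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers

    SEQ_LEN, EMB_DIM = 100, 200     # SEQ_LEN (characters per bullet screen) is assumed

    inputs = tf.keras.Input(shape=(SEQ_LEN, EMB_DIM))         # input layer: 200-dim word vectors
    branches = []
    for _ in range(3):                                        # hidden layer: 3 computing network branches
        x = layers.Conv1D(filters=64, kernel_size=3)(inputs)  # convolution layer (3 x 200 window, 64 neurons)
        x = layers.ReLU()(x)                                  # active layer: max(0, x)
        x = layers.MaxPooling1D(pool_size=50)(x)              # pooling layer, step size 50
        x = layers.Flatten()(x)                               # folding layer: fold into a one-dim vector
        branches.append(x)

    merged = layers.Concatenate()(branches)                   # fully connected layer splices the branches
    outputs = layers.Dense(1, activation="sigmoid")(merged)   # second active layer + output layer, 0..1
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")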
As shown in fig. 5B, a bullet screen recognition model establishing method includes:
and S510, obtaining a bullet screen training sample.
The bullet screen training samples include normal bullet screens and abnormal bullet screens. The abnormal bullet screens comprise troll bullet screens, vulgar bullet screens, advertising bullet screens and politics-related bullet screens. The numbers of normal and abnormal bullet screens are balanced, and the numbers of each type of abnormal bullet screen are balanced.
S521, cleaning the bullet screen training samples: removing abnormal characters and deduplicating to obtain clean bullet screens, so as to update the bullet screen training samples.
S522, vectorizing the bullet screen training samples to generate bullet screen sample word vectors.
The bullet screen training samples are encoded character by character against the standard word library to generate one-hot vectors, which are taken as the bullet screen sample word vectors. The standard word library is expanded by technicians as needed; for example, high-frequency words from specific live broadcast scenes can be added. Specifically, the standard word library contains 20000 high-frequency words, so the corresponding bullet screen sample word vector has 20000 dimensions.
S521-S522 correspond to the input layer operation in fig. 5A, and are used to implement bullet screen preprocessing.
S530, performing dimension reduction processing on the bullet screen sample word vectors according to the skip-gram model, and updating the bullet screen sample word vectors.
Specifically, the bullet screen sample word vector after dimension reduction is 200 dimensions.
Wherein S530 corresponds to the embedding layer operation in fig. 5A, for implementing vector dimension reduction.
Taking a computing network branch formed of 64 neurons as an example, the bullet screen sample word vector is processed in steps S541-S544. The number of computing network branches is set by the technician; illustratively, there are 3 in this embodiment.
S541, performing convolution processing according to the bullet screen sample word vector and a preset convolution kernel function to obtain a first hidden layer value.
Specifically, a first hidden layer value is obtained according to the formula

y = Σ_i Σ_j W_ij · x_ij

wherein x_k is the vector formed by the feature values at the positions covered by the current convolution window in the bullet screen sample word vector, and k is the size of the window; W_ij is the weight at the corresponding position of the preset convolution kernel, with i and j indexing the positions of the convolution kernel; and y is the first hidden layer value.
Illustratively, the convolutional layer of the embodiment of the present invention has 3 × 200 dimensions.
The convolution kernel can be regarded as a receptive field in machine vision: the more similar the content at the corresponding position is to the shape of the convolution kernel, the larger the output.
S541 corresponds to the operation of the convolution layer in fig. 5A, and is used to perform feature extraction on the bullet screen sample word vector.
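As a numeric illustration of this formula, a single convolution window can be computed with numpy (the example values are arbitrary):

    import numpy as np

    k = 3                                  # convolution window size in characters
    x = np.random.rand(k, 200)             # feature values under the current window
    W = np.random.rand(k, 200)             # weights W_ij of the preset convolution kernel

    y = float(np.sum(W * x))               # first hidden layer value: sum of W_ij * x_ij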
S542, processing the first hidden layer value with a linear rectification function to obtain a second hidden layer value.
Specifically, a second hidden layer value is obtained according to the formula Φ(x) = max(0, x), wherein x is the first hidden layer value and Φ(x) is the second hidden layer value.
S542 corresponds to the operation of the first active layer in fig. 5A, and is used to map the convolutional layer result.
S543, processing each second hidden layer value in the computing network branch with a maximum pooling function to obtain a third hidden layer value.
Specifically, a third hidden layer value is obtained according to the formula

y = max(x_1, x_2, …, x_k)

wherein x_i is a second hidden layer value, i = 1 … k, and k is the pooling step size. The pooling step size is a parameter to be optimized; it is preset to 50 in this embodiment.
S543 corresponds to the pooling level operation in fig. 5A, and is used for implementing hidden level output dimensionality reduction.
S544, merging the third hidden layer values corresponding to the neurons in the computing network branch into a third hidden layer vector.
S544 corresponds to the folding layer in fig. 5A and folds the multi-dimensional output into a one-dimensional vector.
S550, weighting the third hidden layer vectors of the computing network branches to obtain a fourth hidden layer vector.
S550 corresponds to the full connection layer in fig. 5A, and is used to splice hidden layer vectors output by each computing network branch.
S560, classifying the fourth hidden layer vector with the activation parameters to obtain a classification result.
Specifically, a classification value is obtained according to the formula

σ(x_i) = e^(x_i) / Σ_j e^(x_j)

wherein x_i is the maximum value in the fourth hidden layer vector, j ranges over the neurons of the computing network branches, and σ(x) is the classification value. The classification value is compared with a preset threshold: if it is greater than the preset threshold, the bullet screen is abnormal; if it is not greater than the preset threshold, the bullet screen is normal. Illustratively, the preset threshold may be 0.5.
Where S560 corresponds to the second active layer and the output layer in fig. 5A. The activation layer is used for mapping the bullet screen type to a numerical value between 0 and 1; the output layer is used for outputting the bullet screen type.
S570, calculating the cross entropy function between the classification result and the actual bullet screen type value, and optimizing the parameters to be optimized in each layer of the convolutional neural network with a back propagation algorithm.
Specifically, a residual value is obtained according to the formula

loss = −(1/n) Σ_i [y_i · log(y_i′) + (1 − y_i) · log(1 − y_i′)]

wherein n is the number of bullet screen training samples, loss is the residual value, y_i is the actual bullet screen type value, and y_i′ is the classification result predicted by the model.

The cross entropy of the model is then minimized by propagating the gradient according to the chain rule

∂loss/∂θ = (∂loss/∂z_i) · (∂z_i/∂θ)

wherein θ is a parameter to be optimized in each layer and z_i is the output of the intermediate layer.
Specifically, the loss function can be continuously propagated to the parameters to be optimized of the previous layers through a back propagation algorithm, that is, a chain rule.
S580, inputting the bullet screen prediction sample data into the optimized model to obtain the model output values, and evaluating the model according to the model output values and the real output values.
Specifically, the model is evaluated according to the formulas

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Recall = TP / (TP + FN)
F-measure = 2 · Precision · Recall / (Precision + Recall), where Precision = TP / (TP + FP)

wherein TP denotes that an abnormal bullet screen is judged as abnormal, TN that a normal bullet screen is judged as normal, FP that a normal bullet screen is judged as abnormal, and FN that an abnormal bullet screen is judged as normal; Accuracy is the model accuracy, Recall is the model recall rate, and F-measure is the model F score.
Wherein, the higher the F score, the better the model effect.
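The evaluation of S580 can be sketched with the standard confusion-matrix counts given above (the helper name and the label convention, 1 for abnormal and 0 for normal, are assumptions):

    def evaluate(y_true, y_pred):
        """y_true, y_pred: bullet screen type values per sample (1 = abnormal, 0 = normal)."""
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # abnormal judged abnormal
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # normal judged normal
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # normal judged abnormal
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # abnormal judged normal
        accuracy = (tp + tn) / max(tp + tn + fp + fn, 1)
        recall = tp / max(tp + fn, 1)
        precision = tp / max(tp + fp, 1)
        f_measure = 2 * precision * recall / max(precision + recall, 1e-9)
        return accuracy, recall, f_measure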
It should be noted that, during model training, the trained model may also undergo incremental training. Specifically, the incremental training process is: samples that the model fails to recognize are collected to form a new training set for iterative training. It should be understood that, because the number of incremental learning samples is small, the number of iterations should not be excessive, so as to avoid overfitting.
Example six
Fig. 6 is a structural diagram of a bullet screen recognition model establishing apparatus in the sixth embodiment of the present invention. The embodiment is applicable to filtering bullet screens during live broadcast; the apparatus is implemented in software and/or hardware and configured in a server. The bullet screen recognition model establishing apparatus shown in fig. 6 includes: a training module 610 and a model generation module 620.
The training module 610 is configured to train a pre-constructed convolutional neural network by using bullet screen training sample pairs;
wherein, the bullet screen training sample pair includes: at least two bullet screen sample word vectors and bullet screen type values corresponding to the bullet screen sample word vectors; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value;
and the model generating module 620 is configured to use the trained convolutional neural network as the bullet screen recognition model.
In the embodiment of the invention, the training module trains a pre-constructed convolutional neural network with bullet screen training sample pairs, and the model generation module takes the trained convolutional neural network as the bullet screen recognition model. Each bullet screen training sample pair includes a bullet screen sample word vector and a corresponding bullet screen type value, where the bullet screen type value is either a normal bullet screen output value or an abnormal bullet screen output value. The bullet screen recognition model trained with the above technical scheme can effectively filter abnormal bullet screens, improves the recognition accuracy and efficiency for abnormal bullet screens, and at the same time enables autonomous incremental learning of the bullet screen recognition model.
Further, the convolutional neural network includes: the input layer, the hidden layer, the full connection layer and the output layer are connected end to end;
the hidden layer comprises at least two computing network branches;
the computing network branch includes a convolutional layer, an active layer connected to the convolutional layer, a pooling layer connected to the active layer, and a folding layer connected to the pooling layer.
Further, the apparatus further comprises:
the acquisition module is used for acquiring an original bullet screen sample before the bullet screen training sample is used for training the pre-constructed convolutional neural network;
the encoding module is used for carrying out single-hot encoding on each original barrage sample according to a preset standard word list to generate an initial barrage sample word vector;
and the dimension reduction module is used for carrying out dimension reduction processing on the initial bullet screen sample word vector to generate the bullet screen sample word vector.
Further, the apparatus further comprises:
the removing module is used for carrying out unique hot coding on each original bullet screen sample according to a preset standard word list, removing abnormal characters in each original bullet screen sample before generating an initial bullet screen sample word vector, and updating the original bullet screen sample; and/or
And removing the bullet screen samples with the same content in the original bullet screen samples, and updating the original bullet screen samples.
Further, the training module 610 includes:
the selection unit is used for selecting a set number of bullet screen training sample pairs;
the training unit is used for sequentially obtaining a bullet screen training sample pair and inputting the bullet screen training sample pair into a pre-constructed convolutional neural network to obtain an output result of the convolutional neural network based on a bullet screen sample word vector, and adjusting weighting parameters in the pre-constructed convolutional neural network based on the output result;
and the circulating unit is used for returning to execute the operation of obtaining a bullet screen sample word vector and inputting the bullet screen sample word vector into the convolutional neural network until a preset training end condition is reached.
Further, when the weighting parameter in the pre-constructed convolutional neural network is adjusted based on the output result, the training unit is specifically configured to:
calculating a cross entropy function between the output result and the bullet screen type value;
and updating the weighting parameters of each layer in the pre-constructed convolutional neural network layer by layer in the reverse direction along the direction of the minimized cross entropy function by adopting a set weight updating algorithm.
Further, the apparatus further includes a prediction module, specifically including:
the acquisition unit is used for acquiring a bullet screen prediction sample as an input sample and inputting the bullet screen prediction sample into the bullet screen recognition model after the trained convolutional neural network is used as the bullet screen recognition model;
the prediction unit is used for marking the bullet screen prediction sample according to the prediction result of the bullet screen recognition model;
and the display unit is used for displaying the original bullet screen corresponding to the bullet screen sample according to the marking result.
The bullet screen recognition model establishing device can execute the bullet screen recognition model establishing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the bullet screen recognition model establishing method.
Example seven
Fig. 7 is a schematic hardware configuration diagram of a server according to a seventh embodiment of the present invention, where the server includes a processor 710 and a storage device 720.
One or more processors 710;
a storage device 720 for storing one or more programs.
Taking the processor 710 in fig. 7 as an example, the processor 710 and the storage device 720 in the server may be connected by a bus or in other ways; fig. 7 takes the bus connection as an example.
In this embodiment, the processor 710 in the server may train the pre-constructed convolutional neural network by using the bullet screen training sample pair, and may further use the trained convolutional neural network as a bullet screen recognition model.
The storage device 720 in the server is used as a computer-readable storage medium for storing one or more programs, which may be software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the bullet screen recognition model building method in the embodiment of the present invention (for example, the training module 610 and the model generating module 620 shown in fig. 6). The processor 710 executes various functional applications and data processing of the server by running software programs, instructions and modules stored in the storage device 720, that is, implements the bullet screen recognition model building method in the above method embodiments.
The storage 720 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data (such as the convolutional neural network, the bullet screen sample word vector, the bullet screen type value, and the bullet screen recognition model in the above embodiments). Additionally, the storage 720 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the storage 720 may further include memory located remotely from the processor 710, which may be connected to a server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a bullet screen recognition model establishing apparatus, implements a bullet screen recognition model establishing method provided in the embodiments of the present invention, where the method includes: training a pre-constructed convolutional neural network by using a bullet screen training sample pair; wherein, the bullet screen training sample pair includes: the bullet screen sample word vector and a bullet screen type value corresponding to the bullet screen sample word vector; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value; and taking the trained convolutional neural network as the bullet screen recognition model.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute the bullet screen recognition model building method according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A bullet screen recognition model building method is characterized by comprising the following steps:
training a pre-constructed convolutional neural network by using a bullet screen training sample pair;
wherein, the bullet screen training sample pair includes: the bullet screen sample word vector and a bullet screen type value corresponding to the bullet screen sample word vector; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value;
taking the trained convolutional neural network as the bullet screen recognition model;
taking the trained convolutional neural network as the bullet screen recognition model, and the method comprises the following steps:
performing model evaluation on the trained convolutional neural network, and if the model evaluation result meets the model generation condition, taking the trained convolutional neural network as the bullet screen recognition model; the model evaluation result comprises at least one of the following: the probability that a normal bullet screen is predicted as normal, the probability that an abnormal bullet screen is predicted as abnormal, the probability that a normal bullet screen is predicted as abnormal, and the probability that an abnormal bullet screen is predicted as normal.
2. The method of claim 1, wherein the convolutional neural network comprises: an input layer, a hidden layer, a fully connected layer, and an output layer connected end to end;
the hidden layer comprises at least two computing network branches;
the computing network branch includes a convolutional layer, an active layer connected to the convolutional layer, a pooling layer connected to the active layer, and a folding layer connected to the pooling layer.
3. The method of claim 1, further comprising, prior to the training of the pre-constructed convolutional neural network using bullet screen training sample pairs:
obtaining original bullet screen samples;
performing one-hot encoding on each original bullet screen sample according to a preset standard word list to generate an initial bullet screen sample word vector;
and performing dimension reduction processing on the initial bullet screen sample word vector to generate the bullet screen sample word vector.
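One possible reading of this step is sketched below, with a toy vocabulary and a random projection standing in for the dimension reduction; the claim does not fix the reduction technique, so a learned embedding would serve equally well.

```python
import numpy as np

vocab = ["主", "播", "好", "棒", "刷", "屏"]  # toy preset standard word list
index = {ch: i for i, ch in enumerate(vocab)}

def one_hot(sample: str) -> np.ndarray:
    """Initial bullet screen sample word vector: one one-hot row per character."""
    mat = np.zeros((len(sample), len(vocab)))
    for row, ch in enumerate(sample):
        if ch in index:  # characters outside the word list stay all-zero
            mat[row, index[ch]] = 1.0
    return mat

rng = np.random.default_rng(0)
proj = rng.normal(size=(len(vocab), 3))  # |vocab| -> 3 dimensions (illustrative)

def reduce_dim(mat: np.ndarray) -> np.ndarray:
    """Bullet screen sample word vector after dimension reduction."""
    return mat @ proj

print(reduce_dim(one_hot("主播好棒")).shape)  # (4, 3)
```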
4. The method of claim 3, wherein before performing the one-hot encoding on each original bullet screen sample according to the predetermined standard vocabulary to generate the initial bullet screen sample word vector, the method further comprises:
removing abnormal characters from each original bullet screen sample and updating the original bullet screen sample; and/or
removing bullet screen samples with identical content from the original bullet screen samples and updating the original bullet screen samples.
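A hedged sketch of this preprocessing: what counts as an abnormal character is an assumption here (everything outside CJK characters, letters, and digits), as the claim leaves it open.

```python
import re

ABNORMAL = re.compile(r"[^\u4e00-\u9fffA-Za-z0-9]")

def preprocess(samples: list[str]) -> list[str]:
    """Strip abnormal characters, then drop samples with identical content."""
    cleaned = (ABNORMAL.sub("", s) for s in samples)
    seen, result = set(), []
    for s in cleaned:
        if s and s not in seen:  # skip duplicates and samples emptied by cleaning
            seen.add(s)
            result.append(s)
    return result

print(preprocess(["主播好棒!!!", "主播好棒", "666~"]))  # ['主播好棒', '666']
```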
5. The method of claim 1, wherein training the pre-constructed convolutional neural network using bullet screen training sample pairs comprises:
selecting a set number of bullet screen training sample pairs;
sequentially acquiring a bullet screen training sample pair, inputting it into the pre-constructed convolutional neural network to obtain an output result of the convolutional neural network based on the bullet screen sample word vector, and adjusting the weighting parameters in the pre-constructed convolutional neural network based on the output result;
and returning to the operation of acquiring a bullet screen training sample pair and inputting it into the convolutional neural network until a preset training end condition is reached.
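A minimal sketch of this loop, assuming PyTorch, with a plain linear classifier standing in for the pre-constructed convolutional network and a fixed epoch count standing in for the preset training end condition (both assumptions):

```python
import torch

# A plain linear classifier stands in for the pre-constructed convolutional network.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 20, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# A set number of toy sample pairs: (bullet screen sample word vector, type value).
pairs = [(torch.randn(1, 64, 20), torch.tensor([0])),
         (torch.randn(1, 64, 20), torch.tensor([1]))]

for epoch in range(10):                      # preset training end condition (assumed)
    for word_vec, type_value in pairs:       # sequentially acquire each sample pair
        output = model(word_vec)             # output based on the word vector
        loss = loss_fn(output, type_value)   # compare the output with the type value
        optimizer.zero_grad()
        loss.backward()                      # propagate the error backwards
        optimizer.step()                     # adjust the weighting parameters
```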
6. The method of claim 5, wherein adjusting the weighting parameters in the pre-constructed convolutional neural network based on the output result comprises:
calculating a cross entropy function between the output result and the bullet screen type value;
and updating the weighting parameters of each layer in the pre-constructed convolutional neural network layer by layer in the reverse direction along the direction of the minimized cross entropy function by adopting a set weight updating algorithm.
7. The method according to any one of claims 1-6, further comprising, after taking the trained convolutional neural network as the bullet screen recognition model:
acquiring a bullet screen prediction sample as an input sample and inputting it into the bullet screen recognition model;
marking the bullet screen prediction sample according to the prediction result of the bullet screen recognition model;
and displaying the original bullet screen corresponding to the bullet screen prediction sample according to the marking result.
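A sketch of this prediction-and-display flow, assuming label 0 marks a normal bullet screen and that only bullet screens marked normal are displayed; the claim itself leaves the display policy open.

```python
import torch

def mark_and_display(model: torch.nn.Module, original: str,
                     word_vec: torch.Tensor) -> int:
    """Mark a bullet screen prediction sample; display the original if normal."""
    with torch.no_grad():
        mark = model(word_vec).argmax(dim=1).item()  # 0 = normal, 1 = abnormal
    if mark == 0:
        print(original)  # display the original bullet screen
    return mark          # abnormal bullet screens stay marked and withheld

# Toy stand-in model, shaped like the sketches above.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 20, 2))
mark_and_display(model, "主播好棒", torch.randn(1, 64, 20))
```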
8. A bullet screen recognition model establishing apparatus, characterized by comprising:
a training module, configured to train a pre-constructed convolutional neural network by using bullet screen training sample pairs;
wherein the bullet screen training sample pair includes: at least two bullet screen sample word vectors and bullet screen type values corresponding to the bullet screen sample word vectors; the bullet screen type value comprises a normal bullet screen output value and an abnormal bullet screen output value;
and a model generation module, configured to take the trained convolutional neural network as the bullet screen recognition model;
the model generation module is specifically configured to:
perform model evaluation on the trained convolutional neural network, and if the model evaluation result meets a model generation condition, take the trained convolutional neural network as the bullet screen recognition model; wherein the model evaluation result comprises at least one of the following: the probability that a normal bullet screen is predicted as a normal bullet screen, the probability that an abnormal bullet screen is predicted as an abnormal bullet screen, the probability that a normal bullet screen is predicted as an abnormal bullet screen, and the probability that an abnormal bullet screen is predicted as a normal bullet screen.
9. A server, comprising:
one or more processors;
storage means for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the bullet screen recognition model establishing method according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the bullet screen recognition model establishing method according to any one of claims 1-7.
CN201811052795.5A 2018-09-10 2018-09-10 Bullet screen recognition model establishing method, device, server and medium Active CN109189889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811052795.5A CN109189889B (en) 2018-09-10 2018-09-10 Bullet screen recognition model establishing method, device, server and medium

Publications (2)

Publication Number Publication Date
CN109189889A CN109189889A (en) 2019-01-11
CN109189889B true CN109189889B (en) 2021-03-12

Family

ID=64915758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811052795.5A Active CN109189889B (en) 2018-09-10 2018-09-10 Bullet screen recognition model establishing method, device, server and medium

Country Status (1)

Country Link
CN (1) CN109189889B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110166802B (en) * 2019-05-06 2022-11-01 腾讯科技(深圳)有限公司 Bullet screen processing method and device and storage medium
CN111225227A (en) * 2020-01-03 2020-06-02 网易(杭州)网络有限公司 Bullet screen publishing method, bullet screen model generating method and bullet screen publishing device
CN111930943B (en) * 2020-08-12 2022-09-02 中国科学技术大学 Method and device for detecting pivot bullet screen
CN112767106B (en) * 2021-01-14 2023-11-07 中国科学院上海高等研究院 Automatic auditing method, system, computer readable storage medium and auditing equipment
CN115550672B (en) * 2021-12-30 2023-11-03 北京国瑞数智技术有限公司 Bullet screen burst behavior identification method and system in network live broadcast environment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105357586A (en) * 2015-09-28 2016-02-24 北京奇艺世纪科技有限公司 Video bullet screen filtering method and device
CN106028072A (en) * 2016-05-16 2016-10-12 武汉斗鱼网络科技有限公司 Method and device for controlling bullet screen in live room
CN107277643A (en) * 2017-07-31 2017-10-20 合网络技术(北京)有限公司 The sending method and client of barrage content
CN107396144A (en) * 2017-06-30 2017-11-24 武汉斗鱼网络科技有限公司 A kind of barrage distribution method and device
CN107480123A (en) * 2017-06-28 2017-12-15 武汉斗鱼网络科技有限公司 A kind of recognition methods, device and the computer equipment of rubbish barrage
CN107592578A (en) * 2017-09-22 2018-01-16 广东欧珀移动通信有限公司 Information processing method, device, terminal device and storage medium
CN107608964A (en) * 2017-09-13 2018-01-19 上海六界信息技术有限公司 Screening technique, device, equipment and the storage medium of live content based on barrage
CN107613392A (en) * 2017-09-22 2018-01-19 广东欧珀移动通信有限公司 Information processing method, device, terminal device and storage medium
CN107645686A (en) * 2017-09-22 2018-01-30 广东欧珀移动通信有限公司 Information processing method, device, terminal device and storage medium
CN108513175A (en) * 2018-03-29 2018-09-07 网宿科技股份有限公司 A kind of processing method and system of barrage information
CN110198453A (en) * 2019-05-23 2019-09-03 武汉瓯越网视有限公司 Live content filter method, storage medium, equipment and system based on barrage
CN111225227A (en) * 2020-01-03 2020-06-02 网易(杭州)网络有限公司 Bullet screen publishing method, bullet screen model generating method and bullet screen publishing device


Similar Documents

Publication Publication Date Title
CN109189889B (en) Bullet screen recognition model establishing method, device, server and medium
CN110457589B (en) Vehicle recommendation method, device, equipment and storage medium
CN109344884B (en) Media information classification method, method and device for training picture classification model
CN110033023B (en) Image data processing method and system based on picture book recognition
US20210019599A1 (en) Adaptive neural architecture search
CN111931062A (en) Training method and related device of information recommendation model
CN111026971A (en) Content pushing method and device and computer storage medium
CN113469289B (en) Video self-supervision characterization learning method and device, computer equipment and medium
CN108509827B (en) Method for identifying abnormal content in video stream and video stream processing system and method
CN113220886A (en) Text classification method, text classification model training method and related equipment
CN111488985A (en) Deep neural network model compression training method, device, equipment and medium
CN114780831A (en) Sequence recommendation method and system based on Transformer
CN110264407B (en) Image super-resolution model training and reconstruction method, device, equipment and storage medium
CN113127737B (en) Personalized search method and search system integrating attention mechanism
CN112347787A (en) Method, device and equipment for classifying aspect level emotion and readable storage medium
CN111639230B (en) Similar video screening method, device, equipment and storage medium
CN114283350A (en) Visual model training and video processing method, device, equipment and storage medium
CN110956038A (en) Repeated image-text content judgment method and device
CN115618101A (en) Streaming media content recommendation method and device based on negative feedback and electronic equipment
CN117216281A (en) Knowledge graph-based user interest diffusion recommendation method and system
CN113244627B (en) Method and device for identifying plug-in, electronic equipment and storage medium
CN113792659A (en) Document identification method and device and electronic equipment
CN113971644A (en) Image identification method and device based on data enhancement strategy selection
CN113011532A (en) Classification model training method and device, computing equipment and storage medium
CN108665455B (en) Method and device for evaluating image significance prediction result

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant