CN110213660B - Program distribution method, system, computer device and storage medium - Google Patents

Program distribution method, system, computer device and storage medium Download PDF

Info

Publication number
CN110213660B
CN110213660B (application CN201910448059.XA)
Authority
CN
China
Prior art keywords
program
vector
programs
anchor
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910448059.XA
Other languages
Chinese (zh)
Other versions
CN110213660A (en)
Inventor
黄坚毅
丁宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Lizhi Network Technology Co ltd
Original Assignee
Guangzhou Lizhi Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Lizhi Network Technology Co ltd filed Critical Guangzhou Lizhi Network Technology Co ltd
Priority to CN201910448059.XA priority Critical patent/CN110213660B/en
Publication of CN110213660A publication Critical patent/CN110213660A/en
Application granted granted Critical
Publication of CN110213660B publication Critical patent/CN110213660B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/4532: Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N 21/466: Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N 21/4662: Learning process characterized by learning algorithms
    • H04N 21/4666: Learning process characterized by learning algorithms using neural networks, e.g. processing the feedback provided by the user
    • H04N 21/4668: Learning process for recommending content, e.g. movies

Abstract

The invention relates to a program distribution method, system, computer device and storage medium. The method comprises: obtaining a new program to be distributed and the program vector of a user preference program; inputting the new program into a pre-established program vector model to compute its program vector, the program vector model being obtained by training on historical programs with a neural network algorithm; computing the cosine similarity between the new program and the user preference program from their program vectors; and, when the cosine similarity is greater than a preset threshold, distributing the new program to the user corresponding to the user preference program. By finding the program most similar to the new program and recommending or distributing on that basis, the method completes distribution of new programs quickly and with high accuracy.

Description

Program distribution method, system, computer device and storage medium
Technical Field
The present invention relates to the field of network technologies, and in particular, to a method, a system, a computer device, and a storage medium for distributing programs.
Background
With the continuous development of internet technology, audio and video platforms (such as Lizhi and YY Live) have appeared one after another and grown rapidly. As these platforms expand, a large number of programs are uploaded every minute for platform users to listen to, and distributing those programs is a complicated task. For example, Lizhi is a UGC platform to which hundreds of thousands of programs are uploaded every day; recommending these programs to users quickly is a major challenge.
Currently, collaborative filtering or tag-based recommendation methods are commonly employed. However, collaborative filtering relies on a large amount of user data, which a newly uploaded program has not yet had time to accumulate; this is a significant bottleneck. Tag-based recommendation of new programs, in turn, suffers from uneven tag quality and rarely achieves good results.
Disclosure of Invention
Based on this, it is necessary to provide a program distribution method, system, computer device and storage medium that address the problem that current distribution methods either rely on large amounts of user data or recommend programs of uneven quality.
A method of distributing a program, the method comprising the steps of:
acquiring vectors of a new program to be distributed and a user preference program;
inputting the new program into a program vector model established in advance, and calculating a vector of the new program; the program vector model is obtained by learning and training historical programs based on a neural network algorithm;
calculating the cosine similarity of the new program and the user preference program according to the vector of the new program and the vector of the user preference program;
and when the cosine similarity is greater than a preset threshold value, distributing the new program to be distributed to the user corresponding to the user preference program.
In one embodiment, establishing the program vector model in advance includes:
acquiring the historical program from a program database;
selecting a training sample according to the click rate of the historical program;
and extracting an anchor vector, a label vector and a quality vector from the training sample, and inputting the anchor vector, the label vector and the quality vector into the neural network model for learning and training to obtain the program vector model.
In one embodiment, the step of inputting the anchor vector, the tag vector, and the quality vector into the neural network model for learning and training to obtain the program vector model includes:
inputting the anchor vector, the label vector and the quality vector of any two programs in the training sample into a hidden layer of the neural network model respectively to generate vectors of the two programs respectively;
calculating a Hadamard product of vectors of the two programs;
subtracting the vectors of the two programs to obtain a difference value;
calculating the click rate of the two programs according to the quality vector;
inputting the Hadamard product, the difference value and the click rate into a sigmoid layer of the neural network model, and calculating the similarity of two programs;
sequentially calculating the similarity of every two programs in the training sample;
calculating a loss value according to the similarity and the loss function;
and when the loss value is smaller than a preset value, obtaining the program vector model.
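For illustration only, the loss computation in the last two steps can be sketched with binary cross entropy over the predicted pairwise similarities (the description later names cross entropy as the loss function used). The labels and predictions below are hypothetical:

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy loss between 0/1 similarity labels and the
    predicted pairwise similarities in (0, 1)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical safety
        total += -(t * math.log(p) + (1 - t) * math.log(1.0 - p))
    return total / len(y_true)

# Hypothetical labels (1 = similar pair) and model-predicted similarities.
loss = binary_cross_entropy([1, 0, 1], [0.9, 0.2, 0.8])
```

Training stops once this loss falls below the preset value.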
In one embodiment, before the step of inputting the anchor vector, the tag vector, and the quality vector into the neural network model for learning training to obtain the program vector model, the method includes:
training by adopting a word2vec algorithm according to the preference degree of the user to the anchor in the historical program to obtain an anchor vector model;
training by adopting a word2vec algorithm according to the label system of the historical program to obtain a label vector model;
and training by adopting a one-hot bucket algorithm according to the quality system of the historical program to obtain a quality vector model.
In one embodiment, the step of training the anchor vector by using a word2vec algorithm to obtain an anchor vector model includes:
constructing a user anchor scoring matrix;
analyzing the user anchor scoring matrix by adopting a roulette algorithm, and selecting a preset number of anchors for each user according to the analysis result to form an anchor matrix;
and training the anchor matrix by adopting a word2vec algorithm to obtain an anchor vector model.
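A minimal sketch of the roulette-wheel selection above, in Python; the function name, the example scores and the fixed seed are illustrative assumptions rather than part of the claimed method:

```python
import random

def roulette_select(anchor_scores, k, seed=0):
    """Pick k anchors for one user by roulette-wheel (fitness-proportionate)
    sampling: each anchor's chance of being drawn is proportional to the
    user's score for that anchor."""
    rng = random.Random(seed)
    anchors = list(anchor_scores)
    total = sum(anchor_scores.values())
    picks = []
    for _ in range(k):
        spin = rng.uniform(0, total)   # where the wheel pointer stops
        cumulative = 0.0
        for anchor in anchors:
            cumulative += anchor_scores[anchor]
            if spin <= cumulative:
                picks.append(anchor)
                break
    return picks

# One row of a hypothetical user-anchor scoring matrix.
scores = {"anchor_a": 8.0, "anchor_b": 1.5, "anchor_c": 0.5}
picks = roulette_select(scores, k=5)
```

Repeating this per user, each with its own score row, yields the anchor matrix that word2vec is then trained on.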
In one embodiment, the quality system includes play count, click rate, play completion rate and release time, and the step of obtaining the quality vector model by one-hot bucket training according to the quality system of the historical programs includes:
selecting the play count, click rate, play completion rate and release time of the historical programs, and bucketing them with a one-hot bucket algorithm to obtain the quality vector model.
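The one-hot bucketing step may be sketched as follows; the bucket boundaries below are illustrative assumptions, not values from the patent:

```python
import bisect

def one_hot_bucket(value, boundaries):
    """Map a numeric quality feature to a one-hot vector over the
    len(boundaries) + 1 buckets defined by the boundary values."""
    index = bisect.bisect_right(boundaries, value)  # which bucket value falls in
    vec = [0] * (len(boundaries) + 1)
    vec[index] = 1
    return vec

# Hypothetical bucket boundaries for play count and play completion rate.
play_count_vec = one_hot_bucket(1200, [10, 100, 1000, 10000])
completion_vec = one_hot_bucket(0.85, [0.25, 0.5, 0.75])
quality_vec = play_count_vec + completion_vec  # concatenated quality vector
```

Each quality indicator contributes one one-hot segment; concatenating the segments gives the program's quality vector.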
In one embodiment, the label system includes keyword labels; the step of training with a word2vec algorithm according to the co-occurrence relations of the historical programs to obtain a label vector model includes:
extracting all keywords of the historical program and labels corresponding to the keywords;
and training the label corresponding to the keyword by adopting a word2vec algorithm to obtain a keyword label model.
A distribution system of programs, comprising:
the information acquisition module is used for acquiring program vectors of new programs to be distributed and user preference programs;
the vector calculation module is used for inputting the new program into a pre-established program vector model to calculate a program vector of the new program; the program vector model is obtained by learning and training historical programs based on a neural network algorithm;
the cosine similarity calculation module is used for calculating the cosine similarity between the new program and the user preference program according to the program vector of the new program and the program vector of the user preference program;
and the program distribution module is used for distributing the new program to be distributed to the user corresponding to the user preference program when the cosine similarity is greater than a preset threshold value.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of:
acquiring program vectors of a new program to be distributed and a user preference program;
inputting the new program into a pre-established program vector model, and calculating a program vector of the new program; the program vector model is obtained by learning and training historical programs based on a neural network algorithm;
calculating the cosine similarity of the new program and the user preference program according to the program vector of the new program and the program vector of the user preference program;
and when the cosine similarity is greater than a preset threshold value, distributing the new program to be distributed to the user corresponding to the user preference program.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of:
acquiring program vectors of a new program to be distributed and a user preference program;
inputting the new program into a pre-established program vector model, and calculating a program vector of the new program; the program vector model is obtained by learning and training historical programs based on a neural network algorithm;
calculating the cosine similarity of the new program and the user preference program according to the program vector of the new program and the program vector of the user preference program;
and when the cosine similarity is greater than a preset threshold value, distributing the new program to be distributed to the user corresponding to the user preference program.
According to the program distribution method, system, computer device and storage medium above, the program vectors of the program to be distributed and of the user preference program are first obtained; the new program to be distributed is input into the pre-established program vector model to compute its program vector; the cosine similarity between the new program and the user preference program is computed from their program vectors; and the cosine similarity is compared with a preset threshold. When the cosine similarity is greater than the preset threshold, the new program is similar to the user preference program, so the new program is distributed to the user corresponding to the user preference program. By computing the program vector of the new program, determining its similarity to the user preference programs from the two program vectors, finding the program most similar to the new program, and recommending or distributing accordingly, the method completes distribution of new programs quickly and with high accuracy.
Drawings
Fig. 1 is a schematic structural diagram of a program distribution method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a structure of registration information of a registration apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a program distribution method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a program distribution method according to an embodiment of the present invention;
fig. 5 is a schematic flow chart of a program distribution method according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
The present invention is described in further detail below with reference to preferred embodiments and the accompanying drawings. It is to be understood that the following examples are illustrative only and are not intended to limit the present invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention. It should be noted that, for convenience of description, the drawings show only the parts relevant to the present invention rather than the entirety.
It should be noted that the terms "first", "second" and "third" in the embodiments of the present invention are used merely to distinguish similar objects and do not imply any particular ordering; where permissible, "first", "second" and "third" may be interchanged, so that the embodiments described herein can be practiced in sequences other than those illustrated or described.
The terms "comprises" and "comprising," and any variations thereof, of embodiments of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, article, or apparatus that comprises a list of steps or (module) elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The program distribution method is applied to a terminal, which may be a personal computer, a notebook computer, or the like. The terminal runs a corresponding application into which a new program to be distributed can be input to complete its distribution quickly.
In one embodiment, as shown in fig. 1, a method for distributing a program is provided, which is described by taking an example that the method is applied to a terminal, and includes the following steps:
step S102, acquiring program vectors of new programs to be distributed and user preference programs;
the new program to be distributed refers to a program which is not recommended or distributed to the user just after being received by the platform (or just uploaded to the platform); vectors refer in mathematics (also known as euclidean vectors, geometric vectors, vectors) to quantities having a size and a direction; the program vector is a quantity having a size and a direction to represent the program, and can be used to represent the characteristics of the program. The program vector generally comprises a anchor vector, a tag vector and a quality vector, wherein the anchor vector is calculated according to the co-occurrence behavior of an anchor; the label vector is calculated according to the label corresponding to the program; the quality vector is calculated according to the program playing number, the click rate, the playing completion rate and the release time; the user preferred program generally refers to a history program obviously preferred or liked by the user, and a program vector of the user preferred program is calculated in advance and can be stored in a terminal or a database.
Step S104, inputting a new program into a program vector model established in advance, and calculating a vector of the new program; the program vector model is obtained by learning and training historical programs based on a neural network algorithm;
the pre-established program vector model is obtained by learning and training historical programs by adopting a neural network algorithm; the historical programs refer to programs that the platform receives and distributes to in the past, and may include all programs that have completed distribution.
Step S106, calculating cosine similarity of the new program and the user preference program according to the program vector of the new program and the program vector of the user preference program;
and step S108, when the cosine similarity is larger than a preset threshold, distributing the new program to be distributed to the user corresponding to the program preferred by the user.
Specifically, cosine similarity evaluates the similarity of two vectors by computing the cosine of the angle between them: the smaller the angle, the closer the cosine is to 1, the more nearly the directions coincide, and hence the more similar the vectors. In this embodiment, the cosine similarity between the new program and the user preference program is computed from their program vectors to determine how similar the two programs are; the user preference programs similar to the new program are found, and the new program is then pushed to the users corresponding to those programs.
In one alternative embodiment, the cosine similarity is calculated by the following formula:

$$\mathrm{similarity} = \cos\theta = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^{2}}\,\sqrt{\sum_{i=1}^{n} B_i^{2}}}$$

where similarity denotes the cosine similarity, $A_i$ the components of the program vector of the new program, and $B_i$ the components of the program vector of the user preference program.
In addition, the preset threshold is a value greater than 0 and less than 1 in this embodiment, determined according to actual requirements; the larger the preset threshold, the more similar the selected new program must be to the user preference program.
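For illustration, the cosine-similarity comparison and threshold test can be sketched as follows; the threshold of 0.8 is an assumed example, not a value fixed by the patent:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two program vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def should_distribute(new_vec, preferred_vec, threshold=0.8):
    """Distribute the new program to the preferred program's user only when
    the similarity exceeds the preset threshold."""
    return cosine_similarity(new_vec, preferred_vec) > threshold

# Parallel vectors point the same way, so their cosine similarity is 1.
identical = cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

In practice the new program's vector would be compared against each stored user preference program vector in turn.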
The program distribution method first obtains the program vectors of the program to be distributed and of the user preference program, then inputs the new program into the pre-established program vector model to compute its program vector, computes the cosine similarity between the new program and the user preference program from their program vectors, and compares the cosine similarity with the preset threshold. A cosine similarity greater than the preset threshold indicates that the new program is similar to the user preference program, so the new program is distributed to the user corresponding to the user preference program. By computing the new program's vector, determining similarity through the two program vectors, finding the program most similar to the new program, and recommending or distributing accordingly, the method completes distribution of new programs quickly and with high accuracy.
In one embodiment, as shown in fig. 2, establishing the program vector model in advance includes:
step S202, obtaining historical programs from a program database;
step S204, selecting a training sample according to the click rate of the historical program;
and step S206, extracting the anchor vector, the label vector and the quality vector from the training sample, and inputting the anchor vector, the label vector and the quality vector into the neural network model for learning training to obtain a program vector model.
Specifically, training samples are selected according to the click rate of the historical programs: samples with a click rate greater than 0 are taken as positive samples, and samples with a click rate of 0 as negative samples. To keep the numbers of positive and negative samples roughly equal, the selected samples are resampled. The click rate is the number of clicks divided by the number of exposures; for example, 1 exposure and 1 click give a click rate of 1/1. For data accuracy, following Laplace smoothing, a constant is added to the denominator, for example 5 (adjustable to actual conditions), so that only programs with at least about 5 exposures carry statistical weight: 1/(1+5) is about 0.1667. About 5,000,000 positive samples are selected; analysis then shows that samples with more than 5 exposures and a click rate of 0 also number about 5,000,000, and these are taken as negative samples. Positive and negative samples together form the training set. Accurate sample selection makes the trained program vector model more accurate and reduces errors.
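The sample-selection rule above can be sketched as follows; the smoothing constant of 5 matches the example in the text, while the function names are illustrative:

```python
def smoothed_click_rate(clicks, exposures, k=5):
    """Click rate with a Laplace-style constant k added to the denominator,
    so programs with very few exposures are discounted."""
    return clicks / (exposures + k)

def label_sample(clicks, exposures, min_exposures=5):
    """Positive if ever clicked; negative only when well exposed but never
    clicked; otherwise there is too little evidence either way."""
    if clicks > 0:
        return "positive"
    if exposures > min_exposures:
        return "negative"
    return "discard"

rate = smoothed_click_rate(1, 1)  # 1 click, 1 exposure -> 1 / (1 + 5)
```
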
Program vectors typically include an anchor vector, a label vector and a quality vector; therefore, when training the model with the training samples, the anchor vector, label vector and quality vector are first extracted from the historical programs and then input into the neural network model for training, yielding the program vector model.
In one embodiment, the step of inputting the anchor vector, the tag vector and the quality vector into the neural network model for learning and training to obtain the program vector model includes:
respectively inputting the anchor vector, the label vector and the quality vector of any two programs in the training sample into a hidden layer of a neural network model to respectively generate vectors of the two programs;
calculating a Hadamard product of vectors of the two programs;
subtracting the vectors of the two programs to obtain a difference value;
calculating the click rate of the two programs according to the quality vector;
inputting the Hadamard product, the difference value and the click rate into a sigmoid layer of a neural network model, and calculating the similarity of the two programs;
sequentially calculating the similarity of every two programs in the training sample;
calculating a loss value according to the similarity and the loss function;
and when the loss value is smaller than the preset value, obtaining a program vector model.
Specifically, during model training: (1) the anchor vectors, label vectors and quality vectors of any two programs in the training sample are fed into a shared neural network with N hidden layers. In an optional embodiment, trading off complexity against effect, the network uses three layers, with ReLU in the middle hidden layer to prevent overfitting; passing through the network yields a vector for each of the two programs. (2) The Hadamard product (multiplication of corresponding terms) of the two program vectors is computed, the two vectors are subtracted, and the results are then merged. The multiplication and subtraction measure how far apart the two program vectors are; merging combines the two difference signals to better characterize how different the two programs are. (3) The click rates of the two programs are computed. (4) The results of (2) and (3) are connected together and passed through a sigmoid layer to predict the similarity between the two programs, usually a number between 0 and 1. In this embodiment, TensorFlow is used as the main computation framework, with cross entropy as the loss function. Finally, the loss value over the whole training process is computed from the similarities and the loss function; when the loss value is smaller than a preset value, model training is complete. The preset value is not unique; it may be any value within a certain range, usually determined by the practical requirements of model training.
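A simplified sketch of this shared ("siamese") network in NumPy follows. The weights are random constants rather than learned values, and the click-rate input to the sigmoid layer is omitted for brevity, so this only illustrates the data flow: hidden layers with ReLU, Hadamard product, difference, merge, sigmoid output.

```python
import numpy as np

def shared_tower(x, weights):
    """Shared hidden layers with ReLU, mapping the concatenated anchor/label/
    quality features of one program to its program vector."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)   # hidden layer with ReLU
    return x

def pair_similarity(feat_a, feat_b, tower_weights, head_w):
    va = shared_tower(feat_a, tower_weights)   # vector of program A
    vb = shared_tower(feat_b, tower_weights)   # vector of program B
    hadamard = va * vb                         # multiplication of corresponding terms
    diff = va - vb                             # element-wise difference
    merged = np.concatenate([hadamard, diff])  # merge both difference signals
    logit = merged @ head_w
    return 1.0 / (1.0 + np.exp(-logit))        # sigmoid: similarity in (0, 1)

rng = np.random.default_rng(0)
tower = [0.1 * rng.standard_normal((6, 4)), 0.1 * rng.standard_normal((4, 3))]
head = 0.1 * rng.standard_normal(6)            # hadamard (3) + diff (3)
sim = pair_similarity(rng.standard_normal(6), rng.standard_normal(6), tower, head)
```

In the patent's setup the sigmoid output would then be scored against the 0/1 sample labels with cross entropy.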
In one embodiment, as shown in fig. 3, before the step of inputting the anchor vector, the tag vector, and the quality vector into the neural network model for learning and training to obtain the program vector model, the method includes:
step S302, training by adopting a word2vec algorithm according to the preference degree of a user to the anchor in the historical program to obtain an anchor vector model;
step S304, training by adopting a word2vec algorithm according to a label system of a historical program to obtain a label vector model;
and step S306, training by adopting a one-hot bucket algorithm according to the quality system of the historical program to obtain a quality vector model.
In this embodiment, the anchor vector, the tag vector and the quality vector are trained in advance, so that random initialization is not required when a program vector model is obtained, and the network training speed is greatly increased.
Specifically, the anchor vector model is obtained by training anchor vectors, typically based on the relationship between an anchor and the historical programs previously played, that is, on users' preference for the anchor in the historical programs (usually captured by a scoring system). The label vector model is obtained by training with the word2vec method on the label system of the historical programs; the quality vector model is obtained by training with the one-hot bucket algorithm on the quality system of the historical programs. The label system is the set of all label classifications of a program; the quality system is the combination of all quality parameters of a program.
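To make the word2vec training input concrete: each user's selected anchors can be written out as a "sentence", so that anchors co-preferred by the same user co-occur for the algorithm. The matrix below is a made-up example, not data from the patent:

```python
def anchor_sentences(user_anchor_matrix):
    """Turn each user's preferred-anchor list into one word2vec 'sentence';
    singleton lists are dropped since they carry no co-occurrence signal."""
    return [anchors for anchors in user_anchor_matrix.values() if len(anchors) > 1]

matrix = {
    "user1": ["anchor_a", "anchor_b", "anchor_c"],
    "user2": ["anchor_b", "anchor_c"],
    "user3": ["anchor_d"],  # singleton: no co-occurrence signal
}
sentences = anchor_sentences(matrix)
```

The resulting sentences can be fed to any word2vec implementation (e.g. gensim) to obtain the anchor vector model.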
In addition, an anchor may publish different programs, and each program has a label system and a quality system (as shown in fig. 4), including first-level labels and scene labels. A first-level label typically marks the program category (classification), while a scene label marks the gender, time/scene, mood and education stage a program is suitable for. A first-level label contains second-level labels, which are obtained by further refining the first-level label; the second-level labels include entity labels and keyword labels. The main indicators used by the quality system to evaluate program quality include the number of plays, the click rate, the play-completion rate, the release time, and the like.
For ease of understanding, a detailed embodiment is given. As shown in table 1, the secondary labels are sub-classifications under the primary classification; they are kept as coarse-grained and as close to a complete orthogonal system as possible, so that any program can be classified into a primary and a secondary classification. Table 1 shows the primary and secondary label system of the Lizhi programs.
TABLE 1 Primary and Secondary Label systems
(The contents of Table 1 are reproduced as images in the original publication and are not recoverable here.)
Entity: names of people, places, works, songs, etc.
Keywords: complete and explicit in meaning; they describe a category of programs rather than an individual one, i.e., the subject of the program or the effect the program intends to have on listeners.
Scene labels: suitable gender, suitable time/scene, suitable mood, suitable education stage, etc.
In one embodiment, the step of training the anchor vector by adopting the word2vec algorithm to obtain the anchor vector model includes:
constructing a user anchor scoring matrix;
analyzing the user anchor scoring matrix by adopting a roulette algorithm, and selecting a preset number of anchors for each user according to the analysis result to form an anchor matrix;
and training the anchor matrix by adopting a word2vec algorithm to obtain an anchor vector model.
The specific process of training the anchor vector model is as follows: (1) First, a scoring matrix of users and anchors is constructed. The user's preference for an anchor is characterized mainly by actions such as playing, liking and following the anchor, and the preference value Score is normalized to 0-1. For example, in the following format:
User1,NJ1,Score1
User1,NJ2,Score2
User2,NJ3,Score3
(2) Then, according to the user id, the list of anchors preferred by each user is aggregated and converted into the following format:
User1 NJ1:Score11,NJ2:Score12,NJ3:Score13………
User2 NJ2:Score21,NJ2:Score22,NJ4:Score24………
(3) Next, the roulette-wheel algorithm is applied. Suppose the anchors NJ1, NJ2, ..., NJn have scores Score1, Score2, ..., Scoren; then the probability Pi that NJi is drawn is:

Pi = Scorei / (Score1 + Score2 + ... + Scoren)
Then, taking both accuracy and computational performance into account, 10 anchors are drawn for each user from the user-anchor scoring matrix using the random roulette-wheel algorithm, weighted by the user's preference for each anchor (the greater the preference, the higher the probability of being drawn), and samples are constructed in the following format:
User1 NJ1,NJ2,NJ1,NJ3
User2 NJ2,NJ3,NJ2,NJ4……
(4) The leading UserId is removed and the separator is changed to a blank space, converting the data into the following format:
NJ1 NJ2 NJ1 NJ3……
NJ2 NJ3 NJ2 NJ4……
(5) We input the above constructed samples into word2vec, obtaining the anchor vector model. When the program is input into the anchor vector model, the anchor vector of the program can be obtained.
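Steps (3)-(5) can be sketched with Python's standard library. The anchor ids and scores below are illustrative, and `random.choices` performs the score-weighted (roulette-wheel) draw; the resulting lines would then be fed to word2vec as sentences.

```python
import random

def roulette_draw(anchor_scores, k, rng=random):
    """Draw k anchors for one user with probability proportional to score
    (the roulette-wheel step). `anchor_scores` maps anchor id -> score."""
    anchors = list(anchor_scores)
    weights = [anchor_scores[a] for a in anchors]
    return rng.choices(anchors, weights=weights, k=k)

# toy user-anchor scoring matrix; ids and scores are illustrative
matrix = {
    "User1": {"NJ1": 0.9, "NJ2": 0.3, "NJ3": 0.1},
    "User2": {"NJ2": 0.7, "NJ3": 0.5, "NJ4": 0.2},
}
# one space-separated training line per user, with the UserId dropped
corpus = [" ".join(roulette_draw(scores, k=10)) for scores in matrix.values()]
```

Each line of `corpus` corresponds to one row of the format shown in step (4).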
Word2vec is a group of related models used to generate word vectors. These models are shallow, two-layer neural networks trained to reconstruct the linguistic contexts of words: given a word, the network guesses the words in adjacent positions, and under the bag-of-words assumption used in word2vec the order of those words is unimportant. After training, the word2vec model maps each word to a vector that can be used to represent word-to-word relationships; this vector is the hidden layer of the neural network. That is, given a corpus, word2vec can quickly and effectively express a word in vector form through an optimized training model.
In one embodiment, the step of obtaining the quality vector model by using one-hot bucket algorithm training according to the quality system in the historical program includes:
and selecting the playing number, the click rate, the playing completion rate and the release time of the historical program, and performing barrel division by adopting a one-hot barrel division algorithm to obtain a quality vector model.
Specifically, one-hot encoding is a process of converting categorical variables into a form readily used by machine learning algorithms. In this embodiment, the number of plays, the click rate, the play-completion rate, and the release time of the historical programs are converted into vectors composed of 0s and 1s.
Taking the number of plays as an example, the play counts of programs are sorted from smallest to largest, the number of programs at each play count is calculated, and the play counts are divided into 5 equal parts at the 20%, 40%, 60% and 80% quantiles. Suppose these four quantiles are 200, 4000, 60000 and 100000: then 20% of programs have fewer than 200 plays, 40% have fewer than 4000, 60% have fewer than 60000, and 80% have fewer than 100000. If a program has 321 plays, it is greater than the 20% quantile and smaller than the 40% quantile, so the play-count vector of the program is [0, 1, 0, 0, 0]. A quantile (also called a quantile point) is a value that divides a random variable, arranged from smallest to largest, into equal-sized parts; the specific quantiles can be chosen according to actual requirements.
In addition, the click rate, the play-completion rate, and the release time are bucketed in the same way.
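The bucketing described above can be sketched as follows. The boundary convention (a value exactly equal to a quantile falls into the lower bucket) is an assumption; the patent does not specify boundary handling.

```python
import bisect

def one_hot_bucket(value, quantiles):
    """One-hot bucketing: `quantiles` are the sorted 20/40/60/80% cut
    points, producing len(quantiles) + 1 buckets."""
    vec = [0] * (len(quantiles) + 1)
    vec[bisect.bisect_left(quantiles, value)] = 1   # find the bucket index
    return vec

# the four illustrative quantiles from the play-count example
play_count_vec = one_hot_bucket(321, [200, 4000, 60000, 100000])
```

For 321 plays this yields [0, 1, 0, 0, 0], matching the example in the text.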
In one embodiment, the tag hierarchy includes a keyword tag; in the step of training by adopting a word2vec algorithm according to the co-occurrence relation of the historical programs to obtain a label vector model, the method comprises the following steps:
extracting all keywords of the historical program and labels corresponding to the keywords;
and training the label corresponding to the keyword by adopting a word2vec algorithm to obtain a keyword label model.
Specifically, in the keyword training process: (1) the keywords of all programs of each anchor are gathered together without de-duplication, and the label sequence corresponding to each anchor is then randomly shuffled. For example, given the anchor-program correspondence:
NJ1,Audio1
NJ1,Audio2
NJ1,Audio3
NJ1,Audio4
NJ2,Audio5
………
and the labels corresponding to the programs:
Audio1:Tag1,Tag2
Audio2:Tag2
Audio3:Tag3
Audio4:Tag5
Audio5:Tag6
………
then the labels corresponding to each anchor are:
NJ1:Tag1,Tag2,Tag2,Tag3
NJ2:Tag5,Tag6
………
(2) The leading NJ is removed and each tag sequence is randomly shuffled:
Tag1,Tag2,Tag2,Tag3
Tag5,Tag6
………
(3) We input the above constructed samples into word2vec, obtaining the keyword vector model. When the program is input into the keyword vector model, the keyword vector of the program can be obtained.
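Steps (1)-(2) above can be sketched as follows. The data mirrors the example in the text; function and variable names are illustrative, and each shuffled tag list would serve as one word2vec "sentence".

```python
import random
from collections import defaultdict

def build_tag_corpus(anchor_programs, program_tags, rng=random):
    """Pool each anchor's program tags without de-duplication,
    shuffle each pool, and drop the anchor id."""
    pooled = defaultdict(list)
    for anchor, program in anchor_programs:
        pooled[anchor].extend(program_tags.get(program, []))
    corpus = []
    for tags in pooled.values():
        tags = list(tags)
        rng.shuffle(tags)      # random scrambling within each anchor
        corpus.append(tags)    # one word2vec "sentence" per anchor
    return corpus

# illustrative data mirroring the example above
anchor_programs = [("NJ1", "Audio1"), ("NJ1", "Audio2"), ("NJ1", "Audio3"),
                   ("NJ1", "Audio4"), ("NJ2", "Audio5")]
program_tags = {"Audio1": ["Tag1", "Tag2"], "Audio2": ["Tag2"],
                "Audio3": ["Tag3"], "Audio4": ["Tag5"], "Audio5": ["Tag6"]}
corpus = build_tag_corpus(anchor_programs, program_tags)
```

Note that duplicates such as Tag2 are deliberately kept, since co-occurrence frequency is the signal word2vec learns from.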
In one embodiment, the label system includes first-level and second-level labels; the step of training with the word2vec algorithm according to the co-occurrence relation of the historical programs to obtain a label vector model includes training a first-level or second-level label model. The specific process is similar to training the keyword model and is therefore not repeated here.
In one embodiment, the label system includes entity labels; the step of training with the word2vec algorithm according to the co-occurrence relation of the historical programs to obtain a label vector model includes training an entity model. The process of training the entity model comprises the following steps: (1) construct the scoring matrix of users and anchors and the relation between anchors and entities, and then, through the anchor association, construct the scoring matrix of users and entities.
User anchor scoring matrix format:
User1,NJ1,Score1
User1,NJ2,Score2
User2,NJ3,Score3
………
the corresponding relation of the anchor program is as follows:
NJ1,Audio1
NJ1,Audio2
NJ1,Audio3
NJ1,Audio4
NJ2,Audio5
………
and the program corresponding entity:
Audio1:Entity1,Entity2
Audio2:Entity3
Audio3:Entity4,Entity2
Audio5:Entity5
………
the entities corresponding to each anchor are then:
NJ1:Entity1,Entity2,Entity3,Entity4
NJ2:Entity5
………
finally, a scoring matrix of the user and the entity is constructed
User1 Entity1:Score11,Entity2:Score12,Entity3:Score13………
User2 Entity2:Score21,Entity3:Score22,Entity4:Score24………
(2) According to the random roulette-wheel algorithm, weighted by the user's preference for each entity (the greater the preference, the higher the probability of being drawn), samples are constructed in the following format:
User1 Entity1,Entity2,Entity1,Entity3
User2 Entity2,Entity3,Entity2,Entity4……
(3) The leading UserId is removed and the separator is changed to a blank space, converting the data into the following format:
Entity1,Entity2,Entity1,Entity3
Entity2,Entity3,Entity2,Entity4……
(4) We input the above constructed samples into word2vec, obtaining the entity vector model. When the program is input into the entity vector model, the entity word vector of the program can be obtained.
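Step (1) of the entity training can be sketched as follows. The patent does not specify how scores are aggregated when a user reaches the same entity through several anchors, so taking the maximum here is an assumption, as are all names and values.

```python
from collections import defaultdict

def user_entity_scores(user_anchor_scores, anchor_programs, program_entities):
    """Propagate user-anchor scores to the entities of that anchor's
    programs, yielding a user-entity scoring matrix."""
    anchor_entities = defaultdict(list)
    for anchor, program in anchor_programs:
        for entity in program_entities.get(program, []):
            if entity not in anchor_entities[anchor]:
                anchor_entities[anchor].append(entity)   # de-duplicated
    matrix = defaultdict(dict)
    for (user, anchor), score in user_anchor_scores.items():
        for entity in anchor_entities.get(anchor, []):
            # assumption: keep the max score when several anchors share an entity
            matrix[user][entity] = max(score, matrix[user].get(entity, 0.0))
    return matrix

user_anchor_scores = {("User1", "NJ1"): 0.8, ("User2", "NJ2"): 0.6}
anchor_programs = [("NJ1", "Audio1"), ("NJ1", "Audio3"), ("NJ2", "Audio5")]
program_entities = {"Audio1": ["Entity1", "Entity2"],
                    "Audio3": ["Entity4", "Entity2"], "Audio5": ["Entity5"]}
matrix = user_entity_scores(user_anchor_scores, anchor_programs, program_entities)
```

The resulting matrix then feeds steps (2)-(4): roulette-wheel sampling of entities per user, followed by word2vec training.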
According to the distribution method of the programs, the invention also provides a distribution system of the programs.
Fig. 5 is a schematic structural diagram of a program distribution system according to an embodiment of the present invention. As shown in fig. 5, the program distribution system in this embodiment comprises:
the information acquisition module 10 is configured to acquire a program vector of a new program to be distributed and a user preference program;
the vector calculation module 20 is configured to calculate a program vector of the new program by inputting the new program to a pre-established program vector model; the program vector model is obtained by learning and training historical programs based on a neural network algorithm;
a cosine similarity calculation module 30, configured to calculate cosine similarity between the new program and the user-preferred program according to the program vector of the new program and the program vector of the user-preferred program;
and the program distribution module 40 is configured to distribute the new program to be distributed to the user corresponding to the user-preferred program when the cosine similarity is greater than a preset threshold.
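The distribution decision of modules 30 and 40 can be sketched in plain Python; the threshold value 0.8 is illustrative, since the patent only requires "greater than a preset threshold".

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two program vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def should_distribute(new_vec, preferred_vec, threshold=0.8):
    """Distribute the new program to the user when the similarity between
    it and the user-preferred program exceeds the preset threshold."""
    return cosine_similarity(new_vec, preferred_vec) > threshold

# two nearly identical toy program vectors
ok = should_distribute([1.0, 2.0, 3.0], [1.0, 2.0, 3.1])
```

In production the program vectors would come from the program vector model described above rather than being hand-written.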
In one embodiment, the system comprises:
a historical program obtaining module, configured to obtain a historical program from a program database;
the training sample selection module is used for selecting a training sample according to the click rate of the historical program;
and the program vector model obtaining module is used for extracting the anchor vector, the label vector and the quality vector from the training sample, and inputting the anchor vector, the label vector and the quality vector into the neural network model for learning training to obtain a program vector model.
In one embodiment, the program vector model obtaining module comprises:
the vector generation module is used for respectively inputting the anchor vectors, the label vectors and the quality vectors of any two programs in the training sample into the hidden layer of the neural network model to respectively generate the vectors of the two programs;
the Hadamard product calculating module is used for calculating the Hadamard product of the vectors of the two programs;
the difference value calculation module is used for subtracting the vectors of the two programs to obtain a difference value;
the click rate calculation module is used for calculating the click rates of the two programs according to the quality vectors;
the similarity calculation module is used for inputting the Hadamard product, the difference value and the click rate into a sigmoid layer of the neural network model and calculating the similarity of the two programs; sequentially calculating the similarity of every two programs in the training sample;
the loss value calculating module is used for calculating a loss value according to the similarity and the loss function;
and the program vector model obtaining module is used for obtaining a program vector model when the loss value is smaller than a preset value.
In one embodiment, the system further comprises:
the anchor vector model obtaining module is used for training by adopting a word2vec algorithm according to the preference degree of a user to an anchor in a historical program to obtain an anchor vector model;
the label vector model obtaining module is used for training by adopting a word2vec algorithm according to a label system of the historical program to obtain a label vector model;
and the quality vector model obtaining module is used for obtaining the quality vector model by adopting one-hot bucket algorithm training according to the quality system of the historical program.
In one embodiment, the anchor vector model derivation module comprises:
the scoring matrix building module is used for building a user anchor scoring matrix;
the anchor matrix forming module is used for analyzing the user-anchor scoring matrix with a roulette-wheel algorithm and selecting a preset number of anchors for each user according to the analysis result to form an anchor matrix;
and the anchor vector model obtaining module is used for training the anchor matrix by adopting a word2vec algorithm to obtain an anchor vector model.
In one embodiment, the quality system includes the number of plays, the click rate, the play-completion rate, and the release time, and the system comprises:
and the quality vector model obtaining module is used for selecting the playing number, the click rate, the playing completion rate and the release time of the historical program and adopting a one-hot bucket-dividing algorithm to perform bucket division to obtain the quality vector model.
In one embodiment, the label system includes keyword labels; the system comprises:
the keyword tag extraction module is used for extracting all keywords of the historical program and tags corresponding to the keywords;
and the keyword label model obtaining module is used for training the label corresponding to the keyword by adopting a word2vec algorithm to obtain a keyword label model.
For specific limitations of the program distribution system, reference may be made to the above limitations of the program distribution method, which are not repeated here. The modules in the above program distribution system may be implemented in whole or in part by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or be stored in software form in a memory in the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The database of the computer device is used to store program data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a program distribution method.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring vectors of a new program to be distributed and a user preference program;
inputting a new program into a program vector model established in advance, and calculating a vector of the new program; the program vector model is obtained by learning and training historical programs based on a neural network algorithm;
calculating the cosine similarity of the new program and the user preference program according to the vector of the new program and the vector of the user preference program;
and when the cosine similarity is larger than a preset threshold value, distributing the new program to be distributed to the user corresponding to the program preferred by the user.
In one embodiment, the processor, when executing the computer program, further performs the steps of: the pre-established program vector model is obtained through the following steps:
acquiring historical programs from a program database;
selecting a training sample according to the click rate of the historical program;
and extracting a anchor vector, a label vector and a quality vector from the training sample, and inputting the anchor vector, the label vector and the quality vector into the neural network model for learning training to obtain a program vector model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: the step of inputting the anchor vector, the label vector and the quality vector into the neural network model for learning and training to obtain the program vector model comprises the following steps:
respectively inputting the anchor vector, the label vector and the quality vector of any two programs in the training sample into a hidden layer of a neural network model to respectively generate vectors of the two programs;
calculating a Hadamard product of vectors of the two programs;
subtracting the vectors of the two programs to obtain a difference value;
calculating the click rate of the two programs according to the quality vector;
inputting the Hadamard product, the difference value and the click rate into a sigmoid layer of a neural network model, and calculating the similarity of the two programs;
sequentially calculating the similarity of every two programs in the training sample;
calculating a loss value according to the similarity and the loss function;
and when the loss value is smaller than the preset value, obtaining a program vector model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: before the step of inputting the anchor vector, the label vector and the quality vector into the neural network model for learning and training to obtain the program vector model, the method comprises the following steps:
training by adopting a word2vec algorithm according to the preference degree of the user to the anchor in the historical program to obtain an anchor vector model;
training by adopting a word2vec algorithm according to a label system of the historical program to obtain a label vector model;
and training by adopting a one-hot bucket algorithm according to a quality system of the historical program to obtain a quality vector model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: the step of training the anchor vector with the word2vec algorithm to obtain an anchor vector model comprises the following steps:
constructing a user anchor scoring matrix;
analyzing the user anchor scoring matrix by adopting a roulette algorithm, and selecting a preset number of anchors for each user according to the analysis result to form an anchor matrix;
and training the anchor matrix by adopting a word2vec algorithm to obtain an anchor vector model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: the quality system comprises the number of plays, the click rate, the play-completion rate and the release time, and the step of training with the one-hot bucketing algorithm according to the quality system of the historical program to obtain the quality vector model comprises the following steps:
and selecting the playing number, the click rate, the playing completion rate and the release time of the historical program, and performing barrel division by adopting a one-hot barrel division algorithm to obtain a quality vector model.
In one embodiment, the processor, when executing the computer program, further performs the steps of: the label system comprises keyword labels; the step of training with the word2vec algorithm according to the co-occurrence relation of the historical programs to obtain a label vector model comprises the following steps:
extracting all keywords of the historical program and labels corresponding to the keywords;
and training the label corresponding to the keyword by adopting a word2vec algorithm to obtain a keyword label model.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring vectors of a new program to be distributed and a user preference program;
inputting a new program into a program vector model established in advance, and calculating a vector of the new program; the program vector model is obtained by learning and training historical programs based on a neural network algorithm;
calculating the cosine similarity of the new program and the user preference program according to the vector of the new program and the vector of the user preference program;
and when the cosine similarity is larger than a preset threshold value, distributing the new program to be distributed to the user corresponding to the program preferred by the user.
In one embodiment, the computer program when executed by the processor further performs the steps of: the method of the program vector model established in advance comprises the following steps:
acquiring historical programs from a program database;
selecting a training sample according to the click rate of the historical program;
and extracting a anchor vector, a label vector and a quality vector from the training sample, and inputting the anchor vector, the label vector and the quality vector into the neural network model for learning training to obtain a program vector model.
In one embodiment, the computer program when executed by the processor further performs the steps of: inputting the anchor vector, the label vector and the quality vector into a neural network model for learning and training to obtain a program vector model, wherein the step comprises the following steps of:
respectively inputting the anchor vector, the label vector and the quality vector of any two programs in the training sample into a hidden layer of a neural network model to respectively generate vectors of the two programs;
calculating a Hadamard product of vectors of the two programs;
subtracting the vectors of the two programs to obtain a difference value;
calculating the click rate of the two programs according to the quality vector;
inputting the Hadamard product, the difference value and the click rate into a sigmoid layer of a neural network model, and calculating the similarity of the two programs;
sequentially calculating the similarity of every two programs in the training sample;
calculating a loss value according to the similarity and the loss function;
and when the loss value is smaller than the preset value, obtaining a program vector model.
In one embodiment, the computer program when executed by the processor further performs the steps of: before the step of inputting the anchor vector, the label vector and the quality vector into the neural network model for learning and training to obtain the program vector model, the method comprises the following steps:
training by adopting a word2vec algorithm according to the preference degree of the user to the anchor in the historical program to obtain an anchor vector model;
training by adopting a word2vec algorithm according to a label system of the historical program to obtain a label vector model;
and training by adopting a one-hot bucket algorithm according to a quality system of the historical program to obtain a quality vector model.
In one embodiment, the computer program when executed by the processor further performs the steps of: the step of training the anchor vector by adopting a word2vec algorithm to obtain an anchor vector model comprises the following steps of:
constructing a user anchor scoring matrix;
analyzing the user anchor scoring matrix by adopting a roulette algorithm, and selecting a preset number of anchors for each user according to the analysis result to form an anchor matrix;
and training the anchor matrix by adopting a word2vec algorithm to obtain an anchor vector model.
In one embodiment, the computer program when executed by the processor further performs the steps of: the quality system comprises a playing number, a click rate, a playing completion rate and release time, and the step of training by adopting a one-hot bucket algorithm according to the quality system in the historical program to obtain the quality vector model comprises the following steps:
and selecting the playing number, the click rate, the playing completion rate and the release time of the historical program, and performing barrel division by adopting a one-hot barrel division algorithm to obtain a quality vector model.
In one embodiment, the computer program when executed by the processor further performs the steps of: the label system comprises a keyword label; in the step of training by adopting a word2vec algorithm according to the co-occurrence relation of the historical programs to obtain a label vector model, the method comprises the following steps:
extracting all keywords of the historical program and labels corresponding to the keywords;
and training the label corresponding to the keyword by adopting a word2vec algorithm to obtain a keyword label model.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (8)

1. A method for distributing a program, the method comprising:
acquiring program vectors of a new program to be distributed and a user preference program;
inputting the new program into a pre-established program vector model, and calculating a program vector of the new program; the program vector model is obtained by learning and training a historical program based on a neural network algorithm;
calculating the cosine similarity of the new program and the user preference program according to the program vector of the new program and the program vector of the user preference program;
when the cosine similarity is larger than a preset threshold value, distributing the new program to be distributed to a user corresponding to the user preference program;
the pre-established program vector model is as follows: acquiring the historical program from a program database; selecting a training sample according to the click rate of the historical program; respectively inputting the anchor vector, the label vector and the quality vector of any two programs in the training sample into a hidden layer of the neural network model to respectively generate vectors of the two programs; calculating a Hadamard product of vectors of the two programs; subtracting the vectors of the two programs to obtain a difference value; calculating the click rate of the two programs according to the quality vector; inputting the Hadamard product, the difference value and the click rate into a sigmoid layer of the neural network model, and calculating the similarity of two programs; sequentially calculating the similarity of every two programs in the training sample; calculating a loss value according to the similarity and the loss function; and when the loss value is smaller than a preset value, obtaining the program vector model.
2. The method of claim 1, wherein before the step of inputting the anchor vector, the label vector and the quality vector into the neural network model for learning and training to obtain the program vector model, the method further comprises:
training with the word2vec algorithm according to the users' preference for the anchors in the historical programs, to obtain an anchor vector model;
training with the word2vec algorithm according to the label system of the historical programs, to obtain a label vector model;
and training with a one-hot bucketing algorithm according to the quality system of the historical programs, to obtain a quality vector model.
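The word2vec training used for the anchor and label vector models can be illustrated with a toy skip-gram model with negative sampling, written here in pure Python for clarity. A production system would use an optimized library; the hyperparameters and the example sequences are illustrative assumptions:

```python
import math
import random

def train_skipgram(sequences, dim=8, window=2, lr=0.05, epochs=20, seed=0):
    """Toy skip-gram word2vec: each sequence is a list of item IDs,
    e.g. the labels of one historical program, or the anchors one user
    listens to.  Returns a dict mapping each item to its vector."""
    rng = random.Random(seed)
    vocab = sorted({tok for seq in sequences for tok in seq})
    vec = {t: [rng.uniform(-0.5, 0.5) for _ in range(dim)] for t in vocab}
    ctx = {t: [0.0] * dim for t in vocab}

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

    for _ in range(epochs):
        for seq in sequences:
            for i, center in enumerate(seq):
                for j in range(max(0, i - window), min(len(seq), i + window + 1)):
                    if i == j:
                        continue
                    # one positive context pair plus one random negative sample
                    for target, label in ((seq[j], 1.0), (rng.choice(vocab), 0.0)):
                        v, c = vec[center], ctx[target]
                        g = lr * (label - sigmoid(sum(a * b for a, b in zip(v, c))))
                        for k in range(dim):
                            v[k], c[k] = v[k] + g * c[k], c[k] + g * v[k]
    return vec

# Each "sentence" could be the label sequence of one historical program
seqs = [["music", "pop", "chat"], ["music", "pop"], ["news", "talk"]]
vectors = train_skipgram(seqs, dim=8)
```

Items that co-occur often (here `music` and `pop`) are pushed toward similar vectors, which is the property the patent relies on when comparing programs.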
3. The method for distributing programs according to claim 2, wherein the step of training with the word2vec algorithm to obtain the anchor vector model comprises:
constructing a user-anchor scoring matrix;
analyzing the user-anchor scoring matrix with a roulette-wheel selection algorithm, and selecting a preset number of anchors for each user according to the analysis result to form an anchor matrix;
and training the anchor matrix with the word2vec algorithm to obtain the anchor vector model.
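The roulette-wheel selection in claim 3 can be sketched as follows. The scoring values and the number of anchors selected are illustrative assumptions; the claim does not fix how the user-anchor scores are computed:

```python
import random

def roulette_select(scores, k, seed=0):
    """Roulette-wheel selection: sample k anchors for one user, each
    spin choosing an anchor with probability proportional to the
    user's score for it (sampling with replacement)."""
    rng = random.Random(seed)
    total = sum(scores.values())
    chosen = []
    for _ in range(k):
        spin, acc = rng.uniform(0, total), 0.0
        for anchor, score in scores.items():
            acc += score
            if spin <= acc:
                chosen.append(anchor)
                break
    return chosen

# One row of a hypothetical user-anchor scoring matrix
user_scores = {"anchor_1": 8.0, "anchor_2": 1.0, "anchor_3": 1.0}
picks = roulette_select(user_scores, k=5)
```

Running this for every user yields the anchor matrix (one sampled anchor sequence per user) that is then fed to word2vec; strongly scored anchors dominate each user's sequence, so the learned anchor vectors reflect listening preference.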
4. The method for distributing programs according to claim 3, wherein the quality system comprises a play count, a click-through rate, a play completion rate and a release time, and the step of training with the one-hot bucketing algorithm according to the quality system of the historical programs to obtain the quality vector model comprises:
selecting the play count, click-through rate, play completion rate and release time of the historical programs, and bucketing them with the one-hot bucketing algorithm to obtain the quality vector model.
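The one-hot bucketing of the four quality features can be sketched as below. The bucket boundaries are illustrative assumptions, not values taken from the patent:

```python
import bisect

def one_hot_bucket(value, boundaries):
    """Assign a numeric feature to a bucket and one-hot encode it:
    len(boundaries) cut points define len(boundaries) + 1 buckets."""
    idx = bisect.bisect_right(boundaries, value)
    vec = [0] * (len(boundaries) + 1)
    vec[idx] = 1
    return vec

def quality_vector(plays, ctr, completion_rate, age_days):
    # Illustrative bucket boundaries; a real system would tune these
    # against the distribution of each feature in the program database.
    return (one_hot_bucket(plays, [100, 1000, 10000])
            + one_hot_bucket(ctr, [0.01, 0.05, 0.1])
            + one_hot_bucket(completion_rate, [0.25, 0.5, 0.75])
            + one_hot_bucket(age_days, [7, 30, 90]))

qv = quality_vector(plays=2500, ctr=0.07, completion_rate=0.6, age_days=14)
```

Concatenating the four one-hot segments gives a fixed-length quality vector (here 16 dimensions with exactly one active bit per feature) suitable as input to the hidden layer of the neural network model.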
5. The method of claim 2, wherein the label system comprises keyword labels, and the step of training with the word2vec algorithm according to the co-occurrence relations of the historical programs to obtain the label vector model comprises:
extracting all keywords of the historical programs and the labels corresponding to the keywords;
and training the labels corresponding to the keywords with the word2vec algorithm to obtain a keyword label model.
6. A system for distributing a program, comprising:
an information acquisition module, used for acquiring a new program to be distributed and a program vector of a user-preferred program;
a vector calculation module, used for inputting the new program into a pre-established program vector model and calculating a program vector of the new program, the program vector model being obtained by learning and training on historical programs based on a neural network algorithm, specifically: acquiring the historical programs from a program database; selecting training samples according to the click-through rates of the historical programs; inputting the anchor vector, the label vector and the quality vector of any two programs in the training samples into a hidden layer of the neural network model to generate a vector for each of the two programs; calculating the Hadamard product of the vectors of the two programs; subtracting the vectors of the two programs to obtain a difference; calculating the click-through rates of the two programs according to the quality vectors; inputting the Hadamard product, the difference and the click-through rates into a sigmoid layer of the neural network model to calculate the similarity of the two programs; calculating the similarity of every pair of programs in the training samples in turn; calculating a loss value according to the similarities and a loss function; and obtaining the program vector model when the loss value is smaller than a preset value;
a cosine similarity calculation module, used for calculating a cosine similarity between the new program and the user-preferred program according to the program vector of the new program and the program vector of the user-preferred program;
and a program distribution module, used for distributing the new program to the user corresponding to the user-preferred program when the cosine similarity is greater than a preset threshold.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 5 are implemented when the computer program is executed by the processor.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN201910448059.XA 2019-05-27 2019-05-27 Program distribution method, system, computer device and storage medium Active CN110213660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910448059.XA CN110213660B (en) 2019-05-27 2019-05-27 Program distribution method, system, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910448059.XA CN110213660B (en) 2019-05-27 2019-05-27 Program distribution method, system, computer device and storage medium

Publications (2)

Publication Number Publication Date
CN110213660A CN110213660A (en) 2019-09-06
CN110213660B true CN110213660B (en) 2021-08-20

Family

ID=67788888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910448059.XA Active CN110213660B (en) 2019-05-27 2019-05-27 Program distribution method, system, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN110213660B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110851647B (en) * 2019-09-29 2022-10-18 广州荔支网络技术有限公司 Intelligent distribution method, device and equipment for audio content flow and readable storage medium
CN110909202A (en) * 2019-10-28 2020-03-24 广州荔支网络技术有限公司 Audio value evaluation method and device and readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103686237A (en) * 2013-11-19 2014-03-26 乐视致新电子科技(天津)有限公司 Method and system for recommending video resource
CN105898420A (en) * 2015-01-09 2016-08-24 阿里巴巴集团控股有限公司 Video recommendation method and device, and electronic equipment
CN108009528A (en) * 2017-12-26 2018-05-08 广州广电运通金融电子股份有限公司 Face authentication method, device, computer equipment and storage medium based on Triplet Loss
CN109558512A (en) * 2019-01-24 2019-04-02 广州荔支网络技术有限公司 A kind of personalized recommendation method based on audio, device and mobile terminal

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9584868B2 (en) * 2004-07-30 2017-02-28 Broadband Itv, Inc. Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection
US20130145276A1 (en) * 2011-12-01 2013-06-06 Nokia Corporation Methods and apparatus for enabling context-aware and personalized web content browsing experience

Similar Documents

Publication Publication Date Title
CN110929206B (en) Click rate estimation method and device, computer readable storage medium and equipment
CN108829822B (en) Media content recommendation method and device, storage medium and electronic device
CN108804641B (en) Text similarity calculation method, device, equipment and storage medium
CN110472090B (en) Image retrieval method based on semantic tags, related device and storage medium
CN111898031B (en) Method and device for obtaining user portrait
CN110852793A (en) Document recommendation method and device and electronic equipment
CN113535963B (en) Long text event extraction method and device, computer equipment and storage medium
CN110213660B (en) Program distribution method, system, computer device and storage medium
CN110543603A (en) Collaborative filtering recommendation method, device, equipment and medium based on user behaviors
CN113343091A (en) Industrial and enterprise oriented science and technology service recommendation calculation method, medium and program
Tondulkar et al. Get me the best: predicting best answerers in community question answering sites
CN110909258A (en) Information recommendation method, device, equipment and storage medium
CN111552810B (en) Entity extraction and classification method, entity extraction and classification device, computer equipment and storage medium
CN113705792A (en) Personalized recommendation method, device, equipment and medium based on deep learning model
CN113656699A (en) User feature vector determination method, related device and medium
CN107644042B (en) Software program click rate pre-estimation sorting method and server
CN112801425A (en) Method and device for determining information click rate, computer equipment and storage medium
CN111639485A (en) Course recommendation method based on text similarity and related equipment
CN112949305B (en) Negative feedback information acquisition method, device, equipment and storage medium
CN115222112A (en) Behavior prediction method, behavior prediction model generation method and electronic equipment
CN115269998A (en) Information recommendation method and device, electronic equipment and storage medium
CN115063858A (en) Video facial expression recognition model training method, device, equipment and storage medium
CN115114462A (en) Model training method and device, multimedia recommendation method and device and storage medium
CN114399352A (en) Information recommendation method and device, electronic equipment and storage medium
CN112541069A (en) Text matching method, system, terminal and storage medium combined with keywords

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant