CN110266745B - Information flow recommendation method, device, equipment and storage medium based on deep network - Google Patents

Information flow recommendation method, device, equipment and storage medium based on deep network

Info

Publication number
CN110266745B
CN110266745B (application CN201910175634.3A)
Authority
CN
China
Prior art keywords
document
sub
network
target
evaluation indexes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910175634.3A
Other languages
Chinese (zh)
Other versions
CN110266745A (en)
Inventor
徐远东
戴蔚群
陈凯
夏锋
明子鉴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yayue Technology Co ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910175634.3A priority Critical patent/CN110266745B/en
Publication of CN110266745A publication Critical patent/CN110266745A/en
Application granted granted Critical
Publication of CN110266745B publication Critical patent/CN110266745B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/55 Push-based network services

Abstract

The application discloses an information flow recommendation method, device, equipment and storage medium based on a deep network, and belongs to the field of data processing. The method includes: receiving an information recommendation request of a target account; calling a target deep network model to calculate sub-quality scores of the candidate documents on n evaluation indexes, where n is an integer greater than 1; calculating a weighted quality score of each document according to the sub-quality scores on the n evaluation indexes and the weights corresponding to the n evaluation indexes; and generating and recommending the information flow according to the weighted quality scores. Because each document is evaluated on multiple evaluation indexes, the recommended information flow performs well on all of them, which solves the problem that an information flow generated by a neural network model with a single evaluation index achieves a good effect only on that index and may perform poorly on the others.

Description

Information flow recommendation method, device, equipment and storage medium based on deep network
Technical Field
The present application relates to the field of data processing, and in particular, to a method, an apparatus, a device, and a storage medium for recommending information streams based on a deep network.
Background
An information stream is streaming data in which a plurality of pieces of information are sequentially arranged. The information flow recommendation system comprises a server and a terminal, wherein the server pushes one or more information flows to the terminal.
In the related art, a deep network model is deployed in the information flow recommendation system. When the server needs to recommend an information stream to the terminal, it uses the deep network model to predict the user's click-through rate for each document in a candidate document set, selects n documents in descending order of click-through rate, and pushes the information stream formed by the n documents to the terminal, which displays it.
Such a recommendation system mainly optimizes a single evaluation index (click-through rate). It can increase the number of clicks on information streams, but it tends to recommend articles with click-bait titles and low content quality, causing other evaluation indexes such as sharing, appreciation, and comment interaction to decline.
Disclosure of Invention
The embodiments of the application provide an information flow recommendation method, device, equipment and storage medium based on a deep network, which can solve the problem that other evaluation indexes tend to decline in a recommendation system that mainly optimizes a single evaluation index. The technical scheme is as follows:
according to an aspect of the present disclosure, there is provided a deep network-based information flow recommendation method, the method including:
receiving an information recommendation request of a target account;
generating a candidate document set of the target account, wherein the candidate document set comprises at least two documents;
calling a target deep network model to calculate sub-quality scores of the document on n evaluation indexes, wherein n is an integer larger than 1;
calculating to obtain a weighted quality score of the document according to the sub-quality scores of the n evaluation indexes and the weights corresponding to the n evaluation indexes;
selecting a target document from the candidate document set according to the weighted quality score, and generating the information flow according to the target document;
and recommending the information flow to a terminal corresponding to the target account.
According to another aspect of the present disclosure, there is provided an information flow recommendation apparatus based on a deep network, the apparatus including:
the receiving module is used for receiving an information recommendation request of a target account;
the generation module is used for generating a candidate document set of the target account, wherein the candidate document set comprises at least two documents;
the calling module is used for calling a target deep network model to calculate the sub-quality scores of the documents on n evaluation indexes, wherein n is an integer greater than 1;
the calculation module is used for calculating the weighted quality score of the document according to the sub-quality scores of the n evaluation indexes and the weights corresponding to the n evaluation indexes;
the generating module is used for selecting a target document from the candidate document set according to the weighted quality score and generating the information flow according to the target document;
and the recommending module is used for recommending the information flow to the terminal corresponding to the target account.
According to another aspect of the present disclosure, there is provided a server including:
a processor and a memory storing at least one instruction, at least one program, a set of codes, or a set of instructions that is loaded and executed by the processor to implement the deep network based information flow recommendation method as described above.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium storing at least one instruction, at least one program, a set of codes, or a set of instructions that is loaded and executed by a processor to implement the deep network-based information flow recommendation method as described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the method comprises the steps of calculating sub-quality scores of a document on n evaluation indexes through the same target depth network model, obtaining a weighted quality score of the document according to the n sub-quality scores and corresponding weights, sequencing the document according to the weighted quality score, and generating an information stream, so that the document is evaluated on a plurality of evaluation indexes, the recommended information stream has better performance on the plurality of evaluation indexes, and the problem that the information stream generated by a neural network model with a single evaluation index can only obtain a better effect on a single evaluation index, and the effect on other evaluation indexes is possibly poor is solved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a deep network-based information flow recommendation method according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an online recommendation process of a deep network-based information flow recommendation method according to an exemplary embodiment of the present application;
FIG. 4 is a schematic structural diagram of a target deep network model provided by an exemplary embodiment of the present application;
FIG. 5 is a flowchart of a method for training a target deep network model provided by an exemplary embodiment of the present application;
fig. 6 is a schematic application diagram of an information flow recommendation method based on a deep network according to an exemplary embodiment of the present application;
FIG. 7 is an interface diagram of a deep web-based information flow recommendation method according to an exemplary embodiment of the present application;
fig. 8 is a block diagram illustrating an architecture of an information flow recommendation apparatus based on a deep network according to an exemplary embodiment of the present application;
fig. 9 is a block diagram illustrating an architecture of an information flow recommendation apparatus based on a deep network according to an exemplary embodiment of the present application;
fig. 10 is a block diagram of a server according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Currently, most recommendation systems applied in the industry mainly optimize a single evaluation index: a news recommendation system targets the click-through rate; a short-video recommendation system targets the play rate; an e-commerce recommendation system targets the conversion rate; and so on. The effect index that such a recommendation system considers first is called its main index. For a recommendation system with a very simple human-computer interaction ecology, optimizing only the main index is feasible. However, in a recommendation system with a complex human-computer interaction ecology, the interaction between a user and a document (item) is often quite rich. Taking an article as the item, a user can click on a favorite article, appreciate it, leave a message, comment on it, share it with friends, share it to a social display platform (such as a circle of friends), or collect it; the user may also give negative feedback on a disliked article. If the recommendation system adopts the traditional method and ranks the recommendation results only by the user's click-through rate on articles, the number of clicks increases, but click-bait articles of low quality are easily recommended, so indexes such as sharing, appreciation, and comment interaction decline. This in turn hurts the system's acquisition and retention of new users and, in the long term, is not conducive to building a healthy social ecology for the product.
In the face of this situation, some conventional practices in the industry adjust the recommendation list by introducing manual rules on the basis of the recommendation results obtained by single index sorting so as to reduce the disadvantage of single evaluation index optimization, but the manual rules are difficult to adapt to a large data environment and often fail to achieve the expected effect. In view of this, an embodiment of the present application provides an information flow recommendation scheme based on a deep network.
FIG. 1 shows a block diagram of a computer system 100 provided in an exemplary embodiment of the present application. The computer system 100 may be an instant messaging system, a news-pushing system, a shopping system, an online video system, a social application for crowd aggregation based on topics or channels or circles, or other application systems with social attributes, which is not limited in the embodiments of the present application. The computer system 100 includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 may be at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The first terminal 120 is installed and running with an application that supports social attributes and information recommendation. The application may be any of an instant messaging system, a news-pushing system, a shopping system, an online video system, a social-class application that aggregates people based on topics or channels or circles, or another application system with social attributes. The first terminal 120 is a terminal used by a first user, and a first account is logged in to the application running in the first terminal 120.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 140 is used for providing background services for the application programs supporting information recommendation. Alternatively, the server 140 undertakes primary computational work and the first and second terminals 120, 160 undertake secondary computational work; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertakes the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
Optionally, the server 140 comprises: an access server 142 and an information recommendation server 144. The access server 142 is configured to provide an access service and an information recommendation service for the first terminal 120 and the second terminal 160, and to transmit recommendation information (at least one of an article, a picture, audio, and video) from the information recommendation server 144 to the terminal (the first terminal 120 or the second terminal 160). The information recommendation server 144 may be one or more. When the information recommendation servers 144 are multiple, at least two information recommendation servers 144 exist for providing different services, and/or at least two information recommendation servers 144 exist for providing the same service, for example, providing the same service in a load balancing manner, which is not limited in the embodiment of the present application.
The second terminal 160 is installed and running with an application that supports social attributes and information recommendation. The application may be any of an instant messaging system, a news-pushing system, a shopping system, an online video system, a social-class application that aggregates people based on topics or channels or circles, or another application system with social attributes. The second terminal 160 is a terminal used by the second user, and a second account is logged in to the application running in the second terminal 160.
Optionally, the first account and the second account are in a virtual social network that includes a social relationship chain between the first account and the second account. The virtual social network may be provided by the same social platform, or may be provided by cooperation of multiple social platforms having an association relationship (such as an authorized login relationship). Optionally, the first account and the second account may belong to the same team, the same organization, have a friend relationship, or have a temporary communication right. Optionally, the first account number and the second account number may also be in a stranger relationship. In summary, the virtual social network provides a one-way messaging approach or a two-way messaging approach between the first account and the second account.
Optionally, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms, or the applications installed on the two terminals are different but support information intercommunication. The different operating systems include: the Apple operating system, the Android operating system, the Linux operating system, the Windows operating system, and the like.
The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to one of a plurality of terminals; this embodiment is illustrated only with the first terminal 120 and the second terminal 160. The terminal types of the first terminal 120 and the second terminal 160 are the same or different and include: at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The following embodiments take the case in which the first terminal 120 and/or the second terminal 160 is a smartphone and a friend relationship chain exists between the first account and the second account.
Those skilled in the art will appreciate that the number of terminals may be greater or fewer. For example, there may be only one terminal, or several tens or hundreds of terminals, or more. In that case, the computer system further includes other terminals 180, and a second account having a friend relationship with the first account is logged in to one or more of the other terminals 180. The number of terminals and the device types are not limited in the embodiments of the present application.
In an illustrative example, taking the application supporting social attributes and information recommendation running in the first terminal 120 as an example, when the first user starts the application and opens the information flow presentation interface, the application sends an information recommendation request to the server 140. The server 140 may screen articles of interest to a second account that has a friend relationship with the first account (for example, articles that the user portrait predicts the second account will read, like, comment on, or share) as a candidate document set, and then use the target deep network model provided in the following embodiments to comprehensively evaluate each document in the candidate document set on n evaluation indexes to obtain its weighted quality score. The server extracts k documents from the candidate document set in descending order of weighted quality score to generate an information stream and pushes it to the application in the first terminal 120, which displays the information stream. In other examples, the server 140 may generate the candidate document set in other manners, which are not limited.
The target deep network model can comprehensively consider the quality of the document from n evaluation indexes, so that the problem that an information flow generated by a neural network model with a single evaluation index can only obtain a good effect on the single evaluation index and possibly has a poor effect on other evaluation indexes can be solved.
Fig. 2 is a flowchart of a deep network-based information flow recommendation method according to an exemplary embodiment of the present application. The method may be performed by the computer system shown in fig. 1. The method comprises the following steps:
step 201, a terminal sends an information recommendation request of a target account to a server;
an application program is installed in the terminal, and a target account is logged in the application program. The application has the function of acquiring and displaying information streams.
And the application program sends an information recommendation request to the server through the terminal, wherein the information recommendation request is used for requesting to recommend information flow to the target account. For example, when an information flow presentation interface in an application is triggered, the application sends an information recommendation request to the server.
Step 202, a server receives an information recommendation request of a target account;
optionally, the server extracts the target account from the information recommendation request.
Step 203, the server generates a candidate document set of the target account, wherein the candidate document set comprises at least two documents;
A document is information that can be displayed in an application, and a document includes at least one of the following elements: picture, audio, video, text, and AR elements.
When recommending information streams to a target account, the server generates a candidate document set for the target account, which includes a plurality of documents that may be of interest to the target account.
Optionally, the server generates the candidate document set according to a user portrait of the target account, where the user portrait includes the gender, age, region, education background, occupation, and similar information of the target account; or according to the historical browsing records of the target account; or according to the historical consumption records of the target account; and so on.
Optionally, when the target account has one or more friend accounts, the server may also generate the candidate document set in at least one of the following manners: adding documents clicked by the friend accounts to the candidate document set; adding documents on which the friend accounts dwelled longer to the candidate document set; taking documents liked by the friend accounts as candidate documents; and adding documents shared by the friend accounts to the candidate document set.
Optionally, the server filters m documents that may be of interest to the target account from a candidate document pool to form the candidate document set, where m is an integer greater than 1.
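For illustration only, the following minimal sketch shows how step 203 might assemble a candidate document set from the signals described above. It is not the patented implementation; the `pool` object and its methods (`match_user_portrait`, `from_history`, `clicked_by`) are hypothetical placeholders.

```python
# A minimal sketch of step 203 (assumed helper methods; not the patented code).

def build_candidate_set(account, pool, m):
    """Assemble up to m candidate documents for a target account."""
    candidates = set()
    candidates |= pool.match_user_portrait(account)   # gender/age/region/...
    candidates |= pool.from_history(account)          # historical browsing
    for friend in getattr(account, "friends", []):    # friend-based signals
        candidates |= pool.clicked_by(friend)         # clicks/likes/shares
    return list(candidates)[:m]                       # m > 1 documents
```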
Step 204, the server calls a target deep network model to calculate the sub-quality scores of the documents on the n evaluation indexes;
the target deep network model is a model for calculating the quality scores of the documents on n evaluation indexes, wherein n is an integer greater than 1. The n evaluation indices include, but are not limited to: at least two of click rate, play rate, conversion rate, appreciation rate, message rate, average rate and share rate.
The target deep network model is a deep network model. Referring to fig. 3 in combination, the server calls the same target deep network model, and the sub-quality scores of the documents on the n evaluation indexes can be calculated. For example, a sub-quality score 1 on the evaluation index 1, a sub-quality score 2 on the evaluation index 2, a sub-quality score n on the evaluation index n of each document in the candidate document set are calculated.
Step 205, the server calculates the weighted quality score of the document according to the sub-quality scores of the n evaluation indexes and the weights corresponding to the n evaluation indexes;
Each of the n evaluation indexes also has its own weight. In an illustrative example, the weights of the n evaluation indexes sum to 1.
For document i, the server multiplies the document's sub-quality score on each of the n evaluation indexes by the corresponding weight, and then adds the n products to obtain the weighted quality score of the document.
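As a concrete illustration of step 205, the following minimal sketch computes a weighted quality score from per-index sub-quality scores. The index names and weight values are assumptions for the example only.

```python
# Minimal sketch of step 205: weighted sum of sub-quality scores.

def weighted_quality_score(sub_scores: dict, weights: dict) -> float:
    """Multiply each sub-quality score by its weight and sum the products."""
    return sum(sub_scores[idx] * weights[idx] for idx in sub_scores)

# Illustrative example in which the n weights sum to 1.
weights = {"click": 0.5, "share": 0.3, "comment": 0.2}
doc_i = {"click": 0.82, "share": 0.10, "comment": 0.05}
print(weighted_quality_score(doc_i, weights))  # 0.45
```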
Step 206, the server selects a target document from the candidate document set according to the weighted quality score;
Optionally, the server ranks each document in the candidate document set by weighted quality score and then determines the top k ranked documents as the target documents, where k is an integer not greater than m.
The top k documents may be all of the documents in the candidate document set or only a part of them. When they are a part, k may be a fixed number, such as 100, or a proportion, for example the top k documents may be the top 80% of the documents in the candidate document set.
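A sketch of step 206 under the same assumptions: documents are ranked by weighted quality score and the top k are kept, with k either fixed or proportional.

```python
# Minimal sketch of step 206: rank by weighted quality score, keep top k.

def select_top_k(scored_docs, k=None, ratio=None):
    """scored_docs: list of (doc_id, weighted_quality_score) pairs."""
    ranked = sorted(scored_docs, key=lambda pair: pair[1], reverse=True)
    if k is None:                       # proportional extraction, e.g. top 80%
        k = max(1, int(len(ranked) * ratio))
    return ranked[:k]

candidates = [("doc_a", 0.45), ("doc_b", 0.61), ("doc_c", 0.33)]
print(select_top_k(candidates, ratio=0.8))   # keeps the top 2 of 3 documents
```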
Step 207, the server outputs information flow according to the target document;
optionally, the information stream comprises k target documents in a sequential order. Referring to fig. 3, document index 1 corresponds to the document with the highest weighted quality score, document index 2 corresponds to the document with the highest weighted quality score, and so on, the server may generate the information stream in sequence for the top k ranked target documents.
Optionally, the server sends the entire information stream to the terminal, or the server divides the entire information stream into a plurality of parts, sends the information stream to the terminal in batches, and sends one part of the information stream in each batch.
In step 208, the terminal displays the information stream.
Optionally, the application displays the information stream after receiving the information stream.
In summary, in the method provided by this embodiment, the sub-quality scores of a document on n evaluation indexes are calculated by the same target deep network model, the weighted quality score of the document is obtained from the n sub-quality scores and their corresponding weights, and the documents are ranked by weighted quality score to generate the information stream. Each document is thereby evaluated on multiple evaluation indexes, so the recommended information stream performs well on all of them, avoiding the problem that an information stream generated by a neural network model with a single evaluation index achieves a good effect only on that index and may perform poorly on the others.
Optionally, the weight corresponding to each evaluation index is adjustable, and a better effect can be achieved on one or more target evaluation indexes in a targeted manner by adjusting different weights.
The embodiment of the application provides a target deep network model that can score documents on n evaluation indexes. Fig. 4 shows a structural diagram of the model. The target deep network model includes: a shared network portion 220 and n independent network portions 240;
a shared network part 220 for extracting a common feature vector of input data, the input data including document features of the document. The shared network portion 220 includes a shared features layer 221, a shared mapping layer 222, and a shared fully-connected layer 223, which are connected in series. The shared feature layer 221 is configured to extract sub-feature vectors of at least two dimensions corresponding to the input data; the shared mapping layer 222 is configured to perform dimension reduction mapping on the sub-feature vectors of at least two dimensions to obtain sub-feature vectors after dimension reduction mapping; and the shared full-connection layer 223 is used for performing full-connection operation on the sub-feature vectors subjected to the dimensionality reduction mapping and outputting the sub-feature vectors as common feature vectors.
The n independent network portions 240 are parallel to one another. The ith independent network portion among the n independent network portions is used to calculate the sub-quality score of the document on the ith evaluation index according to the common feature vector, where i is an integer not greater than n. Optionally, at least two of the parallel independent network portions have different network structures. Optionally, the network structure includes: at least one of the number of neural network layers, the type of the neural network layers, the weight coefficients in the neural network layers, and the arrangement order of the neural network layers.
Alternatively, the number of nodes of the first neural network layer of the n independent network portions 240 is the same, since the n independent network portions 240 each take as input the common feature vector output by the shared network portion 220.
Fig. 4 uses two evaluation indexes as an illustrative example. The server obtains user features and context features from the target account and document features from the document, where the context features include but are not limited to at least one of time, place, network environment, terminal type, and operating system type. The server inputs the user features, context features, and document features into the target deep network model as input data, and the model calculates the sub-quality scores of the document on the n evaluation indexes. The shared network portion represents what the evaluation indexes have in common, and the ith independent network portion represents what is particular to the ith evaluation index. Each independent network portion 240 takes the common feature vector output by the shared network portion 220, extracts the individual feature vector related to its own evaluation index using its own network structure, and finally outputs the sub-quality score corresponding to that evaluation index.
In fig. 4, the shared mapping (embedding) layer 222 mainly serves to reduce the dimensionality of sparse features; the shared mapping layer 222 is essentially a fully connected layer:
$$z = \sum_{i} W_{e_i} X_{f_i}$$
In the above formula, i is the feature group number, $X_{f_i}$ denotes the sub-feature vector corresponding to the ith feature group, and $W_{e_i}$ denotes the weight of the ith feature group in the mapping process. The output z is the mapped sub-feature vector (the embedding output in the figure). After passing through the fully connected layer, the mapped sub-feature vectors are output as the common feature vector. The common feature vector processed by the shared mapping layer 222 and the shared fully-connected layer 223 is shared among the different independent network portions 240.
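The following numpy sketch illustrates the shared mapping computation above. The group dimensions and embedding size are assumptions; the point is that per-group projections summed together are equivalent to one fully connected layer over the concatenated sparse groups.

```python
# Minimal sketch of the shared mapping (embedding) layer: z = sum_i W_ei x_fi.
import numpy as np

rng = np.random.default_rng(0)
group_dims = [1000, 500, 200]     # sparse width of each feature group (assumed)
embed_dim = 64                    # reduced dimension (assumed)
W_e = [rng.normal(0.0, 0.01, (embed_dim, d)) for d in group_dims]

def shared_mapping(x_groups):
    """x_groups: one (dense-encoded) sparse vector per feature group."""
    return sum(W @ x for W, x in zip(W_e, x_groups))

x = [np.zeros(d) for d in group_dims]
x[0][42] = 1.0                    # e.g. a one-hot user feature
z = shared_mapping(x)             # the mapped sub-feature vector
print(z.shape)                    # (64,)
```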
Each independent network portion 240 has its own independent neural network structure, which allows each evaluation index to fit its data better according to its own label distribution. The number of neural network layers and the number of nodes in each layer are designed in advance; except that the first layer must have the same number of nodes, the middle layers of different independent network portions 240 can be designed differently. Adjacent neural network layers within an independent network portion are fully connected. The calculation between two adjacent layers is as follows:
$$X_{ik} = g\left(W_{ik} X_{i(k-1)} + b_{ik}\right)$$
where i denotes the ith independent network portion and k denotes the kth layer of that portion. $X_{ik}$ represents the node values of the kth neural network layer of the ith independent network portion, and g() is an activation function, a nonlinear transformation applied after the linear operation. $W_{ik}$ and $b_{ik}$ are the weight and bias parameters from layer k-1 to layer k; they are the parameters to be learned during training.
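A sketch of one independent network portion follows. The hidden-layer sizes are assumptions; as stated above, only the first layer's input size must match the common feature vector, while the middle layers may differ between portions.

```python
# Minimal sketch of one independent network portion:
# X_ik = g(W_ik X_i(k-1) + b_ik), ending in a sub-quality score.
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def tower_forward(z, layers):
    """layers: list of (W, b, activation) for layers k = 1..K."""
    x = z
    for W, b, g in layers:
        x = g(W @ x + b)            # fully connected between adjacent layers
    return x                        # sub-quality score on this index

rng = np.random.default_rng(1)
dims = [64, 32, 16, 1]              # assumed layout; may differ per portion
layers = [(rng.normal(0.0, 0.1, (dims[k + 1], dims[k])),
           np.zeros(dims[k + 1]),
           relu if k < len(dims) - 2 else sigmoid)
          for k in range(len(dims) - 1)]
score = tower_forward(np.ones(64), layers)   # a score in (0, 1)
```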
Between two evaluation indexes with a progressive relationship, such as a click target and a share target, the design of the recommended product typically requires the user to click first before sharing or performing other behaviors. The share rate can therefore be expressed as the click-through rate multiplied by the conditional share probability after a click, satisfying the following formula:
p(y=1,z=1|x)=p(y=1|x)×p(z=1|y=1,x).
where x represents the features, y represents a click, and z represents a share. Therefore, for a first and a second independent network portion among the n independent network portions that have such a dependency relationship, the output of the second independent network portion is further connected to a product node. The product node determines the final sub-quality score of the second independent network portion by multiplying the sub-quality score output by the first independent network portion by the sub-quality score output by the second independent network portion. That is, according to the progressive relationship between the business targets, the output node multiplies the output for evaluation index 1 into the output for evaluation index 2 for joint modeling, which makes optimization of the target deep network model more stable. The dependency can be removed if there is no progressive relationship between the outputs of the two evaluation indexes.
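The product node reduces to a single multiplication at prediction time, as this sketch shows; the probability values are illustrative.

```python
# Minimal sketch of the product node for two dependent indexes:
# p(y=1, z=1 | x) = p(y=1 | x) * p(z=1 | y=1, x).

def final_share_score(p_click: float, p_share_given_click: float) -> float:
    """Final sub-quality score of the dependent (share) network portion."""
    return p_click * p_share_given_click

print(final_share_score(0.30, 0.05))   # 0.015, the modeled share rate
```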
It should be noted that fig. 4 is illustrated by data of three dimensions including user characteristics, context characteristics, and document characteristics in the input data. In one possible embodiment, the input data includes data in two dimensions, user features and document features. In another possible embodiment, the input data may further include data of other dimensions besides the user feature, the context feature and the document feature, and the embodiment does not limit the number of data dimensions included in the input data. However, in terms of characterizing the relationship between the user and the document, the input data generally includes user features and document features, and data of other dimensions can be selectively added to enrich the feature dimension.
In summary, compared with a method in which a plurality of simple neural network models are combined to solve a multi-index problem, the target depth network model provided in this embodiment can simplify the feature extraction and sample preparation process, simplify the online prediction process, and reduce the performance pressure caused by the newly added evaluation index.
In the target deep network model of this embodiment, the embedding parameters of the shared mapping layer are shared by all optimization indexes, which prevents the data of individual optimization indexes from being too sparse and lets the common abstract traits in the features assist the prediction of each optimization index.
The target deep network model of this embodiment has an independent output for each evaluation index, and the outputs can be flexibly combined online. The weights can be adjusted to bias ranking toward designated evaluation indexes to meet online business requirements.
The target deep network model can be obtained through an off-line training process. Fig. 5 shows a flowchart of a training method of a target deep network model according to an exemplary embodiment of the present application. The method comprises the following steps:
step 501, generating a training sample according to the user characteristic data and the candidate document data of the sample user account;
the sample user account is the account used by the sample user. Optionally, the server extracts the user characteristic data and the candidate document data of the sample user account from the log data of the online system. The user characteristic data and the candidate document data of each sample user account are formed into a training sample.
User characteristic data includes, but is not limited to: user profile data and user historical behavior.
Step 502, respectively calibrating positive and negative labels for training samples according to each evaluation index to obtain a training sample set;
because the target deep network model adopts at least two independent network parts corresponding to the evaluation indexes, and the at least two independent network parts should be trained independently, positive and negative labels are respectively calibrated for each evaluation index. Namely, aiming at the evaluation index 1, calibrating a positive label 1 or a negative label 1 for the training sample; aiming at the evaluation index 2, calibrating a positive label 2 or a negative label 2 for the training sample; and aiming at the evaluation index 3, calibrating the positive label 3 or the negative label 3 for the training sample, and so on, and no further description is given.
After the positive label and the negative label are calibrated for each training sample, the calibrated training samples are added to a training sample set.
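As an illustration of step 502, the sketch below derives one positive/negative label per evaluation index from a behavior-log entry. The field names are hypothetical.

```python
# Minimal sketch of step 502: one 0/1 label per evaluation index.

def calibrate_labels(log_entry: dict, indexes=("click", "share", "like")):
    """Returns positive (1) or negative (0) labels, one per index."""
    return {idx: int(bool(log_entry.get(idx))) for idx in indexes}

sample = {"click": True, "share": False, "like": True}
print(calibrate_labels(sample))   # {'click': 1, 'share': 0, 'like': 1}
```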
Step 503, inputting each training sample in the training sample set to the target deep network model, and training the target deep network model by adopting an error back propagation algorithm to obtain the trained target deep network model.
The target deep network model uses a loss function during training. The loss function measures the degree of inconsistency between the model's predicted value f(x) and the true value Y; the smaller the loss function, the better the model's prediction.
Schematically, the loss function corresponding to the target depth network model is an overall loss function which is comprehensively constructed according to n evaluation indexes.
And designing an overall loss function for the training process, wherein the overall loss function is obtained by the contribution of each evaluation index. In one implementation, the contribution of each evaluation index in the overall loss function is equal, with no bias for a certain evaluation index. In another implementation, when there is one or more important evaluation indexes, the contribution of each evaluation index in the loss function is not equal, and there is a bias for the important evaluation indexes.
In an illustrative example, the contributions of the evaluation indexes are equal, and the overall loss function is Loss = L1 + L2 + L3 + … + Ln, where Li is the loss function corresponding to the ith evaluation index and i is a positive integer.
Taking the click-through rate as an example, the log of the maximum-likelihood function (the log loss) can be used as the loss function; the click-through-rate loss is denoted L_ctr.
For the evaluation indexes of the share, collect, and like dimensions, the samples are all Bernoulli events taking values 0 and 1, so the loss functions all satisfy the form of a binomial distribution; the loss of the share dimension is denoted L_share, that of the collect dimension L_collect, and that of the like dimension L_like.
For the evaluation index of dwell duration, the sample data is non-Bernoulli, that is, the sample label is a real number rather than the 0/1 value of a binomial distribution, so the loss function can be the root mean square error (RMSE); this loss is denoted L_time here. Optionally, other value types have corresponding loss functions; for example, for the evaluation index of the negative-feedback dimension, a suitable loss function may be used and is denoted L_nega.
In an illustrative example, assuming the evaluation indexes include the six dimensions of click-through rate, share, collect, like, dwell duration, and negative feedback, the overall loss function is Loss = L_ctr + L_share + L_collect + L_like + L_time + L_nega.
In another illustrative example, the loss function for each evaluation index may also be weighted.
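Under the assumptions above (log loss for Bernoulli indexes, squared error for dwell duration, optional per-index weights), a composite loss might look like the following sketch.

```python
# Minimal sketch of an overall loss over several evaluation indexes.
import numpy as np

def log_loss(y, p, eps=1e-7):
    """Binary cross-entropy (negative log-likelihood) for 0/1 labels."""
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def squared_loss(y, p):
    """For real-valued labels such as dwell duration."""
    return np.mean((y - p) ** 2)

def overall_loss(labels, preds, weights=None):
    per_index = {
        "ctr":   log_loss(labels["ctr"], preds["ctr"]),
        "share": log_loss(labels["share"], preds["share"]),
        "time":  squared_loss(labels["time"], preds["time"]),
    }
    weights = weights or {k: 1.0 for k in per_index}   # equal contributions
    return sum(weights[k] * per_index[k] for k in per_index)
```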
In summary, in the training method provided by this embodiment, a corresponding independent network portion is trained for each evaluation index, and the coupling between the independent network portions is low. This prevents evaluation indexes with dense data from dominating the training process, so evaluation indexes with sparse data can also be learned and optimized, which better matches real application environments.
The above information flow recommendation method is explained below with a specific example. The terminal is installed with an application, and the application has a user interface for displaying the information stream. The server is the backend server of the application. When the server is a single server, the information flow recommendation function resides in that server; when the server is a server cluster, the cluster contains a server for information recommendation. As shown in fig. 6, fig. 6 is an application diagram of an information flow recommendation method according to an exemplary embodiment of the present application. The information flow recommendation method includes the following four processes:
user usage process 61:
the user uses the application program to send an information recommendation request to the server, and the server feeds back information flow to the application program. In the initial stage, the information stream fed back by the server may be generated by a generation method based on the user portrait, which is not limited in this embodiment.
The application program displays the information flow on the user interface, and a user can perform at least one operation of clicking, sharing, appreciating and commenting on the document in the information flow on the user interface.
The log collection process 62:
the server logs the user's operation behavior on the application. Optionally, each log records a user account, an operation behavior and an operation time. The operation behavior is any one of click, share, appreciate and comment.
Optionally, the server may also record a user portrait for each user account.
Offline training process 63:
a technician may build a target deep network model based on the network structure of fig. 2 described above. User features and document features are then extracted from the log file.
Optionally, the technician extracts user features from the user's historical behavior and user portrait, and document features from the document information. The user features and document features are input into the target deep network model for training to obtain the trained target deep network model. The training process may refer to the process described above with respect to fig. 5.
It should be noted that the training process can be performed in an off-line manner. The offline training process may also be performed multiple times at different time periods.
The online recommendation process 64:
after the target deep network model is trained, the target deep network model can be put into online use. When the server receives an information recommendation request of a target account sent by the terminal, the target account is a requesting user (user), the server acquires user characteristics of the target account, and generates a candidate document set for the target account. For each document in the candidate document set, document features of the document are also obtained. The server inputs the user characteristics of the target account and the document characteristics of each document into the target deep network model to obtain the sub-quality scores of each document on the n evaluation indexes, and then the weighted quality score of the document is obtained through calculation according to the sub-quality scores on the n evaluation indexes and the weights corresponding to the n evaluation indexes.
The server ranks the documents in the candidate document set in descending order of weighted quality score and generates an information stream recommended to the target account from the top k target documents. The server sends the information stream to the terminal, which displays it on the user interface.
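Putting the online recommendation process 64 together, the following sketch shows the end-to-end scoring path; `model.predict` returning n sub-quality scores per document is an assumed interface, not the actual service API.

```python
# Minimal end-to-end sketch of the online recommendation path.

def recommend(model, user_feats, ctx_feats, candidates, weights, k):
    """Score candidates, rank by weighted quality score, keep the top k."""
    scored = []
    for doc in candidates:
        subs = model.predict(user_feats, ctx_feats, doc.features)  # n scores
        score = sum(s * w for s, w in zip(subs, weights))
        scored.append((doc, score))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in scored[:k]]      # documents for the info stream
```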
As shown in fig. 7, the terminal displays the information stream in the application 70 using a list control that includes a plurality of list items 71 to 75. Each list item displays one document, and the order of the list items matches the order of items in the information stream: an item with a higher weighted quality score appears nearer the top of the list control. For example, article 1 has the highest weighted quality score, so its list item 71 is displayed at the top of the user interface; article 2 has a lower weighted quality score than article 1, so its list item 72 is displayed below list item 71. The user can slide the list control with up and down gestures on the user interface to view earlier or later list items.
Optionally, each list item displays the article title, author, thumbnail, a like button 76, and a comment button 77.
Optionally, after a certain list item is clicked, a display interface of the article corresponding to the list item is displayed, and a sharing button may be further displayed on the display interface.
Note that the weights of the n evaluation indexes are adjustable. The server receives a weight modification operation and modifies the weights corresponding to the n evaluation indexes according to the operation.
In a possible embodiment, the weights among the multiple evaluation indexes are set by first computing candidate weight combinations through offline evaluation and then verifying the effect of each weight combination through online experiments. In a single-target prediction scenario, given a ranking of the candidate document set and the corresponding target labels, Auc (Area under the curve) is generally used as the offline evaluation metric; a larger Auc indicates more accurate prediction and a better ranking effect. A grid search can find the best weight combination for the weighted quality score among multiple evaluation indexes over the offline model predictions. The specific process is as follows: for each candidate weight combination, calculate the weighted quality score y of each document, rank the documents in the candidate document set by y, and compute the Auc on each evaluation index from the ranking result. Different weight combinations yield different y values and different Aucs on the corresponding evaluation indexes. The weight combination whose Aucs dominate is then selected according to the product target, or the combination with the highest weighted average Auc, weighted by the importance of the product targets, is selected for the online experiment.
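The grid search described above can be sketched as follows; the weight grid, scikit-learn's roc_auc_score as the Auc implementation, and the mean-Auc selection rule are illustrative assumptions.

```python
# Minimal sketch of the offline grid search over weight combinations.
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def grid_search_weights(sub_scores, labels, steps=5):
    """sub_scores: (docs, n) array; labels: list of n 0/1 label arrays."""
    n = sub_scores.shape[1]
    grid = np.linspace(0.0, 1.0, steps)
    best = None
    for w in itertools.product(grid, repeat=n):
        if not np.isclose(sum(w), 1.0):
            continue                             # keep weights summing to 1
        y = sub_scores @ np.asarray(w)           # weighted quality score
        aucs = [roc_auc_score(labels[i], y) for i in range(n)]
        key = float(np.mean(aucs))               # or a product-driven choice
        if best is None or key > best[0]:
            best = (key, w, aucs)
    return best                                  # candidate for online test
```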
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 8, a block diagram of a deep network-based information flow recommendation apparatus according to an exemplary embodiment of the present application is shown. The apparatus may be implemented as all or part of a server in software, hardware, or a combination of both. The device comprises:
a receiving module 810, configured to receive an information recommendation request of a target account;
a generating module 820, configured to generate a candidate document set of the target account, where the candidate document set includes at least two documents;
the calling module 830 is configured to call a target deep network model to calculate sub-quality scores of each document of the at least two documents on n evaluation indexes, where n is an integer greater than 1;
a calculating module 840, configured to calculate a weighted quality score of the document according to the sub-quality scores of the n evaluation indexes and weights corresponding to the n evaluation indexes;
a selecting module 850, configured to select a target document from the candidate document set according to the weighted quality score, and generate the information stream according to the target document;
and a recommending module 860, configured to recommend the information stream to a terminal corresponding to the target account.
In an optional embodiment, the target deep network model comprises: a shared network portion and n independent network portions, said n independent network portions being juxtaposed to each other;
the shared network part is used for extracting a common feature vector of input data, and the input data comprises document features of the document;
and the ith independent network part in the n independent network parts is used for calculating the sub-quality fraction of the document on the ith evaluation index according to the common characteristic vector, wherein i is an integer not greater than n.
In an optional embodiment, the shared network part includes a shared feature layer, a shared mapping layer and a shared full connection layer, which are connected in sequence;
the shared characteristic layer is used for extracting sub-characteristic vectors of at least two dimensions corresponding to the input data;
the shared mapping layer is used for performing dimensionality reduction mapping on the sub-feature vectors of the at least two dimensions to obtain sub-feature vectors subjected to dimensionality reduction mapping;
and the shared full-connection layer is used for performing full-connection operation on the sub-feature vectors subjected to the dimensionality reduction mapping and outputting the sub-feature vectors as the common feature vectors.
In an alternative embodiment, the network structure in which there are at least two of said independent network portions is different, said network structure comprising: at least one of the number of neural network layers, the type of neural network layer, the weight coefficient of neural network layer, and the arrangement order of neural network layers.
In an alternative embodiment, the number of nodes of the first neural network layer of the n independent network portions is the same.
In an alternative embodiment, for a first independent network portion and a second independent network portion having a dependency relationship among the n independent network portions, the output of the second independent network portion is further connected to a product node;
the product node is configured to determine a final sub-quality score of the second independent network portion by multiplying the first sub-quality score output by the first independent network portion by the sub-quality score output by the second independent network portion.
In an optional embodiment, the apparatus further comprises an acquisition module 870.
The obtaining module 870 is configured to obtain user characteristics according to the target account, and obtain document characteristics according to the document;
the calling module 830 is configured to input the user characteristics and the document characteristics as the input data to the target deep network model; and calculating the sub-quality scores of the documents on the n evaluation indexes through the target deep network model.
In an optional embodiment, the obtaining module 870 is configured to obtain user characteristics and context characteristics according to the target account, and obtain document characteristics according to the document; wherein the context feature comprises at least one of time, place, network environment, terminal type and operating system type;
the calling module 830 is configured to input the user characteristic, the context characteristic, and the document characteristic as the input data to the target deep network model; and calculating the sub-quality scores of the documents on the n evaluation indexes through the target deep network model.
In an optional embodiment, the apparatus further comprises: a modification module 880;
the receiving module 810 is configured to receive a weight modification operation;
the modifying module 880 is configured to modify the weights corresponding to the n evaluation indexes according to the weight modifying operation.
In an alternative embodiment, the selecting module 850 is configured to rank each document in the candidate document set according to the weighted quality score; and determining the top k documents as the target documents, wherein k is an integer not greater than m.
In an optional embodiment, the loss function corresponding to the target deep network model is an overall loss function that is comprehensively constructed according to the n evaluation indexes.
It should be noted that: in the information flow recommendation device based on the deep network according to the foregoing embodiment, when recommending an information flow to a terminal, only the division of the above function modules is illustrated, and in practical applications, the function distribution may be completed by different function modules according to needs, that is, the internal structure of the device is divided into different function modules, so as to complete all or part of the functions described above. In addition, the information flow recommendation device and the method embodiment of the information flow recommendation method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiment and are not described herein again.
Fig. 10 shows a schematic structural diagram of a server according to an embodiment of the present application. The server is used for implementing the information flow recommendation method based on the deep network provided in the above embodiment. Specifically, the method comprises the following steps:
the server 1000 includes a Central Processing Unit (CPU)1001, a system memory 1004 including a Random Access Memory (RAM)1002 and a Read Only Memory (ROM)1003, and a system bus 1005 connecting the system memory 1004 and the central processing unit 1001. The server 1000 also includes a basic input/output system (I/O system) 1006, which facilitates the transfer of information between devices within the computer, and a mass storage device 1007, which stores an operating system 1013, application programs 1014, and other program modules 1015.
The basic input/output system 1006 includes a display 1008 for displaying information and an input device 1009, such as a mouse or keyboard, through which a user inputs information. The display 1008 and the input device 1009 are both connected to the central processing unit 1001 through an input/output controller 1010 connected to the system bus 1005. The basic input/output system 1006 may also include the input/output controller 1010 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 1010 also provides output to a display screen, a printer, or another type of output device.
The mass storage device 1007 is connected to the central processing unit 1001 through a mass storage controller (not shown) connected to the system bus 1005. The mass storage device 1007 and its associated computer-readable media provide non-volatile storage for the server 1000. That is, the mass storage device 1007 may include a computer readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory 1004 and the mass storage device 1007 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 1000 may also operate by connecting to a remote computer over a network such as the Internet. That is, the server 1000 may be connected to the network 1012 through the network interface unit 1011 connected to the system bus 1005, or the network interface unit 1011 may be used to connect to another type of network or a remote computer system (not shown).
The memory stores one or more programs configured to be executed by one or more processors. The one or more programs contain instructions for implementing the deep-network-based information flow recommendation method provided by the various embodiments described above.
Embodiments of the present application also provide a computer-readable storage medium storing one or more programs configured to be executed by one or more processors. The one or more programs contain instructions for implementing the deep-network-based information flow recommendation method provided by the various embodiments described above.
Embodiments of the present application also provide a computer program product storing one or more programs configured to be executed by one or more processors. The one or more programs contain instructions for implementing the deep-network-based information flow recommendation method provided by the various embodiments described above.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes the association relationship of associated objects and means that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. An information flow recommendation method based on a deep network is characterized by comprising the following steps:
receiving an information recommendation request of a target account;
generating a candidate document set of the target account, wherein the candidate document set comprises at least two documents;
calling a target deep network model to calculate the sub-quality score of each document of the at least two documents on n evaluation indexes, wherein n is an integer larger than 1, the target deep network model comprises a shared network part and n independent network parts, the n independent network parts are parallel to each other, the shared network part is used for extracting a common feature vector of input data, the input data comprises document features of the documents, the input of the n independent network parts is the common feature vector output by the shared network part, the first neural network layer of each of the n independent network parts has the same number of nodes, the ith independent network part of the n independent network parts is used for calculating the sub-quality score of the document on the ith evaluation index according to the common feature vector, and i is an integer not larger than n; wherein, for a first independent network part and a second independent network part which have a dependency relationship among the n independent network parts, the output of the second independent network part is further connected to a product node, and the product node is used for multiplying a first sub-quality score output by the first independent network part by a second sub-quality score output by the second independent network part and determining the result as the final sub-quality score of the second independent network part;
calculating to obtain a weighted quality score of the document according to the sub-quality scores of the n evaluation indexes and the weights corresponding to the n evaluation indexes;
selecting a target document from the candidate document set according to the weighted quality score, and generating the information flow according to the target document;
and recommending the information flow to a terminal corresponding to the target account.
2. The method of claim 1, wherein the shared network portion comprises a shared feature layer, a shared mapping layer, and a shared fully-connected layer, which are connected in sequence;
the shared characteristic layer is used for extracting sub-characteristic vectors of at least two dimensions corresponding to the input data;
the shared mapping layer is used for performing dimensionality reduction mapping on the sub-feature vectors of the at least two dimensions to obtain sub-feature vectors subjected to dimensionality reduction mapping;
and the shared full-connection layer is used for performing full-connection operation on the sub-feature vectors subjected to the dimensionality reduction mapping and outputting the sub-feature vectors as the common feature vectors.
3. The method of claim 1, wherein the network structures of at least two of the n independent network parts are different, the network structure comprising at least one of: the number of neural network layers, the type of the neural network layers, the weight coefficients of the neural network layers, and the arrangement order of the neural network layers.
4. The method according to any one of claims 1 to 3, wherein before the invoking the target deep network model to calculate the sub-quality scores of the document on the n evaluation indexes, the method further comprises:
acquiring user characteristics according to the target account, and acquiring document characteristics according to the document;
the method for calculating the sub-quality scores of the documents on the n evaluation indexes by calling the target deep network model comprises the following steps:
inputting the user characteristics and the document characteristics into the target deep network model as the input data;
and calculating the sub-quality scores of the documents on the n evaluation indexes through the target deep network model.
5. The method according to any one of claims 1 to 3, wherein before the invoking the target deep network model to calculate the sub-quality scores of the document on the n evaluation indexes, the method further comprises:
acquiring user characteristics and context characteristics according to the target account, and acquiring document characteristics according to the document; wherein the context feature comprises at least one of time, place, network environment, terminal type and operating system type;
the method for calculating the sub-quality scores of the documents on the n evaluation indexes by calling the target deep network model comprises the following steps:
inputting the user features, the context features and the document features as the input data to the target deep network model;
and calculating the sub-quality scores of the documents on the n evaluation indexes through the target deep network model.
6. The method of any of claims 1 to 3, further comprising:
receiving a weight modification operation;
and modifying the weights corresponding to the n evaluation indexes according to the weight modification operation.
7. The method of claim 1, wherein selecting a target document from the set of candidate documents according to the weighted quality score comprises:
ranking each document in the candidate document set according to the weighted quality score;
and determining the top k documents as the target documents, wherein k is an integer not greater than m, and m is the number of documents in the candidate document set.
8. The method according to any one of claims 1 to 3, wherein the loss function corresponding to the target deep network model is an overall loss function constructed jointly according to the n evaluation indexes.
9. An information flow recommendation apparatus based on a deep network, the apparatus comprising:
the receiving module is used for receiving an information recommendation request of a target account;
the generation module is used for generating a candidate document set of the target account, wherein the candidate document set comprises at least two documents;
a calling module, configured to call a target deep network model to calculate a sub-quality score of each document of the at least two documents on n evaluation indexes, where n is an integer greater than 1, the target deep network model includes a shared network portion and n independent network portions, the n independent network portions are parallel to each other, the shared network portion is configured to extract a common feature vector of input data, the input data includes document features of the documents, the input of the n independent network portions is the common feature vector output by the shared network portion, the first neural network layer of each of the n independent network portions has the same number of nodes, the i-th independent network portion of the n independent network portions is configured to calculate the sub-quality score of the document on the i-th evaluation index according to the common feature vector, and i is an integer not greater than n; wherein, for a first independent network portion and a second independent network portion which have a dependency relationship among the n independent network portions, the output of the second independent network portion is further connected to a product node, and the product node is configured to multiply a first sub-quality score output by the first independent network portion by a second sub-quality score output by the second independent network portion and determine the result as the final sub-quality score of the second independent network portion;
the calculation module is used for calculating the weighted quality score of the document according to the sub-quality scores of the n evaluation indexes and the weights corresponding to the n evaluation indexes;
the selection module is used for selecting a target document from the candidate document set according to the weighted quality score and generating the information flow according to the target document;
and the recommending module is used for recommending the information flow to the terminal corresponding to the target account.
10. A server, characterized in that the server comprises:
a processor and a memory, the memory storing at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the deep network-based information flow recommendation method of any of claims 1 to 8.
11. A computer-readable storage medium storing at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the deep network-based information flow recommendation method according to any one of claims 1 to 8.
CN201910175634.3A 2019-03-08 2019-03-08 Information flow recommendation method, device, equipment and storage medium based on deep network Active CN110266745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910175634.3A CN110266745B (en) 2019-03-08 2019-03-08 Information flow recommendation method, device, equipment and storage medium based on deep network

Publications (2)

Publication Number Publication Date
CN110266745A CN110266745A (en) 2019-09-20
CN110266745B true CN110266745B (en) 2022-02-25

Family

ID=67911723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910175634.3A Active CN110266745B (en) 2019-03-08 2019-03-08 Information flow recommendation method, device, equipment and storage medium based on deep network

Country Status (1)

Country Link
CN (1) CN110266745B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784151B (en) * 2019-11-08 2024-02-06 北京搜狗科技发展有限公司 Method and related device for determining recommended information
CN110837598B (en) * 2019-11-11 2021-03-19 腾讯科技(深圳)有限公司 Information recommendation method, device, equipment and storage medium
CN112925924A (en) * 2019-12-05 2021-06-08 北京达佳互联信息技术有限公司 Multimedia file recommendation method and device, electronic equipment and storage medium
CN112925963B (en) * 2019-12-06 2022-11-22 杭州海康威视数字技术股份有限公司 Data recommendation method and device
CN111193795B (en) * 2019-12-30 2021-07-02 腾讯科技(深圳)有限公司 Information pushing method and device, electronic equipment and computer readable storage medium
CN111310034B (en) * 2020-01-23 2023-04-07 深圳市雅阅科技有限公司 Resource recommendation method and related equipment
CN111291266B (en) * 2020-02-13 2023-03-21 深圳市雅阅科技有限公司 Artificial intelligence based recommendation method and device, electronic equipment and storage medium
CN111459783B (en) * 2020-04-03 2023-04-18 北京字节跳动网络技术有限公司 Application program optimization method and device, electronic equipment and storage medium
CN113542893B (en) * 2020-04-14 2023-08-18 北京达佳互联信息技术有限公司 Method and device for acquiring work evaluation information, and video screening method and device
CN112200639A (en) * 2020-10-30 2021-01-08 杭州时趣信息技术有限公司 Information flow model construction method, device and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105337843A (en) * 2015-09-23 2016-02-17 腾讯科技(深圳)有限公司 Interaction system and method, client, and background server
CN107045693A (en) * 2017-05-05 2017-08-15 北京媒立方传媒科技有限公司 Media characteristic determination, Media Recommendation Method and device
CN108280114A (en) * 2017-07-28 2018-07-13 淮阴工学院 A kind of user's literature reading interest analysis method based on deep learning
US10173773B1 (en) * 2016-02-23 2019-01-08 State Farm Mutual Automobile Insurance Company Systems and methods for operating drones in response to an incident
CN109241425A (en) * 2018-08-31 2019-01-18 腾讯科技(深圳)有限公司 A kind of resource recommendation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221129

Address after: 1402, Floor 14, Block A, Haina Baichuan Headquarters Building, No. 6, Baoxing Road, Haibin Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518133

Patentee after: Shenzhen Yayue Technology Co.,Ltd.

Address before: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.