CN105653683B - Personalized recommendation method and device - Google Patents

Personalized recommendation method and device

Info

Publication number
CN105653683B
Authority
CN
China
Prior art keywords
prediction model
sample
theta
operation information
sample data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201511020219.9A
Other languages
Chinese (zh)
Other versions
CN105653683A (en)
Inventor
姜立宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201511020219.9A
Publication of CN105653683A
Application granted
Publication of CN105653683B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation

Abstract

The invention discloses a personalized recommendation method and device. The method comprises the following steps: acquiring current operation information of an operation action executed by a user to be tested on an article to be tested; obtaining a prediction model, wherein the prediction model is used for representing the corresponding relation between operation information and a score value, and the prediction model is obtained by training sample data, and the sample data comprises a sample score value of a sample user on a sample article and sample operation information of an operation action of the sample user on the sample article; obtaining a prediction score value corresponding to the current operation information by using the prediction model; and performing personalized recommendation based on the prediction score value. Therefore, personalized recommendation can be realized based on the implicit behavior of the user on the article.

Description

Personalized recommendation method and device
Technical Field
The invention relates to the field of data processing, in particular to a personalized recommendation method and device.
Background
With the continuous development of information technology, personalized recommendation technology has been developed to better serve users by providing personalized recommended content that meets their requirements. Generally, personalized recommendation technology generates personalized recommended content for a user based on user behavior data combined with a certain data analysis method.
Specifically, user behavior can be classified into explicit behavior and implicit behavior. Explicit behavior may be a preference for an article that the user indicates directly, such as the user scoring the article. Implicit behavior may be an operation action performed by the user on the article, such as browsing, purchasing, rating, collecting, forwarding, approving, downloading, and so on.
When personalized recommendation is performed, for example in a personalized recommendation scheme implemented based on a collaborative filtering algorithm, the similarity between users can be calculated from the users' score values for articles, and neighbor users having the same or similar interest preferences as a target user are determined; the target user's score values for articles to be pushed are then predicted from the neighbor users' score values for those articles, and the articles to be pushed with higher predicted score values are sent to the target user as recommended content. Here, a higher predicted score value can be understood as one exceeding a preset value.
Therefore, the user's score value for an article plays an important role in such personalized recommendation schemes. For explicit behavior, the captured score value of the user for the article can be used directly to perform personalized recommendation. For implicit behavior, however, usually only the operation information of the operation action performed on the article by the user can be captured: if the operation action is browsing, the operation information can be embodied as the browsing duration; if the operation action is purchasing, the operation information can be embodied as the number of purchases; and so on. How to use the captured operation information to perform personalized recommendation has therefore become an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide a personalized recommendation method and device, which can realize personalized recommendation based on the implicit behavior of a user on an article.
The embodiment of the invention provides a personalized recommendation method, which comprises the following steps:
acquiring current operation information of an operation action executed by a user to be tested on an article to be tested;
obtaining a prediction model, wherein the prediction model is used for representing the corresponding relation between operation information and a score value, and the prediction model is obtained by training sample data, and the sample data comprises a sample score value of a sample user on a sample article and sample operation information of an operation action of the sample user on the sample article;
obtaining a prediction score value corresponding to the current operation information by using the prediction model;
and performing personalized recommendation based on the prediction score value.
Optionally, the prediction model is: y = θ_1·x_1 + θ_2·x_2 + … + θ_i·x_i + … + θ_n·x_n + θ_0,
wherein y represents a score value; θ_i represents the weight value of the i-th operation action; x_i represents the operation information of the i-th operation action; 1 ≤ i ≤ n, where n is the number of types of operation actions; and θ_0 is a constant.
Optionally, training the sample data to obtain the prediction model includes:
acquiring a plurality of groups of the sample data;
establishing an original prediction model, and obtaining a predicted value corresponding to the sample operation information in each group of sample data by using the original prediction model;
establishing a loss function, wherein the loss function is used for expressing the deviation between the sample score value in each group of sample data and the estimated value corresponding to the sample operation information in the group of sample data;
and adjusting the original prediction model, and obtaining the prediction model when the loss function reaches the minimum value.
Optionally, the loss function is:
J(θ) = (1/2) · Σ_{j=1}^{m} (y_j − z_j)²
wherein J(θ) represents the loss function; y_j represents the estimated value corresponding to the sample operation information in the j-th group of sample data; z_j represents the sample score value in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data.
Optionally, the original prediction model is:
y = θ_11·x_1 + θ_12·x_2 + … + θ_1i·x_i + … + θ_1n·x_n + θ_10
wherein θ_1i represents the initial weight value of the i-th operation action and θ_10 represents an initial constant. Then
the adjusting the original prediction model and obtaining the prediction model when the loss function reaches a minimum value includes:
adjusting θ_1i in the original prediction model by the following formula: θ_2i = θ_1i + α·(z_j − y_j)·x_ji, wherein θ_2i is the adjusted weight value of the i-th operation action after one adjustment of the original prediction model; α is the learning rate; (z_j − y_j)·x_ji is obtained by taking the partial derivative of the loss function J(θ) with respect to θ_1i; x_ji represents the sample operation information of the i-th operation action in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data;
judging whether θ_2i makes the loss function reach the minimum value; if so, determining θ_2i as θ_i and obtaining the prediction model; if not, continuing the adjustment on the basis of θ_2i by using α and (z_j − y_j)·x_ji until the prediction model is obtained.
The embodiment of the invention also provides a personalized recommendation device, which comprises:
the operation information acquisition unit is used for acquiring the current operation information of the operation action executed by the user to be tested on the article to be tested;
the prediction model obtaining unit is used for obtaining a prediction model, the prediction model is used for representing the corresponding relation between the operation information and the score value, and the prediction model is obtained by training sample data, wherein the sample data comprises the sample score value of a sample user on a sample article and sample operation information of an operation action performed on the sample article by the sample user;
a score value obtaining unit, configured to obtain, by using the prediction model, a prediction score value corresponding to the current operation information;
and the personalized recommendation unit is used for performing personalized recommendation based on the prediction score value.
Optionally, the prediction model obtaining unit is specifically configured to obtain the following prediction model:
y = θ_1·x_1 + θ_2·x_2 + … + θ_i·x_i + … + θ_n·x_n + θ_0
wherein y represents a score value; θ_i represents the weight value of the i-th operation action; x_i represents the operation information of the i-th operation action; 1 ≤ i ≤ n, where n is the number of types of operation actions; and θ_0 is a constant.
Optionally, the apparatus further comprises:
the sample data acquisition unit is used for acquiring a plurality of groups of sample data;
the model establishing unit is used for establishing an original prediction model and obtaining a predicted value corresponding to the sample operation information in each group of sample data by using the original prediction model;
the loss function establishing unit is used for establishing a loss function, and the loss function is used for expressing the deviation between the sample score value in each group of sample data and the estimated value corresponding to the sample operation information in the group of sample data;
and the model adjusting unit is used for adjusting the original prediction model and obtaining the prediction model when the loss function reaches the minimum value.
Optionally, the loss function establishing unit is specifically configured to establish the following loss function:
J(θ) = (1/2) · Σ_{j=1}^{m} (y_j − z_j)²
wherein J(θ) represents the loss function; y_j represents the estimated value corresponding to the sample operation information in the j-th group of sample data; z_j represents the sample score value in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data.
Optionally, the model establishing unit is specifically configured to establish the following original prediction model:
y = θ_11·x_1 + θ_12·x_2 + … + θ_1i·x_i + … + θ_1n·x_n + θ_10
wherein θ_1i represents the initial weight value of the i-th operation action and θ_10 represents an initial constant. Then
the model adjusting unit is specifically configured to adjust θ_1i in the original prediction model by the following formula: θ_2i = θ_1i + α·(z_j − y_j)·x_ji, wherein θ_2i is the adjusted weight value of the i-th operation action after one adjustment of the original prediction model; α is the learning rate; (z_j − y_j)·x_ji is obtained by taking the partial derivative of the loss function J(θ) with respect to θ_1i; x_ji represents the sample operation information of the i-th operation action in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data; to judge whether θ_2i makes the loss function reach the minimum value; if so, to determine θ_2i as θ_i and obtain the prediction model; if not, to continue the adjustment on the basis of θ_2i by using α and (z_j − y_j)·x_ji until the prediction model is obtained.
According to the technical scheme, the captured implicit behaviors of the user can be converted into predicted score values of the user for articles through a prediction model representing the corresponding relation between implicit behavior and score value, so that personalized recommendation can be achieved based on the predicted score values. In addition, obtaining the prediction model by training sample data also helps improve the accuracy of the score value prediction.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a personalized recommendation method of the present invention;
FIG. 2 is a flow chart of a method of obtaining a predictive model in accordance with the present invention;
FIG. 3 is a schematic diagram of a user-item preference matrix of the present invention;
fig. 4 is a schematic structural diagram of the personalized recommendation device of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
Referring to fig. 1, a flow chart of a personalized recommendation method of the present invention is shown, which may include:
S101, obtaining current operation information of operation actions executed by a user to be tested on an article to be tested.
S102, obtaining a prediction model, wherein the prediction model is used for representing the corresponding relation between the operation information and the score value, and obtaining the prediction model by training sample data, and the sample data comprises the sample score value of the sample user on the sample article and the sample operation information of the operation action of the sample user on the sample article.
In the scheme of the invention, the prediction score value corresponding to the user's implicit behavior toward the article can be estimated by using the prediction model obtained through training, and personalized recommendation can then be carried out by using the prediction score value.
Firstly, when the score value is predicted, the scheme of the invention obtains the following two kinds of information:
on one hand, the current operation information of the operation action executed by the user to be tested on the object to be tested.
For example, current operation information can be captured in real time for online score value prediction; alternatively, when personalized recommendation needs to be performed, offline score value prediction may be carried out using previously captured current operation information, which is not specifically limited in the embodiment of the present invention.
For example, the following information is captured for user A watching video A: the duration for which user A watched video A, and the fact that user A collected video A. Thus operation action 1 is browsing, and current operation information 1 is the browsing duration; operation action 2 is collecting, and current operation information 2 is the number of collections, namely 1.
On the other hand, the prediction model may represent the corresponding relation between the operation information and the score value. In the scheme of the invention, the prediction model can be: y = θ_1·x_1 + θ_2·x_2 + … + θ_i·x_i + … + θ_n·x_n + θ_0, wherein y represents a score value; θ_i represents the weight value of the i-th operation action; x_i represents the operation information of the i-th operation action; 1 ≤ i ≤ n, where n is the number of types of operation actions; and θ_0 is a constant.
In the application process, the types of operation action in the prediction model can be selected according to actual requirements. For example, it can be determined through statistical analysis which operation actions have a greater impact on the prediction score value for most articles, such as purchasing and browsing, and the prediction model can then include at least purchasing and browsing. Alternatively, it may be determined, in combination with the characteristics of some specific articles, which operation actions have a greater influence on the prediction score value for that specific article; for example, if evaluation has a greater influence for specific article A, the prediction model may at least include the evaluation action. Or, as many types of operation action as possible may be selected, which is not specifically limited in the embodiment of the present invention.
As an example, the present invention further provides a solution for obtaining a prediction model, which is described in detail in fig. 2 below and will not be described in detail here.
S103, obtaining a prediction score value corresponding to the current operation information by using the prediction model.
After the two kinds of information are obtained, the prediction score value corresponding to the implicit behavior of the user to be tested toward the article to be tested can be obtained by using the prediction model.
In the above example, if y = θ_1·x_1 + θ_2·x_2 + θ_0, wherein θ_1 is the weight value of the browsing duration in score value prediction, x_1 is the browsing duration, θ_2 is the weight value of collecting in score value prediction, and x_2 is the number of collections, then the prediction score value corresponding to the implicit behavior of user A when watching video A can be obtained from the prediction model and the current operation information.
In the above example, if y = θ_1·x_1 + θ_2·x_2 + θ_3·x_3 + θ_0, that is, the prediction model includes operation information of an uncaptured operation action, for example θ_3 is the weight value of purchasing in score value prediction and x_3 is the number of purchases, the operation information of such an uncaptured operation action may be regarded as 0; in the above example this can be understood as x_3 = 0. Thus, based on the prediction model and the current operation information, the prediction score value corresponding to the implicit behavior of user A when watching video A can likewise be obtained.
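To make the scoring step of S103 concrete, the following is a minimal Python sketch assuming the linear model y = θ_1·x_1 + … + θ_n·x_n + θ_0 described above, with uncaptured operation actions counted as 0; the action names, weight values and operation information used here are illustrative assumptions, not values taken from the patent.

```python
def predict_score(theta, theta_0, current_ops):
    """y = sum_i theta_i * x_i + theta_0; uncaptured operation actions count as 0."""
    return sum(w * current_ops.get(action, 0.0) for action, w in theta.items()) + theta_0

# Example: user A watching video A (browsing duration and collection captured, purchase not captured).
theta = {"browse": 0.05, "collect": 1.2, "purchase": 0.8}   # assumed weight values theta_i
current_ops = {"browse": 42.0, "collect": 1.0}              # x_purchase defaults to 0
score = predict_score(theta, theta_0=0.5, current_ops=current_ops)
print(score)
```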
And S104, performing personalized recommendation based on the prediction score value.
For example, the user to be tested is U_1 and the article to be tested is T_1; the scheme of the invention obtains U_1's prediction score value X_11 corresponding to the operation actions performed on T_1, and X_11 can then be supplemented into the user-item preference matrix shown in Fig. 3 for use in personalized recommendation. The matrix shown in Fig. 3 may represent the score values of g users for k items.
For example, when personalized recommendation is performed based on a collaborative filtering algorithm, X_11 can be used to calculate the similarity between U_1 and other users so as to determine U_1's neighbor users, and recommended content is then determined for U_1 according to the neighbor users' score values for articles, implementing personalized recommendation for U_1. Alternatively, when personalized recommendation is made to another user in the matrix, for example to U_2, X_11 can be used to judge whether U_1 is a neighbor user of U_2; if U_1 is a neighbor user of U_2, recommended content can further be determined for U_2 according to U_1's score values for articles, implementing personalized recommendation for U_2. That is to say, based on the prediction score value in the scheme of the invention, personalized recommendation can be performed not only for the user to be tested but also for other users.
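As an illustration of how the filled-in user-item preference matrix of Fig. 3 could then drive the collaborative filtering step just described, here is a short Python sketch; the cosine similarity measure and the similarity-weighted average prediction are common choices assumed for this example, since the text does not fix a particular similarity formula.

```python
import numpy as np

def recommend_for_user(R, u, top_neighbors=3, top_items=5):
    """R: g x k user-item preference matrix (0 = unknown score); u: index of the target user."""
    norms = np.linalg.norm(R, axis=1) + 1e-9
    sims = (R @ R[u]) / (norms * norms[u])       # cosine similarity to every user
    sims[u] = -1.0                               # exclude the target user itself
    neighbors = np.argsort(sims)[::-1][:top_neighbors]

    weights = sims[neighbors]
    pred = weights @ R[neighbors] / (np.abs(weights).sum() + 1e-9)  # weighted neighbor scores
    unseen = np.where(R[u] == 0)[0]              # only recommend articles the user has not scored
    return unseen[np.argsort(pred[unseen])[::-1][:top_items]]
```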
In conclusion, the scheme of the invention can convert the captured implicit behavior of the user into a predicted score value of the user for the article, achieving the purpose of performing personalized recommendation based on implicit behavior. In addition, predicting the score value corresponding to implicit behavior by using a prediction model obtained through training also helps improve the accuracy of the score value prediction.
Referring to fig. 2, a flow chart illustrating a method of obtaining a predictive model of the present invention may include:
S201, obtaining a plurality of groups of sample data.
In the scheme of the present invention, the sample data is obtained by capturing historical behavior data of sample users; for example, each group of sample data may be embodied as: sample user – sample article – sample score value – implicit behavior (specifically, the operation actions and their operation information), wherein the implicit behavior can be one-dimensional, i.e., include only one operation action, or multidimensional, i.e., include at least two operation actions.
It is to be understood that the sample article may be a generally representative article, or the sample article may be an article having a high degree of correlation with the article to be tested, which is not particularly limited in the embodiment of the present invention.
In addition, regarding the number of groups of sample data participating in training: generally, the larger the number of groups of sample data, the higher the accuracy of the obtained prediction model, so an appropriately large number of groups of sample data can be selected for training; alternatively, the number of selected sample data groups may be adaptively adjusted according to the accuracy of the prediction model obtained by training.
For example, N groups of sample data may be selected first, and training may be performed on these N groups to obtain a prediction model; some test sample data is then selected and used to measure the accuracy of the prediction model, and if the accuracy of the prediction model is low, a further M groups of sample data may be selected in addition to the N groups, and training on the (N + M) groups of sample data yields an updated prediction model. That is, the present invention can improve the accuracy of the prediction model by appropriately increasing the sample data capacity. Low accuracy of the prediction model can be understood as the accuracy being lower than a preset value; the preset value is not limited in the embodiment of the invention and can be determined according to practical application requirements.
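A rough sketch of this accuracy-driven adjustment of the sample data capacity follows; train_model, evaluate_accuracy and the preset accuracy threshold are placeholders standing in for details the text leaves open.

```python
def fit_with_enough_samples(samples, test_samples, train_model, evaluate_accuracy,
                            n_initial=1000, m_step=500, min_accuracy=0.8):
    """Grow the training set by M groups at a time until the prediction model is accurate enough."""
    n = n_initial
    model = train_model(samples[:n])
    while evaluate_accuracy(model, test_samples) < min_accuracy and n < len(samples):
        n += m_step                       # select a further M groups of sample data
        model = train_model(samples[:n])
    return model
```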
S202, establishing an original prediction model, and obtaining a predicted value corresponding to the sample operation information in each group of sample data by using the original prediction model.
As an example, the original prediction model in the solution of the present invention can be embodied as the following formula:
y = θ_11·x_1 + θ_12·x_2 + … + θ_1i·x_i + … + θ_1n·x_n + θ_10
wherein θ_1i represents the initial weight value of the i-th operation action and θ_10 represents an initial constant. Specifically, θ_1i and θ_10 may be assigned random values, which is not specifically limited in the embodiment of the present invention.
S203, establishing a loss function, wherein the loss function is used for representing the deviation between the sample score value in each group of sample data and the estimated value corresponding to the sample operation information in the group of sample data.
Besides the original prediction model, the scheme of the invention also establishes a Loss Function, which represents the deviation between the sample score value and the estimated value corresponding to the sample operation information obtained from the prediction model. In general, when the loss function reaches its minimum, i.e., when the difference between the estimated value and the sample score value is smallest, the prediction model used at that point is considered the most accurate. Therefore, the original prediction model can be adjusted with minimization of the loss function as the goal, thereby obtaining the prediction model in the scheme of the invention.
Specifically, the loss function can be established in the following manner.
Suppose ε_j = y_j − z_j represents the deviation between the estimated value y_j and the sample score value z_j of the j-th group of sample data. In general, the cause of ε_j may be related to the types of operation action chosen in the prediction model, or to random noise present in the sample data. Suppose the ε_j are independently and identically distributed and follow a normal distribution, i.e. ε_j ~ N(0, σ²). Then:
p(ε_j) = (1 / (√(2π)·σ)) · exp(−ε_j² / (2σ²))
From this it can be deduced:
p(y_j | x_j; θ) = (1 / (√(2π)·σ)) · exp(−(y_j − z_j)² / (2σ²))
where p(y_j | x_j; θ) represents the distribution probability of y_j given x_j and θ.
Specifically, p(y_j | x_j; θ) is the probability obtained for the j-th group of sample data; the value of θ should be the one obtained when all sample data are predicted best, that is, when the probability product over all sample data is maximized. As an example, the maximum probability product can be solved by the maximum likelihood method; specifically, the likelihood function L(θ) can be embodied as the following formula:
L(θ) = Π_{j=1}^{m} p(y_j | x_j; θ)
where m is the number of groups of sample data.
Taking the logarithm of L(θ):
log L(θ) = m·log(1 / (√(2π)·σ)) − (1/σ²) · (1/2) · Σ_{j=1}^{m} (y_j − z_j)²
Making log L(θ) a maximum is thus equivalent to making
(1/2) · Σ_{j=1}^{m} (y_j − z_j)²
a minimum. Therefore, the loss function J(θ) in the solution of the present invention can be embodied by the following formula:
J(θ) = (1/2) · Σ_{j=1}^{m} (y_j − z_j)²
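For reference, the derivation above can be restated compactly in LaTeX, under the same assumption that ε_j = y_j − z_j follows N(0, σ²):

```latex
\begin{align}
  p(y_j \mid x_j; \theta) &= \frac{1}{\sqrt{2\pi}\,\sigma}
      \exp\!\left(-\frac{(y_j - z_j)^2}{2\sigma^2}\right) \\
  \log L(\theta) &= \sum_{j=1}^{m} \log p(y_j \mid x_j; \theta)
      = m \log\frac{1}{\sqrt{2\pi}\,\sigma}
        - \frac{1}{2\sigma^2}\sum_{j=1}^{m}(y_j - z_j)^2 \\
  \max_{\theta} \log L(\theta)
      &\iff \min_{\theta} J(\theta), \qquad
      J(\theta) = \frac{1}{2}\sum_{j=1}^{m}(y_j - z_j)^2
\end{align}
```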
S204, adjusting the original prediction model, and obtaining the prediction model when the loss function reaches the minimum value.
From the above analysis, it can be seen that when the loss function reaches its minimum value the estimated value is closest to the sample score value, so the process of obtaining the prediction model in the scheme of the present invention can be converted into the process of solving for the minimum value of the loss function. Specifically, the minimum value of the loss function may be solved by a stochastic gradient descent method, a least squares method, or the like, which is not specifically limited in the embodiment of the present invention.
The following explains the process of adjusting the original prediction model by taking the prediction model obtained by the stochastic gradient descent method as an example.
First, θ_1i in the original prediction model can be adjusted by the following formula:
θ_2i = θ_1i + α·(z_j − y_j)·x_ji
wherein θ_2i is the adjusted weight value of the i-th operation action after one adjustment of the original prediction model; α is the learning rate, which can be used to indicate the step size of the iterative adjustment; (z_j − y_j)·x_ji may be obtained by taking the partial derivative of the loss function J(θ) with respect to θ_1i, and the partial derivative can be used to indicate the direction of the iterative adjustment; x_ji represents the sample operation information of the i-th operation action in the j-th group of sample data.
The partial derivative with respect to θ_1i (for the j-th group of sample data) can be embodied as:
∂J(θ)/∂θ_1i = (y_j − z_j)·x_ji
In the scheme of the invention, adjusting the original prediction model can be understood as adjusting the values of the parameters θ_1i and θ_10 in the model, which specifically involves the following two aspects of information:
On the one hand, α can be used to represent the adjustment step size. Specifically, α can be set empirically; generally, if α is set small, the convergence speed of the loss function is relatively slow, but the convergence point of the loss function, i.e., its minimum value, can clearly be found; if α is set large, the convergence speed of the loss function will be fast, but the convergence point may not be easy to find. As an example, several values of α may be set, and a suitable α selected through multiple trials. The value of α is not specifically limited in the embodiments of the present invention.
On the other hand, (z_j − y_j)·x_ji can be used to indicate the direction of adjustment. Specifically, it may be obtained by taking the partial derivative of the loss function J(θ) with respect to θ_1i; typically, the adjustment direction is the direction opposite to the computed partial derivative. In addition, it should be noted that if the prediction model in the solution of the present invention is directed at one-dimensional implicit behavior, that is, the original prediction model is y = θ_11·x_1 + θ_10, the adjustment direction can be obtained by taking the derivative of the loss function J(θ) with respect to θ_11.
In addition, similarly to the adjustment of θ_1i, θ_10 in the original prediction model can be adjusted according to the following formula:
θ_20 = θ_10 + α·(z_j − y_j)
Next, it is determined whether the adjusted θ_2i minimizes the loss function.
As an example, it may be determined whether the weight value of the operation action continuously lingers at the convergence point, that is, whether the weight value is substantially unchanged during the last several iterative adjustments. If so, the loss function may be considered to have reached a minimum value.
For example, α and (z_j − y_j)·x_ji can be used to obtain θ_3i after adjusting θ_2i, θ_4i after adjusting θ_3i, and so on; if θ_2i, θ_3i and θ_4i are substantially unchanged, it can be determined that the loss function has reached its minimum value, and θ_2i can be determined as θ_i in the prediction model, yielding the prediction model in the scheme of the invention. Specifically, the prediction model obtained is then:
y = θ_21·x_1 + θ_22·x_2 + … + θ_2i·x_i + … + θ_2n·x_n + θ_20
If the loss function has not reached the minimum value after adjusting from θ_1i to θ_2i, the adjustment can be continued on the basis of θ_2i by using α and (z_j − y_j)·x_ji, iterating until it is determined that the loss function has reached the minimum value and the prediction model is obtained. The specific process is as described above and is not repeated here.
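Putting the steps of Fig. 2 together, the following Python sketch trains the linear prediction model with the stochastic gradient descent adjustment θ_2i = θ_1i + α·(z_j − y_j)·x_ji described above; the concrete convergence test (weights essentially unchanged across one pass over the sample data) and the default learning rate are assumptions chosen for illustration.

```python
import numpy as np

def train_prediction_model(X, z, alpha=0.01, max_epochs=1000, tol=1e-6):
    """X: m x n matrix of sample operation information; z: m sample score values."""
    m, n = X.shape
    theta = np.random.randn(n)                    # random initial weight values theta_1i
    theta0 = np.random.randn()                    # random initial constant theta_10
    for _ in range(max_epochs):
        prev = theta.copy()
        for j in range(m):
            y_j = X[j] @ theta + theta0           # estimated value for the j-th group
            theta += alpha * (z[j] - y_j) * X[j]  # theta_i <- theta_i + alpha*(z_j - y_j)*x_ji
            theta0 += alpha * (z[j] - y_j)        # constant term adjusted the same way
        if np.max(np.abs(theta - prev)) < tol:    # weights linger at the convergence point
            break
    return theta, theta0
```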
Corresponding to the method shown in fig. 1, an embodiment of the invention further provides a personalized recommendation device 300, which is shown in fig. 4 as a schematic diagram, and the device may include:
an operation information obtaining unit 301, configured to obtain current operation information of an operation action performed on an article to be tested by a user to be tested;
a prediction model obtaining unit 302, configured to obtain a prediction model, where the prediction model is used to represent a correspondence between operation information and a score value, and obtain the prediction model by training sample data, where the sample data includes a sample score value of a sample item by a sample user and sample operation information of an operation action performed on the sample item by the sample user;
a score value obtaining unit 303, configured to obtain, by using the prediction model, a prediction score value corresponding to the current operation information;
and the personalized recommendation unit 304 is configured to perform personalized recommendation based on the predicted score value.
Optionally, the prediction model obtaining unit is specifically configured to obtain the following prediction model:
y = θ_1·x_1 + θ_2·x_2 + … + θ_i·x_i + … + θ_n·x_n + θ_0
wherein y represents a score value; θ_i represents the weight value of the i-th operation action; x_i represents the operation information of the i-th operation action; 1 ≤ i ≤ n, where n is the number of types of operation actions; and θ_0 is a constant.
Optionally, the apparatus further comprises:
the sample data acquisition unit is used for acquiring a plurality of groups of sample data;
the model establishing unit is used for establishing an original prediction model and obtaining a predicted value corresponding to the sample operation information in each group of sample data by using the original prediction model;
the loss function establishing unit is used for establishing a loss function, and the loss function is used for expressing the deviation between the sample score value in each group of sample data and the estimated value corresponding to the sample operation information in the group of sample data;
and the model adjusting unit is used for adjusting the original prediction model and obtaining the prediction model when the loss function reaches the minimum value.
Optionally, the loss function establishing unit is specifically configured to establish the following loss function:
J(θ) = (1/2) · Σ_{j=1}^{m} (y_j − z_j)²
wherein J(θ) represents the loss function; y_j represents the estimated value corresponding to the sample operation information in the j-th group of sample data; z_j represents the sample score value in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data.
Optionally, the model establishing unit is specifically configured to establish the following original prediction model:
y = θ_11·x_1 + θ_12·x_2 + … + θ_1i·x_i + … + θ_1n·x_n + θ_10
wherein θ_1i represents the initial weight value of the i-th operation action and θ_10 represents an initial constant. Then
the model adjusting unit is specifically configured to adjust θ_1i in the original prediction model by the following formula:
θ_2i = θ_1i + α·(z_j − y_j)·x_ji
wherein θ_2i is the adjusted weight value of the i-th operation action after one adjustment of the original prediction model; α is the learning rate; (z_j − y_j)·x_ji is obtained by taking the partial derivative of the loss function J(θ) with respect to θ_1i; x_ji represents the sample operation information of the i-th operation action in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data; to judge whether θ_2i makes the loss function reach the minimum value; if so, to determine θ_2i as θ_i and obtain the prediction model; if not, to continue the adjustment on the basis of θ_2i by using α and (z_j − y_j)·x_ji until the prediction model is obtained.
The preferred embodiments of the present invention have been described in detail with reference to the accompanying drawings, however, the present invention is not limited to the specific details of the above embodiments, and various simple modifications can be made to the technical solution of the present invention within the technical idea of the present invention, and these simple modifications are within the protective scope of the present invention.
It should be noted that the various technical features described in the above embodiments can be combined in any suitable manner without contradiction, and the invention is not described in any way for the possible combinations in order to avoid unnecessary repetition.
In addition, any combination of the various embodiments of the present invention is also possible, and the same should be considered as the disclosure of the present invention as long as it does not depart from the spirit of the present invention.

Claims (6)

1. A method for personalized recommendation, the method comprising:
acquiring current operation information of an operation action executed by a user to be tested on an article to be tested, wherein the current operation information comprises: browsing duration, collection times and purchase times;
obtaining a prediction model, wherein the prediction model is used for representing the corresponding relation between operation information and a score value, and the prediction model is obtained by training sample data, and the sample data comprises a sample score value of a sample user on a sample article and sample operation information of an operation action of the sample user on the sample article; the prediction model is: y = θ_1·x_1 + θ_2·x_2 + … + θ_i·x_i + … + θ_n·x_n + θ_0, wherein y represents a score value; θ_i represents the weight value of the i-th operation action; x_i represents the operation information of the i-th operation action; 1 ≤ i ≤ n, where n is the number of types of operation actions; and θ_0 is a constant;
obtaining a prediction score value corresponding to the current operation information by using the prediction model;
based on the prediction score value, performing personalized recommendation;
the training of the sample data to obtain the prediction model comprises:
acquiring a plurality of groups of the sample data;
establishing an original prediction model, and obtaining a predicted value corresponding to the sample operation information in each group of sample data by using the original prediction model;
establishing a loss function, wherein the loss function is used for expressing the deviation between the sample score value in each group of sample data and the estimated value corresponding to the sample operation information in the group of sample data;
adjusting the original prediction model, and obtaining the prediction model when the loss function reaches the minimum value;
the personalized recommendation based on the prediction scoring value comprises the following steps:
and supplementing the predicted scoring values into a user item preference matrix to carry out personalized recommendation, wherein the user item preference matrix comprises the scoring values of a plurality of users on the items.
2. The method of claim 1, wherein the loss function is:
J(θ) = (1/2) · Σ_{j=1}^{m} (y_j − z_j)²
wherein J(θ) represents the loss function; y_j represents the estimated value corresponding to the sample operation information in the j-th group of sample data; z_j represents the sample score value in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data.
3. The method of claim 2,
the original prediction model is: y = θ_11·x_1 + θ_12·x_2 + … + θ_1i·x_i + … + θ_1n·x_n + θ_10, wherein θ_1i represents the initial weight value of the i-th operation action and θ_10 represents an initial constant; then
the adjusting the original prediction model and obtaining the prediction model when the loss function reaches a minimum value includes:
adjusting θ_1i in the original prediction model by the following formula:
θ_2i = θ_1i + α·(z_j − y_j)·x_ji
wherein θ_2i is the adjusted weight value of the i-th operation action after one adjustment of the original prediction model; α is the learning rate; (z_j − y_j)·x_ji is obtained by taking the partial derivative of the loss function J(θ) with respect to θ_1i; x_ji represents the sample operation information of the i-th operation action in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data;
judging whether θ_2i makes the loss function reach the minimum value; if so, determining θ_2i as θ_i and obtaining the prediction model; if not, continuing the adjustment on the basis of θ_2i by using α and (z_j − y_j)·x_ji until the prediction model is obtained.
4. A personalized recommendation device, the device comprising:
an operation information obtaining unit, configured to obtain current operation information of an operation action performed on an object to be tested by a user to be tested, where the current operation information includes: browsing duration, collection times and purchase times;
a prediction model obtaining unit, configured to obtain a prediction model, wherein the prediction model is used for representing the corresponding relation between operation information and a score value, and the prediction model is obtained by training sample data, and the sample data comprises a sample score value of a sample user on a sample article and sample operation information of an operation action performed on the sample article by the sample user; the prediction model obtaining unit is specifically configured to obtain the following prediction model: y = θ_1·x_1 + θ_2·x_2 + … + θ_i·x_i + … + θ_n·x_n + θ_0, wherein y represents a score value; θ_i represents the weight value of the i-th operation action; x_i represents the operation information of the i-th operation action; 1 ≤ i ≤ n, where n is the number of types of operation actions; and θ_0 is a constant;
a score value obtaining unit, configured to obtain, by using the prediction model, a prediction score value corresponding to the current operation information;
the personalized recommendation unit is used for performing personalized recommendation based on the prediction score value;
the device further comprises:
the sample data acquisition unit is used for acquiring a plurality of groups of sample data;
the model establishing unit is used for establishing an original prediction model and obtaining a predicted value corresponding to the sample operation information in each group of sample data by using the original prediction model;
the loss function establishing unit is used for establishing a loss function, and the loss function is used for expressing the deviation between the sample score value in each group of sample data and the estimated value corresponding to the sample operation information in the group of sample data;
a model adjusting unit, configured to adjust the original prediction model and obtain the prediction model when the loss function reaches a minimum value;
and the personalized recommendation unit is used for supplementing the predicted scoring values into a user item preference matrix to perform personalized recommendation, wherein the user item preference matrix comprises the scoring values of a plurality of users on the items.
5. The apparatus according to claim 4, wherein the loss function establishing unit is specifically configured to establish the following loss functions:
J(θ) = (1/2) · Σ_{j=1}^{m} (y_j − z_j)²
wherein J(θ) represents the loss function; y_j represents the estimated value corresponding to the sample operation information in the j-th group of sample data; z_j represents the sample score value in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data.
6. The apparatus of claim 5,
the model establishing unit is specifically configured to establish an original prediction model as follows:
y = θ_11·x_1 + θ_12·x_2 + … + θ_1i·x_i + … + θ_1n·x_n + θ_10, wherein θ_1i represents the initial weight value of the i-th operation action and θ_10 represents an initial constant; then
the model adjusting unit is specifically configured to adjust θ_1i in the original prediction model by the following formula:
θ_2i = θ_1i + α·(z_j − y_j)·x_ji
wherein θ_2i is the adjusted weight value of the i-th operation action after one adjustment of the original prediction model; α is the learning rate; (z_j − y_j)·x_ji is obtained by taking the partial derivative of the loss function J(θ) with respect to θ_1i; x_ji represents the sample operation information of the i-th operation action in the j-th group of sample data; 1 ≤ j ≤ m, where m is the number of groups of sample data; to judge whether θ_2i makes the loss function reach the minimum value; if so, to determine θ_2i as θ_i and obtain the prediction model; if not, to continue the adjustment on the basis of θ_2i by using α and (z_j − y_j)·x_ji until the prediction model is obtained.
CN201511020219.9A 2015-12-30 2015-12-30 Personalized recommendation method and device Active CN105653683B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511020219.9A CN105653683B (en) 2015-12-30 2015-12-30 Personalized recommendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511020219.9A CN105653683B (en) 2015-12-30 2015-12-30 Personalized recommendation method and device

Publications (2)

Publication Number Publication Date
CN105653683A CN105653683A (en) 2016-06-08
CN105653683B (en) 2020-10-16

Family

ID=56478136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511020219.9A Active CN105653683B (en) 2015-12-30 2015-12-30 Personalized recommendation method and device

Country Status (1)

Country Link
CN (1) CN105653683B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846191A (en) * 2016-11-25 2017-06-13 北京粉笔蓝天科技有限公司 A kind of method of combination of curriculums table, system and server
CN106779204B (en) * 2016-12-08 2021-02-09 电子科技大学 Personalized path recommendation method and device
CN106600372A (en) * 2016-12-12 2017-04-26 武汉烽火信息集成技术有限公司 Commodity recommending method and system based on user behaviors
CN107341176B (en) * 2017-05-23 2020-05-29 北京三快在线科技有限公司 Sample weight setting method and device and electronic equipment
CN107577736B (en) * 2017-08-25 2021-12-17 武汉数字智能信息科技有限公司 File recommendation method and system based on BP neural network
CN108334729A (en) * 2017-08-28 2018-07-27 江西博瑞彤芸科技有限公司 Health information management method and management system
CN110110205A (en) * 2018-01-16 2019-08-09 北京京东金融科技控股有限公司 Recommendation information generation method and device
CN110049079A (en) * 2018-01-16 2019-07-23 阿里巴巴集团控股有限公司 Information push and model training method, device, equipment and storage medium
CN110309417A (en) * 2018-04-13 2019-10-08 腾讯科技(深圳)有限公司 The Weight Determination and device of evaluation points
CN110032498B (en) * 2018-11-23 2022-09-16 每日互动股份有限公司 Prediction method for user APP behaviors
CN109408731B (en) * 2018-12-27 2021-03-16 网易(杭州)网络有限公司 Multi-target recommendation method, multi-target recommendation model generation method and device
CN110008404B (en) * 2019-03-22 2022-08-23 成都理工大学 Latent semantic model optimization method based on NAG momentum optimization
CN113508378A (en) * 2019-10-31 2021-10-15 华为技术有限公司 Recommendation model training method, recommendation device and computer readable medium
CN111159555A (en) * 2019-12-30 2020-05-15 北京每日优鲜电子商务有限公司 Commodity recommendation method, commodity recommendation device, server and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8706670B2 (en) * 2011-01-11 2014-04-22 National Tsing Hua University Relative variable selection system and selection method thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103377296B (en) * 2012-04-19 2016-05-18 中国科学院声学研究所 A kind of data digging method of many indexs evaluation information
CN104156472B (en) * 2014-08-25 2018-05-08 北京四达时代软件技术股份有限公司 A kind of video recommendation method and system
CN104537114B (en) * 2015-01-21 2018-05-15 清华大学 Personalized recommendation method
CN105069122B (en) * 2015-08-12 2018-08-21 天津大学 A kind of personalized recommendation method and its recommendation apparatus based on user behavior

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8706670B2 (en) * 2011-01-11 2014-04-22 National Tsing Hua University Relative variable selection system and selection method thereof

Also Published As

Publication number Publication date
CN105653683A (en) 2016-06-08

Similar Documents

Publication Publication Date Title
CN105653683B (en) Personalized recommendation method and device
CN109902708B (en) Recommendation model training method and related device
CN106485562B (en) Commodity information recommendation method and system based on user historical behaviors
US20210142389A1 (en) Method and System for making Recommendation from Binary Data Using Neighbor-Score Matrix and Latent Factors
CN106484777B (en) Multimedia data processing method and device
KR101573601B1 (en) Apparatus and method for hybrid filtering content recommendation using user profile and context information based on preference
CN104331459B (en) A kind of network resource recommended method and device based on on-line study
CN107341176B (en) Sample weight setting method and device and electronic equipment
EP2960849A1 (en) Method and system for recommending an item to a user
CN108132964B (en) Collaborative filtering method for scoring project classes based on user
CN111506820B (en) Recommendation model, recommendation method, recommendation device, recommendation equipment and recommendation storage medium
CN110209946B (en) Social and community-based product recommendation method, system and storage medium
CN107016122B (en) Knowledge recommendation method based on time migration
CN109739768B (en) Search engine evaluation method, device, equipment and readable storage medium
CN109558544B (en) Sorting method and device, server and storage medium
CN110647678A (en) Recommendation method based on user character label
CN107577736B (en) File recommendation method and system based on BP neural network
CN112395496A (en) Information recommendation method and device, electronic equipment and storage medium
CN107103093A (en) A kind of short text based on user behavior and sentiment analysis recommends method and device
CN107798457B (en) Investment portfolio scheme recommending method, device, computer equipment and storage medium
CN112487283A (en) Method and device for training model, electronic equipment and readable storage medium
Zeldes et al. Deep density networks and uncertainty in recommender systems
CN112084825B (en) Cooking evaluation method, cooking recommendation method, computer device and storage medium
CN105260458A (en) Video recommendation method for display apparatus and display apparatus
CN103744929A (en) Target user object determination method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant