CN102722552B - Learning rate regulating method in collaborative filtering model - Google Patents

Learning rate regulating method in collaborative filtering model

Info

Publication number
CN102722552B
CN102722552B (application CN201210168756.8A / CN201210168756A)
Authority
CN
China
Prior art keywords
hidden
proper vector
training
learning rate
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210168756.8A
Other languages
Chinese (zh)
Other versions
CN102722552A (en)
Inventor
罗辛
陈鹏
夏云霓
吴磊
杨瑞龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Gkhb Information Technology Co ltd
Chongqing University
Original Assignee
CHENGDU GUOKE HAIBO COMPUTER SYSTEMS Co Ltd
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU GUOKE HAIBO COMPUTER SYSTEMS Co Ltd, Chongqing University filed Critical CHENGDU GUOKE HAIBO COMPUTER SYSTEMS Co Ltd
Priority to CN201210168756.8A priority Critical patent/CN102722552B/en
Publication of CN102722552A publication Critical patent/CN102722552A/en
Application granted granted Critical
Publication of CN102722552B publication Critical patent/CN102722552B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a learning rate regulation method for a collaborative filtering model, and belongs to the technical field of data mining and personalized recommendation. The method increases the convergence rate by raising the learning rate associated with a latent feature vector, and improves recommendation accuracy by lowering the learning rate associated with a latent feature vector. The method disclosed by the invention keeps the accuracy and the convergence rate of the recommendation model in a good balance, thereby optimizing the training process of the recommendation model.

Description

A method for adjusting the learning rate in a collaborative filtering recommendation model
Technical field
The invention belongs to the technical field of data mining and personalized recommendation, and in particular relates to a method for adjusting the learning rate in a collaborative filtering recommendation model.
Background technology
The explosive growth of information on the Internet has brought the problem of information overload: with an excess of information presented at once, users find it difficult to filter out the part that is useful to them, and the utilization of information actually decreases. Personalized recommendation technology is an important branch of data mining research; its goal is to provide the intelligent service of "information finding people" by building personalized recommendation systems, so as to fundamentally solve information overload.
As the source from which recommendations are generated, the recommendation model is the core component of a personalized recommendation system. Recommendation models based on matrix factorization are a widely used class of models because of their good recommendation accuracy and scalability. However, the construction of a matrix-factorization-based recommendation model depends on the learning rate: if the learning rate is too high, recommendation accuracy is reduced; if it is too low, the convergence rate of the model is reduced.
At present, the learning rate used when training a matrix-factorization-based recommendation model is still set to a fixed empirical value: based on past modeling experience, a value that may strike a good balance between recommendation accuracy and convergence rate is chosen and kept unchanged throughout training. This approach ignores the objective requirements that different training data and application environments place on the recommendation model; it yields low accuracy and easily causes the recommendation model to converge slowly.
Therefore, building on an in-depth study of the training process of matrix-factorization-based recommendation models, those skilled in the art have been working to develop a method for adjusting the learning rate in a collaborative filtering recommendation model that keeps the accuracy and the convergence rate of the recommendation model in a good balance.
Summary of the invention
In view of the above defects of the prior art, the technical problem to be solved by the invention is to provide a method for adjusting the learning rate in a collaborative filtering recommendation model, so that the accuracy and the convergence rate of the recommendation model reach a good balance and the training process of the recommendation model is optimized.
To achieve the above object, the invention provides a method for adjusting the learning rate in a collaborative filtering recommendation model, carried out according to the following steps:
Step 1: define and compute the learning rate magnification factor and reduction factor; establish the correspondence between learning rates and the user latent feature vectors, and between learning rates and the item latent feature vectors.
Set the magnification factor α of the learning rate, defined through a sigmoid function of η₀ [formula given as an image in the source], where 0 < η₀ < 1; set the reduction factor β of the learning rate, β = α⁻¹.
Let the user latent feature matrix be P, an m × f matrix, where m is the number of users and f is the dimension of the latent feature space; p_{u,k} is the element in row u and column k of P. For every p_{u,k} (1 ≤ u ≤ m, 1 ≤ k ≤ f), establish a learning rate η_{u,k} and initialize η_{u,k} = η₀; m and f are positive integers.
Let the item latent feature matrix be Q, an n × f matrix, where n is the number of items and f is the dimension of the latent feature space; q_{i,k} is the element in row i and column k of Q. For every q_{i,k} (1 ≤ i ≤ n, 1 ≤ k ≤ f), establish a learning rate η_{i,k} and initialize η_{i,k} = η₀; n is a positive integer.
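As an illustration of Step 1 only, the following minimal Python/NumPy sketch shows one way to set up the latent matrices and the per-element learning rates. All names (init_model, eta_P, eta_Q) are illustrative, and because the exact sigmoid expression for α is given only as an image in the source, the sketch simply accepts a magnification factor alpha > 1 as a parameter and derives β = α⁻¹ from it.

    import numpy as np

    def init_model(m, n, f, eta0, alpha, seed=0):
        """Step 1 sketch: latent feature matrices and per-element learning rates.

        m, n, f : number of users, number of items, latent dimensionality.
        eta0    : initial learning rate, 0 < eta0 < 1.
        alpha   : magnification factor (> 1); the reduction factor is beta = 1 / alpha.
        """
        rng = np.random.default_rng(seed)
        P = rng.uniform(0.0, 0.1, size=(m, f))   # user latent feature matrix P (m x f)
        Q = rng.uniform(0.0, 0.1, size=(n, f))   # item latent feature matrix Q (n x f)
        eta_P = np.full((m, f), eta0)            # one learning rate eta_{u,k} per p_{u,k}
        eta_Q = np.full((n, f), eta0)            # one learning rate eta_{i,k} per q_{i,k}
        beta = 1.0 / alpha                       # beta = alpha^{-1}
        return P, Q, eta_P, eta_Q, beta

For the experimental configuration reported later in this document this could be called, for example, as init_model(6040, 3900, 20, 0.01, 1.2), where 0.01 and 1.2 are placeholder values rather than figures taken from the patent.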
Step 2: compute the learning direction of the user latent feature vector and/or the item latent feature vector at training instant t.
For the user latent factor p_{u,k} and the item latent factor q_{i,k}, the training datum corresponding to training instant t is the rating r_{u,i}. The learning direction of p_{u,k} at training instant t is
$d_{u,k}^t = (r_{u,i} - \langle p_u^{t-1}, q_i^{t-1}\rangle)\cdot q_{i,k}^{t-1} - \lambda\cdot p_{u,k}^{t-1}$, where t is a positive integer;
the learning direction of q_{i,k} at training instant t is
$d_{i,k}^t = (r_{u,i} - \langle p_u^{t-1}, q_i^{t-1}\rangle)\cdot p_{u,k}^{t-1} - \lambda\cdot q_{i,k}^{t-1}$.
Here $p_u^{t-1}$ is the state of the user latent feature vector after training instant t-1 has finished, $q_i^{t-1}$ is the state of the item latent feature vector after training instant t-1 has finished, and $p_{u,k}^{t-1}$ and $q_{i,k}^{t-1}$ are the states of p_{u,k} and q_{i,k} after training instant t-1 has finished; λ is the regularization factor. Substituting λ together with the current states $p_{u,k}^{t-1}$ and $q_{i,k}^{t-1}$ into the learning directions reduces overfitting during training. After training instant t has finished, $d_{u,k}^t$ and $d_{i,k}^t$ are cached.
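A minimal sketch of the Step 2 computation for one observed rating r_{u,i}, written with illustrative names; it evaluates the learning directions for every latent dimension k at once.

    import numpy as np

    def learning_directions(r_ui, p_u, q_i, lam):
        """Step 2 sketch: learning directions at training instant t.

        p_u, q_i : states p_u^{t-1} and q_i^{t-1} as 1-D arrays of length f.
        lam      : regularization factor lambda.
        Returns (d_u, d_i) holding d_{u,k}^t and d_{i,k}^t for all k.
        """
        err = r_ui - np.dot(p_u, q_i)   # r_{u,i} - <p_u^{t-1}, q_i^{t-1}>
        d_u = err * q_i - lam * p_u     # d_{u,k}^t = err * q_{i,k}^{t-1} - lambda * p_{u,k}^{t-1}
        d_i = err * p_u - lam * q_i     # d_{i,k}^t = err * p_{u,k}^{t-1} - lambda * q_{i,k}^{t-1}
        return d_u, d_i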
Step 3: adjust the learning rates with the deterministic stepwise adjustment method.
The learning direction of the user latent factor p_{u,k} at training instant t+1 is $d_{u,k}^{t+1}$, and the learning direction of the item latent factor q_{i,k} at training instant t+1 is $d_{i,k}^{t+1}$. Compute $d_{u,k}^{t+1}$ and/or $d_{i,k}^{t+1}$:
$d_{u,k}^{t+1} = (r_{u,i} - \langle p_u^t, q_i^t\rangle)\cdot q_{i,k}^t - \lambda\cdot p_{u,k}^t$;
$d_{i,k}^{t+1} = (r_{u,i} - \langle p_u^t, q_i^t\rangle)\cdot p_{u,k}^t - \lambda\cdot q_{i,k}^t$.
Judge the sign of the product $d_{u,k}^t \cdot d_{u,k}^{t+1}$ and/or $d_{i,k}^t \cdot d_{i,k}^{t+1}$.
When $d_{u,k}^t \cdot d_{u,k}^{t+1} > 0$, amplify η_{u,k} with the magnification factor α: $\eta_{u,k}^{t+1} = \eta_{u,k}^t \cdot \alpha$;
when $d_{u,k}^t \cdot d_{u,k}^{t+1} < 0$, shrink η_{u,k} with the reduction factor β: $\eta_{u,k}^{t+1} = \eta_{u,k}^t \cdot \beta$;
here $\eta_{u,k}^t$ is the state of η_{u,k} at training instant t and $\eta_{u,k}^{t+1}$ is its state at training instant t+1.
When $d_{i,k}^t \cdot d_{i,k}^{t+1} > 0$, amplify η_{i,k} with the magnification factor α: $\eta_{i,k}^{t+1} = \eta_{i,k}^t \cdot \alpha$;
when $d_{i,k}^t \cdot d_{i,k}^{t+1} < 0$, shrink η_{i,k} with the reduction factor β: $\eta_{i,k}^{t+1} = \eta_{i,k}^t \cdot \beta$;
here $\eta_{i,k}^t$ is the state of η_{i,k} at training instant t and $\eta_{i,k}^{t+1}$ is its state at training instant t+1.
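A minimal sketch of the Step 3 rule for a single coordinate; applied element-wise to the cached directions it adjusts every η_{u,k} and η_{i,k}. The zero-product branch is not specified by the patent and is left unchanged here as an assumption.

    def adjust_learning_rate(eta_t, d_t, d_t1, alpha):
        """Step 3 sketch: deterministic stepwise adjustment of one learning rate.

        eta_t : learning rate at instant t (eta_{u,k}^t or eta_{i,k}^t).
        d_t   : cached learning direction at instant t.
        d_t1  : learning direction at instant t+1.
        alpha : magnification factor; beta = 1 / alpha is the reduction factor.
        """
        beta = 1.0 / alpha
        product = d_t * d_t1
        if product > 0:          # same sign: search direction kept, speed up
            return eta_t * alpha
        if product < 0:          # opposite sign: oscillation, slow down
            return eta_t * beta
        return eta_t             # product == 0: not covered by the patent, kept unchanged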
Preferably, the method further comprises a step of updating the user latent feature vector and/or the item latent feature vector.
At training instant t+1 the user latent factor is updated as
$p_{u,k}^{t+1} = p_{u,k}^t + \eta_{u,k}^{t+1}\cdot d_{u,k}^{t+1}$;
at training instant t+1 the item latent factor is updated as
$q_{i,k}^{t+1} = q_{i,k}^t + \eta_{i,k}^{t+1}\cdot d_{i,k}^{t+1}$.
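Tying the pieces together, here is a hedged sketch of one full training instant for a single rating, combining the Step 2 directions, the Step 3 adjustment and the additive update above. The caching granularity of the previous directions and all names are implementation choices of the sketch, not prescriptions of the patent.

    import numpy as np

    def train_on_rating(P, Q, eta_P, eta_Q, d_prev_u, d_prev_i, u, i, r_ui, lam, alpha):
        """One training instant for rating r_{u,i}: adjust learning rates, then update."""
        beta = 1.0 / alpha
        err = r_ui - np.dot(P[u], Q[i])               # r_{u,i} - <p_u^t, q_i^t>
        d_u = err * Q[i] - lam * P[u]                 # d_{u,k}^{t+1} for all k
        d_i = err * P[u] - lam * Q[i]                 # d_{i,k}^{t+1} for all k

        # Deterministic stepwise adjustment against the directions cached at instant t.
        eta_P[u] *= np.where(d_prev_u * d_u > 0, alpha,
                             np.where(d_prev_u * d_u < 0, beta, 1.0))
        eta_Q[i] *= np.where(d_prev_i * d_i > 0, alpha,
                             np.where(d_prev_i * d_i < 0, beta, 1.0))

        # Update the latent factors with the adjusted learning rates.
        P[u] += eta_P[u] * d_u
        Q[i] += eta_Q[i] * d_i
        return d_u, d_i                               # cache for the next training instant

Looping this over the observed ratings for a number of training rounds, with the cached directions initialized from a first plain pass, gives one possible realization of the overall training procedure.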
The beneficial effect of the invention is that, by dynamically adjusting the learning rate, the invention keeps the accuracy and the convergence rate of the recommendation model in a good balance and thereby optimizes the training process of the recommendation model.
Brief description of the drawings
Fig. 1 is a flow chart of Embodiment 1 of the present invention.
Fig. 2 is a flow chart of Embodiment 2 of the present invention.
Fig. 3 is a flow chart of Embodiment 3 of the present invention.
Fig. 4 is a flow chart of Embodiment 4 of the present invention.
Fig. 5 is a flow chart of Embodiment 5 of the present invention.
Fig. 6 is a flow chart of Embodiment 6 of the present invention.
Fig. 7 is a flow chart of Embodiment 7 of the present invention.
Fig. 8 is a flow chart of Embodiment 8 of the present invention.
Fig. 9 is a flow chart of Embodiment 9 of the present invention.
Fig. 10 is a flow chart of Embodiment 10 of the present invention.
Fig. 11 compares the recommendation accuracy before and after optimization with the present invention.
Fig. 12 compares the convergence rate before and after optimization with the present invention.
Detailed description of the embodiments
The invention is further described below in conjunction with the drawings and embodiments.
Embodiment 1: as shown in Fig. 1, a method for adjusting the learning rate in a collaborative filtering recommendation model is carried out according to the following steps:
A1: define and compute the learning rate magnification factor and reduction factor.
Set the magnification factor α of the learning rate, defined through a sigmoid function of η₀ [formula given as an image in the source], where 0 < η₀ < 1; set the reduction factor β of the learning rate, β = α⁻¹.
A2: establish the correspondence between learning rates and the user latent feature vectors, and between learning rates and the item latent feature vectors.
Let the user latent feature matrix be P, an m × f matrix, where m is the number of users and f is the dimension of the latent feature space; p_{u,k} is the element in row u and column k of P. For every p_{u,k} (1 ≤ u ≤ m, 1 ≤ k ≤ f), establish a learning rate η_{u,k} and initialize η_{u,k} = η₀; m and f are positive integers.
Let the item latent feature matrix be Q, an n × f matrix, where n is the number of items and f is the dimension of the latent feature space; q_{i,k} is the element in row i and column k of Q. For every q_{i,k} (1 ≤ i ≤ n, 1 ≤ k ≤ f), establish a learning rate η_{i,k} and initialize η_{i,k} = η₀; n is a positive integer.
A3: compute the learning direction of the user latent feature vector at training instant t.
For the user latent factor p_{u,k} and the item latent factor q_{i,k}, the training datum corresponding to training instant t is the rating r_{u,i}. The learning direction of p_{u,k} at training instant t is
$d_{u,k}^t = (r_{u,i} - \langle p_u^{t-1}, q_i^{t-1}\rangle)\cdot q_{i,k}^{t-1} - \lambda\cdot p_{u,k}^{t-1}$, where t is a positive integer.
Here $p_u^{t-1}$ is the state of the user latent feature vector after training instant t-1 has finished, $q_i^{t-1}$ is the state of the item latent feature vector after training instant t-1 has finished, $p_{u,k}^{t-1}$ and $q_{i,k}^{t-1}$ are the states of p_{u,k} and q_{i,k} after training instant t-1 has finished, and λ is the regularization factor. After training instant t has finished, $d_{u,k}^t$ is cached.
A4: compute the learning direction of the user latent factor p_{u,k} at training instant t+1:
$d_{u,k}^{t+1} = (r_{u,i} - \langle p_u^t, q_i^t\rangle)\cdot q_{i,k}^t - \lambda\cdot p_{u,k}^t$.
A5: judge the sign of the product $d_{u,k}^t \cdot d_{u,k}^{t+1}$.
When $d_{u,k}^t \cdot d_{u,k}^{t+1} > 0$, amplify η_{u,k} with the magnification factor α: $\eta_{u,k}^{t+1} = \eta_{u,k}^t \cdot \alpha$;
when $d_{u,k}^t \cdot d_{u,k}^{t+1} < 0$, shrink η_{u,k} with the reduction factor β: $\eta_{u,k}^{t+1} = \eta_{u,k}^t \cdot \beta$;
here $\eta_{u,k}^t$ is the state of η_{u,k} at training instant t and $\eta_{u,k}^{t+1}$ is its state at training instant t+1.
If the learning directions of a user latent factor at two consecutive training instants have the same sign, the search direction of the user latent feature vector in the search space has not changed over those two instants, which indicates that the current learning direction is highly reliable; the learning rate corresponding to this latent factor can therefore be suitably increased, improving the convergence rate.
If the learning directions of a user latent factor at two consecutive training instants have opposite signs, the search direction in the search space has oscillated over those two instants, which indicates that the current learning direction is less reliable; the learning rate corresponding to this latent factor can therefore be suitably decreased, improving recommendation accuracy.
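As a purely illustrative numeric example (the values are not taken from the patent): suppose α = 1.2, so β = α⁻¹ ≈ 0.833, and $\eta_{u,k}^t = 0.01$. If $d_{u,k}^t = 0.5$ and $d_{u,k}^{t+1} = 0.3$, their product 0.15 > 0, so $\eta_{u,k}^{t+1} = 0.01 \times 1.2 = 0.012$; if instead $d_{u,k}^{t+1} = -0.3$, the product -0.15 < 0, so $\eta_{u,k}^{t+1} = 0.01 \times 0.833 \approx 0.0083$.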
Embodiment 2: as shown in Fig. 2, a method for adjusting the learning rate in a collaborative filtering recommendation model is carried out according to the following steps:
A1: define and compute the learning rate magnification factor and reduction factor.
Set the magnification factor α of the learning rate, defined through a sigmoid function of η₀ [formula given as an image in the source], where 0 < η₀ < 1; set the reduction factor β of the learning rate, β = α⁻¹.
A2: establish the correspondence between learning rates and the user latent feature vectors, and between learning rates and the item latent feature vectors.
Let the user latent feature matrix be P, an m × f matrix, where m is the number of users and f is the dimension of the latent feature space; p_{u,k} is the element in row u and column k of P. For every p_{u,k} (1 ≤ u ≤ m, 1 ≤ k ≤ f), establish a learning rate η_{u,k} and initialize η_{u,k} = η₀; m and f are positive integers.
Let the item latent feature matrix be Q, an n × f matrix, where n is the number of items and f is the dimension of the latent feature space; q_{i,k} is the element in row i and column k of Q. For every q_{i,k} (1 ≤ i ≤ n, 1 ≤ k ≤ f), establish a learning rate η_{i,k} and initialize η_{i,k} = η₀; n is a positive integer.
A3: compute the learning direction of the item latent feature vector at training instant t.
For the user latent factor p_{u,k} and the item latent factor q_{i,k}, the training datum corresponding to training instant t is the rating r_{u,i}. The learning direction of q_{i,k} at training instant t is
$d_{i,k}^t = (r_{u,i} - \langle p_u^{t-1}, q_i^{t-1}\rangle)\cdot p_{u,k}^{t-1} - \lambda\cdot q_{i,k}^{t-1}$.
Here $p_u^{t-1}$ is the state of the user latent feature vector after training instant t-1 has finished, $q_i^{t-1}$ is the state of the item latent feature vector after training instant t-1 has finished, $p_{u,k}^{t-1}$ and $q_{i,k}^{t-1}$ are the states of p_{u,k} and q_{i,k} after training instant t-1 has finished, and λ is the regularization factor. After training instant t has finished, $d_{i,k}^t$ is cached.
A4: compute the learning direction of the item latent factor q_{i,k} at training instant t+1:
$d_{i,k}^{t+1} = (r_{u,i} - \langle p_u^t, q_i^t\rangle)\cdot p_{u,k}^t - \lambda\cdot q_{i,k}^t$.
A5: judge the sign of the product $d_{i,k}^t \cdot d_{i,k}^{t+1}$.
When $d_{i,k}^t \cdot d_{i,k}^{t+1} > 0$, amplify η_{i,k} with the magnification factor α: $\eta_{i,k}^{t+1} = \eta_{i,k}^t \cdot \alpha$;
when $d_{i,k}^t \cdot d_{i,k}^{t+1} < 0$, shrink η_{i,k} with the reduction factor β: $\eta_{i,k}^{t+1} = \eta_{i,k}^t \cdot \beta$;
here $\eta_{i,k}^t$ is the state of η_{i,k} at training instant t and $\eta_{i,k}^{t+1}$ is its state at training instant t+1.
If the learning directions of an item latent factor at two consecutive training instants have the same sign, the search direction of the item latent feature vector in the search space has not changed over those two instants, which indicates that the current learning direction is highly reliable; the learning rate corresponding to this latent factor can therefore be suitably increased, improving the convergence rate.
If the learning directions of an item latent factor at two consecutive training instants have opposite signs, the search direction in the search space has oscillated over those two instants, which indicates that the current learning direction is less reliable; the learning rate corresponding to this latent factor can therefore be suitably decreased, improving recommendation accuracy.
Embodiment 3: as shown in Fig. 3, the flow of this embodiment is basically the same as that of Embodiment 1, the difference being that the correspondence between learning rates and the user latent feature vectors and between learning rates and the item latent feature vectors is established first, and the learning rate magnification factor and reduction factor are defined and computed afterwards.
Embodiment 4: as shown in Fig. 4, the flow of this embodiment is basically the same as that of Embodiment 2, the difference being that the correspondence between learning rates and the user latent feature vectors and between learning rates and the item latent feature vectors is established first, and the learning rate magnification factor and reduction factor are defined and computed afterwards.
Embodiment 5: as shown in Fig. 5, the flow of this embodiment is basically the same as that of Embodiment 1, the difference being that after the learning rate is adjusted, a step of updating the user latent feature vector is further included.
At training instant t+1 the user latent factor is updated as
$p_{u,k}^{t+1} = p_{u,k}^t + \eta_{u,k}^{t+1}\cdot d_{u,k}^{t+1}$.
By using, during training, the learning rate adjusted with the deterministic stepwise adjustment method, the learning pace of the user latent feature vector at each training instant incorporates the training information of two consecutive training instants, thereby optimizing the training process.
Embodiment 6: as shown in Fig. 6, the flow of this embodiment is basically the same as that of Embodiment 2, the difference being that after the learning rate is adjusted, a step of updating the item latent feature vector is further included.
At training instant t+1 the item latent factor is updated as
$q_{i,k}^{t+1} = q_{i,k}^t + \eta_{i,k}^{t+1}\cdot d_{i,k}^{t+1}$.
By using, during training, the learning rate adjusted with the deterministic stepwise adjustment method, the learning pace of the item latent feature vector at each training instant incorporates the training information of two consecutive training instants, thereby optimizing the training process.
Embodiment 7: as shown in Fig. 7, the flow of this embodiment is basically the same as that of Embodiment 3, the difference being that after the learning rate is adjusted, a step of updating the user latent feature vector is further included.
At training instant t+1 the user latent factor is updated as $p_{u,k}^{t+1} = p_{u,k}^t + \eta_{u,k}^{t+1}\cdot d_{u,k}^{t+1}$.
By using, during training, the learning rate adjusted with the deterministic stepwise adjustment method, the learning pace of the user latent feature vector at each training instant incorporates the training information of two consecutive training instants, thereby optimizing the training process.
Embodiment 8: as shown in Fig. 8, the flow of this embodiment is basically the same as that of Embodiment 4, the difference being that after the learning rate is adjusted, a step of updating the item latent feature vector is further included.
At training instant t+1 the item latent factor is updated as $q_{i,k}^{t+1} = q_{i,k}^t + \eta_{i,k}^{t+1}\cdot d_{i,k}^{t+1}$.
By using, during training, the learning rate adjusted with the deterministic stepwise adjustment method, the learning pace of the item latent feature vector at each training instant incorporates the training information of two consecutive training instants, thereby optimizing the training process.
Embodiment 9: as shown in Fig. 9, a method for adjusting the learning rate in a collaborative filtering recommendation model is carried out according to the following steps:
Step 1: define and compute the learning rate magnification factor and reduction factor; establish the correspondence between learning rates and the user latent feature vectors, and between learning rates and the item latent feature vectors.
Set the magnification factor α of the learning rate, defined through a sigmoid function of η₀ [formula given as an image in the source], where 0 < η₀ < 1; set the reduction factor β of the learning rate, β = α⁻¹.
Let the user latent feature matrix be P, an m × f matrix, where m is the number of users and f is the dimension of the latent feature space; p_{u,k} is the element in row u and column k of P. For every p_{u,k} (1 ≤ u ≤ m, 1 ≤ k ≤ f), establish a learning rate η_{u,k} and initialize η_{u,k} = η₀; m and f are positive integers.
Let the item latent feature matrix be Q, an n × f matrix, where n is the number of items and f is the dimension of the latent feature space; q_{i,k} is the element in row i and column k of Q. For every q_{i,k} (1 ≤ i ≤ n, 1 ≤ k ≤ f), establish a learning rate η_{i,k} and initialize η_{i,k} = η₀; n is a positive integer.
Step 2: compute the learning directions of the user latent feature vector and the item latent feature vector at training instant t.
For the user latent factor p_{u,k} and the item latent factor q_{i,k}, the training datum corresponding to training instant t is the rating r_{u,i}. The learning direction of p_{u,k} at training instant t is
$d_{u,k}^t = (r_{u,i} - \langle p_u^{t-1}, q_i^{t-1}\rangle)\cdot q_{i,k}^{t-1} - \lambda\cdot p_{u,k}^{t-1}$, where t is a positive integer;
the learning direction of q_{i,k} at training instant t is
$d_{i,k}^t = (r_{u,i} - \langle p_u^{t-1}, q_i^{t-1}\rangle)\cdot p_{u,k}^{t-1} - \lambda\cdot q_{i,k}^{t-1}$.
Here $p_u^{t-1}$ is the state of the user latent feature vector after training instant t-1 has finished, $q_i^{t-1}$ is the state of the item latent feature vector after training instant t-1 has finished, and $p_{u,k}^{t-1}$ and $q_{i,k}^{t-1}$ are the states of p_{u,k} and q_{i,k} after training instant t-1 has finished; λ is the regularization factor. After training instant t has finished, $d_{u,k}^t$ and $d_{i,k}^t$ are cached.
Step 3: adjust the learning rates with the deterministic stepwise adjustment method.
The learning direction of the user latent factor p_{u,k} at training instant t+1 is $d_{u,k}^{t+1}$, and the learning direction of the item latent factor q_{i,k} at training instant t+1 is $d_{i,k}^{t+1}$. Compute $d_{u,k}^{t+1}$ and $d_{i,k}^{t+1}$:
$d_{u,k}^{t+1} = (r_{u,i} - \langle p_u^t, q_i^t\rangle)\cdot q_{i,k}^t - \lambda\cdot p_{u,k}^t$;
$d_{i,k}^{t+1} = (r_{u,i} - \langle p_u^t, q_i^t\rangle)\cdot p_{u,k}^t - \lambda\cdot q_{i,k}^t$.
Judge the sign of the products $d_{u,k}^t \cdot d_{u,k}^{t+1}$ and $d_{i,k}^t \cdot d_{i,k}^{t+1}$.
When $d_{u,k}^t \cdot d_{u,k}^{t+1} > 0$, amplify η_{u,k} with the magnification factor α: $\eta_{u,k}^{t+1} = \eta_{u,k}^t \cdot \alpha$;
when $d_{u,k}^t \cdot d_{u,k}^{t+1} < 0$, shrink η_{u,k} with the reduction factor β: $\eta_{u,k}^{t+1} = \eta_{u,k}^t \cdot \beta$;
here $\eta_{u,k}^t$ is the state of η_{u,k} at training instant t and $\eta_{u,k}^{t+1}$ is its state at training instant t+1.
When $d_{i,k}^t \cdot d_{i,k}^{t+1} > 0$, amplify η_{i,k} with the magnification factor α: $\eta_{i,k}^{t+1} = \eta_{i,k}^t \cdot \alpha$;
when $d_{i,k}^t \cdot d_{i,k}^{t+1} < 0$, shrink η_{i,k} with the reduction factor β: $\eta_{i,k}^{t+1} = \eta_{i,k}^t \cdot \beta$;
here $\eta_{i,k}^t$ is the state of η_{i,k} at training instant t and $\eta_{i,k}^{t+1}$ is its state at training instant t+1.
If the learning directions of a user or item latent factor at two consecutive training instants have the same sign, the search direction of the corresponding latent feature vector in the search space has not changed over those two instants, which indicates that the current learning direction is highly reliable; the learning rate corresponding to this latent factor can therefore be suitably increased, improving the convergence rate.
If the learning directions of a user or item latent factor at two consecutive training instants have opposite signs, the search direction in the search space has oscillated over those two instants, which indicates that the current learning direction is less reliable; the learning rate corresponding to this latent factor can therefore be suitably decreased, improving recommendation accuracy.
Embodiment 10: as shown in Fig. 10, the flow of this embodiment is basically the same as that of Embodiment 9, the difference being that after the learning rate is adjusted, a step of updating the user latent feature vector and the item latent feature vector is further included.
At training instant t+1 the user latent factor is updated as $p_{u,k}^{t+1} = p_{u,k}^t + \eta_{u,k}^{t+1}\cdot d_{u,k}^{t+1}$, and the item latent factor is updated as $q_{i,k}^{t+1} = q_{i,k}^t + \eta_{i,k}^{t+1}\cdot d_{i,k}^{t+1}$.
By using, during training, the learning rate adjusted with the deterministic stepwise adjustment method, the learning pace of the user latent feature vector or the item latent feature vector at each training instant incorporates the training information of two consecutive training instants, thereby optimizing the training process.
As can be seen from the above embodiments, the step of establishing the correspondence between learning rates and the user latent feature vectors and between learning rates and the item latent feature vectors, and the step of defining and computing the learning rate magnification factor and reduction factor, can be performed in either order.
To verify the correctness and accuracy of the method, simulation experiments were run on a PC configured with an Intel i5-760 2.8 GHz processor and 8 GB of memory. The experiments used the MovieLens 1M data set, an authoritative public test data set in the field of personalized recommendation research, available from http://www.grouplens.org/node/12. The data set contains more than 1,000,000 ratings of 3,900 items by 6,040 users; the density of its user-item rating matrix is 4.25%. All ratings lie in the interval [0, 5], and a higher rating indicates a stronger interest of the user in the corresponding item. The experiments use the root-mean-square error (RMSE) as the evaluation index of recommendation accuracy and the number of training rounds as the evaluation index of convergence rate; a lower RMSE means higher recommendation accuracy, and fewer training rounds mean faster convergence.
The parameters in the experiments were set as follows: regularization factor λ = 0.05, latent feature space dimension f = 20, m = 6040 according to the number of users in the training data set, and n = 3900 according to the number of items in the training data set.
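For reference, a minimal sketch of the RMSE index used below, assuming a held-out set of (u, i, r) rating triples and the trained factor matrices P and Q; the function name is illustrative.

    import numpy as np

    def rmse(P, Q, test_ratings):
        """Root-mean-square error over held-out (u, i, r) rating triples."""
        errors = [r - np.dot(P[u], Q[i]) for u, i, r in test_ratings]
        return float(np.sqrt(np.mean(np.square(errors))))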
Fig. 11 compares the recommendation accuracy before and after optimization with the present invention. In the figure, line 1 is the recommendation accuracy before optimization and line 2 is the recommendation accuracy after optimization. As can be seen from Fig. 11, the RMSE values of line 2 are clearly lower than those of line 1; since a lower RMSE means higher recommendation accuracy, it follows that after optimization with the present invention, for different initial learning rate values η₀, the matrix-factorization-based recommendation model obtains higher recommendation accuracy than before optimization.
Fig. 12 compares the convergence rate before and after optimization with the present invention. In the figure, line 3 is the convergence rate after optimization and line 4 is the convergence rate before optimization. As can be seen from Fig. 12, after optimization with the present invention, when the initial learning rate η₀ is less than 0.015 the recommendation model converges clearly faster than before optimization, and when η₀ is greater than 0.015 the convergence rate is essentially the same as before optimization. It can thus be seen that optimization with this method clearly reduces the influence of the initial learning rate η₀ on the convergence rate of the model, and that the proposed method enables the matrix-factorization-based recommendation model to achieve a good balance between recommendation accuracy and convergence rate.
The preferred embodiments of the invention have been described in detail above. It should be understood that those of ordinary skill in the art can make many modifications and variations according to the concept of the invention without creative effort. Therefore, any technical scheme that a person skilled in the art can obtain from the prior art by logical analysis, reasoning or limited experimentation on the basis of the concept of the invention shall fall within the scope of protection determined by the claims.

Claims (2)

1. A method for adjusting the learning rate in a collaborative filtering recommendation model, characterized in that it is carried out according to the following steps:
Step 1: define and compute the learning rate magnification factor and reduction factor; establish the correspondence between learning rates and the user latent feature vectors, and between learning rates and the item latent feature vectors;
set the magnification factor α of the learning rate, defined through a sigmoid function of η₀ [formula given as an image in the source], where 0 < η₀ < 1; set the reduction factor β of the learning rate, β = α⁻¹;
let the user latent feature matrix be P, an m × f matrix, where m is the number of users and f is the dimension of the latent feature space, and p_{u,k} is the element in row u and column k of P; for every p_{u,k} (1 ≤ u ≤ m, 1 ≤ k ≤ f), establish a learning rate η_{u,k} and initialize η_{u,k} = η₀; m and f are positive integers;
let the item latent feature matrix be Q, an n × f matrix, where n is the number of items and f is the dimension of the latent feature space, and q_{i,k} is the element in row i and column k of Q; for every q_{i,k} (1 ≤ i ≤ n, 1 ≤ k ≤ f), establish a learning rate η_{i,k} and initialize η_{i,k} = η₀; n is a positive integer;
Step 2: compute the learning direction of the user latent feature vector and/or the item latent feature vector at training instant t;
for the user latent factor p_{u,k} and the item latent factor q_{i,k}, the training rating datum corresponding to training instant t is r_{u,i}; the learning direction of p_{u,k} at training instant t is
$d_{u,k}^t = (r_{u,i} - \langle p_u^{t-1}, q_i^{t-1}\rangle)\cdot q_{i,k}^{t-1} - \lambda\cdot p_{u,k}^{t-1}$, where t is a positive integer;
the learning direction of q_{i,k} at training instant t is
$d_{i,k}^t = (r_{u,i} - \langle p_u^{t-1}, q_i^{t-1}\rangle)\cdot p_{u,k}^{t-1} - \lambda\cdot q_{i,k}^{t-1}$;
$p_u^{t-1}$ is the state of the user latent feature vector after training instant t-1 has finished; $q_i^{t-1}$ is the state of the item latent feature vector after training instant t-1 has finished; $p_{u,k}^{t-1}$ and $q_{i,k}^{t-1}$ are the states of p_{u,k} and q_{i,k} after training instant t-1 has finished; λ is the regularization factor;
Step 3: adjust the learning rates with the deterministic stepwise adjustment method;
the learning direction of the user latent factor p_{u,k} at training instant t+1 is $d_{u,k}^{t+1}$, and the learning direction of the item latent factor q_{i,k} at training instant t+1 is $d_{i,k}^{t+1}$; compute $d_{u,k}^{t+1}$ and/or $d_{i,k}^{t+1}$:
$d_{u,k}^{t+1} = (r_{u,i} - \langle p_u^t, q_i^t\rangle)\cdot q_{i,k}^t - \lambda\cdot p_{u,k}^t$;
$d_{i,k}^{t+1} = (r_{u,i} - \langle p_u^t, q_i^t\rangle)\cdot p_{u,k}^t - \lambda\cdot q_{i,k}^t$;
judge the sign of the product $d_{u,k}^t \cdot d_{u,k}^{t+1}$ and/or $d_{i,k}^t \cdot d_{i,k}^{t+1}$;
when $d_{u,k}^t \cdot d_{u,k}^{t+1} > 0$, amplify η_{u,k} with the magnification factor α: $\eta_{u,k}^{t+1} = \eta_{u,k}^t \cdot \alpha$;
when $d_{u,k}^t \cdot d_{u,k}^{t+1} < 0$, shrink η_{u,k} with the reduction factor β: $\eta_{u,k}^{t+1} = \eta_{u,k}^t \cdot \beta$;
$\eta_{u,k}^t$ is the state of η_{u,k} at training instant t, and $\eta_{u,k}^{t+1}$ is the state of η_{u,k} at training instant t+1;
when $d_{i,k}^t \cdot d_{i,k}^{t+1} > 0$, amplify η_{i,k} with the magnification factor α: $\eta_{i,k}^{t+1} = \eta_{i,k}^t \cdot \alpha$;
when $d_{i,k}^t \cdot d_{i,k}^{t+1} < 0$, shrink η_{i,k} with the reduction factor β: $\eta_{i,k}^{t+1} = \eta_{i,k}^t \cdot \beta$;
$\eta_{i,k}^t$ is the state of η_{i,k} at training instant t, and $\eta_{i,k}^{t+1}$ is the state of η_{i,k} at training instant t+1;
Step 4: update the user latent feature vector and/or the item latent feature vector:
at training instant t+1 the user latent factor is updated as
$p_{u,k}^{t+1} = p_{u,k}^t + \eta_{u,k}^{t+1}\cdot d_{u,k}^{t+1}$;
at training instant t+1 the item latent factor is updated as
$q_{i,k}^{t+1} = q_{i,k}^t + \eta_{i,k}^{t+1}\cdot d_{i,k}^{t+1}$.
2. The method for adjusting the learning rate in a collaborative filtering recommendation model according to claim 1, characterized in that it further comprises a step of caching $d_{u,k}^t$ and $d_{i,k}^t$ respectively after training instant t has finished.
CN201210168756.8A 2012-05-28 2012-05-28 Learning rate regulating method in collaborative filtering model Expired - Fee Related CN102722552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210168756.8A CN102722552B (en) 2012-05-28 2012-05-28 Learning rate regulating method in collaborative filtering model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210168756.8A CN102722552B (en) 2012-05-28 2012-05-28 Learning rate regulating method in collaborative filtering model

Publications (2)

Publication Number Publication Date
CN102722552A CN102722552A (en) 2012-10-10
CN102722552B true CN102722552B (en) 2014-02-26

Family

ID=46948313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210168756.8A Expired - Fee Related CN102722552B (en) 2012-05-28 2012-05-28 Learning rate regulating method in collaborative filtering model

Country Status (1)

Country Link
CN (1) CN102722552B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105302873A (en) * 2015-10-08 2016-02-03 北京航空航天大学 Collaborative filtering optimization method based on condition restricted Boltzmann machine
CN110378731B (en) * 2016-04-29 2021-04-20 腾讯科技(深圳)有限公司 Method, device, server and storage medium for acquiring user portrait
CN108389113B (en) * 2018-03-22 2022-04-19 广东工业大学 Collaborative filtering recommendation method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540874A (en) * 2009-04-23 2009-09-23 中山大学 Interactive TV program recommendation method based on collaborative filtration
CN102135989A (en) * 2011-03-09 2011-07-27 北京航空航天大学 Normalized matrix-factorization-based incremental collaborative filtering recommending method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101415022B1 (en) * 2007-07-24 2014-07-09 삼성전자주식회사 Method and apparatus for information recommendation using hybrid algorithm

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540874A (en) * 2009-04-23 2009-09-23 中山大学 Interactive TV program recommendation method based on collaborative filtration
CN102135989A (en) * 2011-03-09 2011-07-27 北京航空航天大学 Normalized matrix-factorization-based incremental collaborative filtering recommending method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
罗辛 et al., "Optimizing K-nearest-neighbor-based collaborative filtering via similarity support", Chinese Journal of Computers (《计算机学报》), 2010, Vol. 33, No. 8, pp. 1437-1445.
"Optimizing K-nearest-neighbor-based collaborative filtering via similarity support"; 罗辛 et al.; Chinese Journal of Computers (《计算机学报》); 2010-08-31; Vol. 33, No. 8; pp. 1437-1445 *

Also Published As

Publication number Publication date
CN102722552A (en) 2012-10-10

Similar Documents

Publication Publication Date Title
WO2020107806A1 (en) Recommendation method and device
CN105046515B (en) Method and device for sorting advertisements
WO2019072107A1 (en) Prediction of spending power
CN102231144B (en) A kind of power distribution network method for predicting theoretical line loss based on Boosting algorithm
CN101908172B (en) A kind of power market hybrid simulation method adopting multiple intelligent agent algorithms
CN102541920A (en) Method and device for improving accuracy degree by collaborative filtering jointly based on user and item
Langone et al. Incremental kernel spectral clustering for online learning of non-stationary data
CN104008515A (en) Intelligent course selection recommendation method
CN110263257A (en) Multi-source heterogeneous data mixing recommended models based on deep learning
CN106021366A (en) API (Application Programing Interface) tag recommendation method based on heterogeneous information
CN102722552B (en) Learning rate regulating method in collaborative filtering model
CN105373853A (en) Stock public opinion index prediction method and device
CN109670161A (en) Commodity similarity calculating method and device, storage medium, electronic equipment
CN103870604A (en) Travel recommendation method and device
Froemelt et al. A two-stage clustering approach to investigate lifestyle carbon footprints in two Australian cities
CN113409157B (en) Cross-social network user alignment method and device
CN110263232A (en) A kind of mixed recommendation method based on range study and deep learning
CN102930341A (en) Optimal training method of collaborative filtering recommendation model
Blom et al. Accurate model reduction of large hydropower systems with associated adaptive inflow
CN109034278A (en) A kind of ELM-IN-ELM frame Ensemble Learning Algorithms based on extreme learning machine
CN114169906B (en) Electronic coupon pushing method and device
CN103914780A (en) Group buying ordering system and method
CN108228833B (en) Method for solving community project recommendation task by utilizing user tendency learning
Wang et al. Font Design in Visual Communication Design of Genetic Algorithm
CN102768751A (en) Learning resource push system and learning resource push method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: CHENGDU GUOKE HAIBO INFORMATION TECHNOLOGY CO., LT

Free format text: FORMER OWNER: CHONGQING UNIVERSITY

Effective date: 20150326

Free format text: FORMER OWNER: CHENGDU GUOKE HAIBO INFORMATION TECHNOLOGY CO., LTD.

Effective date: 20150326

C41 Transfer of patent application or patent right or utility model
C56 Change in the name or address of the patentee
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 400045 SHAPINGBA, CHONGQING TO: 610041 CHENGDU, SICHUAN PROVINCE

CP01 Change in the name or title of a patent holder

Address after: 400045 Shapingba District, Sha Sha Street, No. 174, Chongqing

Patentee after: Chongqing University

Patentee after: CHENGDU GKHB INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 400045 Shapingba District, Sha Sha Street, No. 174, Chongqing

Patentee before: Chongqing University

Patentee before: CHENGDU GUOKE HAIBO COMPUTER SYSTEMS Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20150326

Address after: 610041, 4 Building 1, ideal center, No. 38 Tianyi street, hi tech Zone, Sichuan, Chengdu

Patentee after: CHENGDU GKHB INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 400045 Shapingba District, Sha Sha Street, No. 174, Chongqing

Patentee before: Chongqing University

Patentee before: CHENGDU GKHB INFORMATION TECHNOLOGY Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140226

Termination date: 20190528

CF01 Termination of patent right due to non-payment of annual fee