CN102930341A - Optimal training method of collaborative filtering recommendation model

Optimal training method of collaborative filtering recommendation model

Info

Publication number
CN102930341A
Authority
CN
China
Prior art keywords
latent
training
feature matrix
user
item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012103898008A
Other languages
Chinese (zh)
Other versions
CN102930341B (en)
Inventor
罗辛
夏云霓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU GKHB INFORMATION TECHNOLOGY CO., LTD.
Original Assignee
罗辛
夏云霓
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 罗辛 and 夏云霓
Priority to CN201210389800.8A
Publication of CN102930341A publication Critical patent/CN102930341A/en
Application granted granted Critical
Publication of CN102930341B publication Critical patent/CN102930341B/en
Current legal status: Expired - Fee Related

Links

Images

Abstract

The invention discloses an optimal training method for a collaborative filtering recommendation model, belonging to the technical field of data mining and personalized recommendation. The method comprises the following steps: first, decoupling the latent feature matrices so as to eliminate the interdependence between the user latent features and the item latent features during training; second, dividing the training into a user latent feature training process and an item latent feature training process, each based on decoupled stochastic gradient descent; and finally, executing the user latent feature training process and the item latent feature training process. By optimizing the training process of the collaborative filtering recommendation model, the method eliminates the interdependence between the user latent feature matrix and the item latent feature matrix, thereby improving scalability; it converges faster, requires fewer training rounds to converge, and speeds up construction of the recommendation model.

Description

Optimized training method for a collaborative filtering recommendation model
Technical field
The invention belongs to the technical field of data mining and personalized recommendation, and in particular relates to an optimized training method for a collaborative filtering recommendation model.
Background art
The explosive growth of information on the Internet causes information overload: so much information is presented at once that users find it difficult to filter out the part that is useful to them, and information utilization actually decreases. Personalized recommendation is an important branch of data mining research; its goal is to build personalized recommendation systems that provide the intelligent service of "information finding people", thereby fundamentally alleviating information overload.
As the source from which recommendations are generated, the recommendation model is the core component of a personalized recommendation system, and recommendation models based on matrix factorization are widely used because they offer good recommendation accuracy and scalability. Personalized recommendation techniques analyze the intrinsic relations between users and information according to the users' historical behavior, link users to the information they may be interested in, and thus provide the intelligent service of "information finding people" and relieve information overload. As the core component of a recommender system, the recommendation model is the research focus of the personalized recommendation field. Matrix factorization recommendation models have very high recommendation accuracy and good scalability and therefore a wide range of applications. However, in the training process of a matrix factorization recommendation model, the user latent features and the item latent features depend on each other, so training cannot be parallelized, which limits the further adoption of such models.
Therefore, on the basis of in-depth study of the training process of matrix factorization recommendation models, those skilled in the art are committed to developing an optimized training method for collaborative filtering recommendation models that can eliminate the interdependence between the user latent feature matrix and the item latent feature matrix during training.
Summary of the invention
In view of the above-mentioned defects of the prior art, the technical problem to be solved by the present invention is to provide an optimized training method for a collaborative filtering recommendation model that can eliminate the interdependence between the user latent feature matrix and the item latent feature matrix of a matrix factorization recommendation model during training.
By analyzing the training process of matrix factorization recommendation models, it is determined that the interdependence between the user and item latent features during training is caused by references to the state values of the corresponding latent features. Therefore, by introducing a decoupled stochastic gradient descent mechanism, the references of the user and item latent features to the corresponding latent feature state values during training are eliminated, and with them the interdependence between the user latent feature matrix and the item latent feature matrix during training. On this basis, the training processes of the user latent features and the item latent features are parallelized, so as to further improve the scalability of matrix factorization collaborative filtering recommendation models. The present invention is carried out according to the following steps:
Step 1, decouple the user latent feature matrix and/or the item latent feature matrix;
For the matrix factorization recommendation model to be constructed, decouple its user latent feature matrix P and/or its item latent feature matrix Q. After P and Q are decoupled, during training round t they depend only on their own state values and on the initial value of the corresponding latent feature matrix before round t begins; therefore, for round t, the training result of each row vector of P and Q depends only on its own state value, the initial value of the corresponding latent feature vectors, and the corresponding training data, and the interdependence between P and Q is eliminated.
Step 2, judge whether the latent feature matrices have converged; when the user latent feature matrix and the item latent feature matrix have converged, output the trained user latent feature matrix and/or item latent feature matrix; when the user latent feature matrix and/or the item latent feature matrix has not converged, execute Step 3.
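For illustration only, the following Python sketch shows one possible convergence test for Step 2; the patent does not prescribe a concrete criterion, so the relative-change threshold and all names used here are assumptions.

```python
import numpy as np

def has_converged(P_prev, P_curr, Q_prev, Q_curr, tol=1e-4):
    """Illustrative convergence test: stop when neither latent feature
    matrix changes appreciably between training rounds. The threshold and
    the relative-Frobenius-norm criterion are assumptions, not part of the
    patent."""
    dP = np.linalg.norm(P_curr - P_prev) / max(1.0, np.linalg.norm(P_prev))
    dQ = np.linalg.norm(Q_curr - Q_prev) / max(1.0, np.linalg.norm(Q_prev))
    return dP < tol and dQ < tol
```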
Step 3, construct the user latent feature vector and/or item latent feature vector training processes based on decoupled stochastic gradient descent;
According to the decoupled user latent feature matrix and item latent feature matrix, the latent feature matrix training process of the matrix factorization recommendation model is decomposed to obtain the user latent feature matrix training process and/or the item latent feature matrix training process as two training subprocesses. At training round t, for the u-th row vector $p_u$ of the user latent feature matrix P, the training process based on decoupled stochastic gradient descent is:
$$p_u^t = c^M \cdot p_u^{t-1} + \eta \sum_{k=1}^{M} c^{M-k}\left[r_{u,k} - \langle p_u^{t-1}, q_k^{t-1}\rangle\right] \cdot q_k^{t-1},$$
where M is the number of training samples corresponding to $p_u$ and is a positive integer, k is the order in which the training samples of the user latent feature matrix are learned, $1 \le k \le M$, $\eta$ is the learning rate, c is the learning-process reduction coefficient with $c = 1 - \eta\lambda$, $\lambda$ is the regularization factor, and $r_{u,k}$ is the k-th training sample corresponding to $p_u$. At training round t, for the i-th row vector $q_i$ of the item latent feature matrix Q, the training process based on decoupled stochastic gradient descent is:
$$q_i^t = c^H \cdot q_i^{t-1} + \eta \sum_{h=1}^{H} c^{H-h}\left[r_{h,i} - \langle q_i^{t-1}, p_h^{t-1}\rangle\right] \cdot p_h^{t-1},$$
where H is the number of training samples corresponding to $q_i$ and is a positive integer, h is the order in which the training samples of the item latent feature matrix are learned, $1 \le h \le H$, and $r_{h,i}$ is the h-th training sample corresponding to $q_i$.
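As a reading aid for the update rule above, the following Python sketch performs one training round for a single user latent feature vector, with the leading coefficient taken as $c^M$ and the item latent feature matrix frozen at its state of round t-1; all function and variable names are illustrative and not taken from the patent. The item-side update is symmetric, with the roles of P and Q exchanged.

```python
import numpy as np

def update_user_row(p_u, samples_u, Q_prev, eta=0.01, lam=0.02):
    """One decoupled-SGD round for one user latent feature vector p_u.

    samples_u : list of (item_index, rating) pairs, i.e. r_{u,k} for k = 1..M
    Q_prev    : item latent feature matrix at state t-1 (read-only here,
                which is what removes the user/item interdependence)
    eta, lam  : learning rate and regularization factor; c = 1 - eta*lam
    """
    c = 1.0 - eta * lam
    M = len(samples_u)
    p_prev = p_u.copy()                      # p_u^{t-1}
    p_new = (c ** M) * p_prev                # leading term c^M * p_u^{t-1}
    for k, (item, r_uk) in enumerate(samples_u, start=1):
        q_k = Q_prev[item]                   # q_k^{t-1}, fixed during the round
        err = r_uk - p_prev.dot(q_k)         # r_{u,k} - <p_u^{t-1}, q_k^{t-1}>
        p_new += eta * (c ** (M - k)) * err * q_k
    return p_new                             # p_u^t
```

Because the right-hand side uses only states from round t-1, every row of P (and, symmetrically, of Q) can be updated independently of the others within a round.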
The present invention avoids the defect of the traditional training process of matrix factorization collaborative filtering recommendation models, in which each user latent feature vector and item latent feature vector depend on one another so that training cannot be parallelized, and it can provide model support and analysis means for personalized recommendation research.
Preferably, after Step 3, the method further comprises a step of training the user latent feature matrix and/or the item latent feature matrix in parallel, so as to further improve the scalability of the matrix factorization recommendation model. The user latent feature matrix is trained in parallel according to the following steps:
A1, according to the allocation space Z and the training sample set R, distribute the m row vectors of the user latent feature matrix P and their corresponding training samples one by one to the corresponding training nodes $S_1$, until every row vector and its training samples have been assigned to a node; $P_z$ is the set of latent feature vectors distributed to training node $S_1$, and $R_p$ denotes the training samples corresponding to the user latent feature vectors.
A2, after the distribution is finished, train all user latent feature vectors in parallel. The item latent feature matrix is trained in parallel according to the following steps:
B1, according to the allocation space Z and the training sample set R, distribute the n row vectors of the item latent feature matrix Q and their corresponding training samples one by one to the corresponding training nodes $S_2$, until every row vector and its training samples have been assigned to a node; $Q_z$ is the set of latent feature vectors distributed to training node $S_2$, and $R_q$ denotes the training samples corresponding to the item latent feature vectors.
B2, after the distribution is finished, train all item latent feature vectors in parallel.
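A minimal sketch of steps A1/A2 under stated assumptions: the row-to-node assignment is round-robin, the workers are local processes created with Python's multiprocessing module, and update_user_row is the illustrative function sketched above; none of these choices are prescribed by the patent. The item-side steps B1/B2 follow the same pattern with Q and $S_2$.

```python
from multiprocessing import Pool

def distribute_rows(num_rows, num_nodes):
    """A1/B1 (illustrative): assign row indices to training nodes round-robin
    until every row and its training samples belong to exactly one node."""
    return [list(range(z, num_rows, num_nodes)) for z in range(num_nodes)]

def train_partition(args):
    """A2/B2 (illustrative): train one node's user rows for a single round,
    with the item matrix Q_prev frozen at its state of round t-1."""
    rows, P, Q_prev, samples_by_user, eta, lam = args
    return {u: update_user_row(P[u], samples_by_user.get(u, []), Q_prev, eta, lam)
            for u in rows}

def parallel_user_round(P, Q_prev, samples_by_user, num_nodes=4, eta=0.01, lam=0.02):
    """One parallel training round over all user latent feature vectors."""
    parts = distribute_rows(P.shape[0], num_nodes)
    jobs = [(rows, P, Q_prev, samples_by_user, eta, lam) for rows in parts]
    with Pool(num_nodes) as pool:
        results = pool.map(train_partition, jobs)
    for part in results:                     # merge trained rows back into P
        for u, vec in part.items():
            P[u] = vec
    return P
```

When run as a script on platforms that spawn worker processes, the call to parallel_user_round should be placed under an `if __name__ == "__main__":` guard.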
The beneficial effects of the present invention are as follows: by optimizing the training of the collaborative filtering recommendation model, the present invention eliminates the interdependence between the user latent feature matrix and the item latent feature matrix, improves scalability, converges faster, requires fewer training rounds to reach convergence, and speeds up the construction of the recommendation model.
Description of drawings
Fig. 1 is a schematic flow diagram of Embodiment 1 of the present invention.
Fig. 2 is a schematic flow diagram of Embodiment 2 of the present invention.
Fig. 3 is a schematic flow diagram of Embodiment 3 of the present invention.
Fig. 4 is a comparison of the recommendation accuracy of the present invention and the PRMF method.
Fig. 5 is a comparison of the speed-up ratio of the present invention and the PRMF method.
Embodiment
The invention is further described below with reference to the drawings and embodiments:
Embodiment 1: as shown in Fig. 1, an optimized training method for a collaborative filtering recommendation model, characterized in that it is carried out according to the following steps:
Step 1, decouple the user latent feature matrix;
For the matrix factorization recommendation model to be constructed, decouple its user latent feature matrix P;
Step 2, judge whether the latent feature matrix has converged; when the user latent feature matrix has converged, output the trained user latent feature matrix; when the user latent feature matrix has not converged, execute Step 3.
Step 3, construct the user latent feature vector training process based on decoupled stochastic gradient descent;
According to the decoupled user latent feature matrix, the latent feature matrix training process of the matrix factorization recommendation model is decomposed to obtain the user latent feature matrix training subprocess. At training round t, for the u-th row vector $p_u$ of the user latent feature matrix P, the training process based on decoupled stochastic gradient descent is:
$$p_u^t = c^M \cdot p_u^{t-1} + \eta \sum_{k=1}^{M} c^{M-k}\left[r_{u,k} - \langle p_u^{t-1}, q_k^{t-1}\rangle\right] \cdot q_k^{t-1},$$
where M is the number of training samples corresponding to $p_u$ and is a positive integer, k is the order in which the training samples of the user latent feature matrix are learned, $1 \le k \le M$, $\eta$ is the learning rate, c is the learning-process reduction coefficient with $c = 1 - \eta\lambda$, $\lambda$ is the regularization factor, $r_{u,k}$ is the k-th training sample corresponding to $p_u$, and $q_k^{t-1}$ is the state at round t-1 of the item latent feature vector corresponding to $r_{u,k}$.
Step 4, train the user latent feature matrix in parallel and, after training is finished, output the trained user latent feature matrix;
A1, according to the allocation space Z and the training sample set R, distribute the m row vectors of the user latent feature matrix P and their corresponding training samples one by one to the corresponding training nodes $S_1$, until every row vector and its training samples have been assigned to a node; $P_z$ is the set of latent feature vectors distributed to training node $S_1$, and $R_p$ denotes the training samples corresponding to the user latent feature vectors.
A2, after the distribution is finished, train all user latent feature vectors in parallel.
Embodiment 2: as shown in Fig. 2, an optimized training method for a collaborative filtering recommendation model, carried out according to the following steps:
Step 1, decouple the item latent feature matrix;
For the matrix factorization recommendation model to be constructed, decouple its item latent feature matrix Q;
Step 2, judge whether the latent feature matrix has converged; when the item latent feature matrix has converged, output the trained item latent feature matrix; when the item latent feature matrix has not converged, execute Step 3.
Step 3, construct the item latent feature vector training process based on decoupled stochastic gradient descent;
According to the decoupled item latent feature matrix, the latent feature matrix training process of the matrix factorization recommendation model is decomposed to obtain the item latent feature matrix training subprocess. At training round t, for the i-th row vector $q_i$ of the item latent feature matrix Q, the training process based on decoupled stochastic gradient descent is:
$$q_i^t = c^H \cdot q_i^{t-1} + \eta \sum_{h=1}^{H} c^{H-h}\left[r_{h,i} - \langle q_i^{t-1}, p_h^{t-1}\rangle\right] \cdot p_h^{t-1},$$
where H is the number of training samples corresponding to $q_i$ and is a positive integer, h is the order in which the training samples of the item latent feature matrix are learned, $1 \le h \le H$, $r_{h,i}$ is the h-th training sample corresponding to $q_i$, and $p_h^{t-1}$ is the state at round t-1 of the user latent feature vector corresponding to $r_{h,i}$.
Step 4, train the item latent feature matrix in parallel and, after training is finished, output the trained item latent feature matrix;
B1, according to the allocation space Z and the training sample set R, distribute the n row vectors of the item latent feature matrix Q and their corresponding training samples one by one to the corresponding training nodes $S_2$, until every row vector and its training samples have been assigned to a node; $Q_z$ is the set of latent feature vectors distributed to training node $S_2$, and $R_q$ denotes the training samples corresponding to the item latent feature vectors.
B2, after the distribution is finished, train all item latent feature vectors in parallel.
Embodiment 3: as shown in Fig. 3, an optimized training method for a collaborative filtering recommendation model, carried out according to the following steps:
Step 1, decouple the user latent feature matrix and the item latent feature matrix;
For the matrix factorization recommendation model to be constructed, decouple its user latent feature matrix P and its item latent feature matrix Q;
Step 2, judge whether the user latent feature matrix and the item latent feature matrix have converged; when both the user latent feature matrix and the item latent feature matrix have converged, output the trained user latent feature matrix and item latent feature matrix; when the user latent feature matrix or the item latent feature matrix has not converged, execute Step 3.
Step 3, construct the user latent feature vector and item latent feature vector training processes based on decoupled stochastic gradient descent;
According to the decoupled user latent feature matrix and item latent feature matrix, the latent feature matrix training process of the matrix factorization recommendation model is decomposed into two training subprocesses: the user latent feature matrix training process and the item latent feature matrix training process. At training round t, for the u-th row vector $p_u$ of the user latent feature matrix P, the training process based on decoupled stochastic gradient descent is:
$$p_u^t = c^M \cdot p_u^{t-1} + \eta \sum_{k=1}^{M} c^{M-k}\left[r_{u,k} - \langle p_u^{t-1}, q_k^{t-1}\rangle\right] \cdot q_k^{t-1},$$
where M is the number of training samples corresponding to $p_u$ and is a positive integer, k is the order in which the training samples of the user latent feature matrix are learned, $1 \le k \le M$, $\eta$ is the learning rate, c is the learning-process reduction coefficient with $c = 1 - \eta\lambda$, $\lambda$ is the regularization factor, and $r_{u,k}$ is the k-th training sample corresponding to $p_u$. At training round t, for the i-th row vector $q_i$ of the item latent feature matrix Q, the training process based on decoupled stochastic gradient descent is:
$$q_i^t = c^H \cdot q_i^{t-1} + \eta \sum_{h=1}^{H} c^{H-h}\left[r_{h,i} - \langle q_i^{t-1}, p_h^{t-1}\rangle\right] \cdot p_h^{t-1},$$
where H is the number of training samples corresponding to $q_i$ and is a positive integer, h is the order in which the training samples of the item latent feature matrix are learned, $1 \le h \le H$, and $r_{h,i}$ is the h-th training sample corresponding to $q_i$.
Step 4, train the user latent feature matrix and the item latent feature matrix in parallel and, after training is finished, output the trained user latent feature matrix and item latent feature matrix;
The user latent feature matrix is trained in parallel according to the following steps:
A1, according to the allocation space Z and the training sample set R, distribute the m row vectors of the user latent feature matrix P and their corresponding training samples one by one to the corresponding training nodes $S_1$, until every row vector and its training samples have been assigned to a node; $P_z$ is the set of latent feature vectors distributed to training node $S_1$, and $R_p$ denotes the training samples corresponding to the user latent feature vectors.
A2, after the distribution is finished, train all user latent feature vectors in parallel. The item latent feature matrix is trained in parallel according to the following steps:
B1, according to the allocation space Z and the training sample set R, distribute the n row vectors of the item latent feature matrix Q and their corresponding training samples one by one to the corresponding training nodes $S_2$, until every row vector and its training samples have been assigned to a node; $Q_z$ is the set of latent feature vectors distributed to training node $S_2$, and $R_q$ denotes the training samples corresponding to the item latent feature vectors.
B2, after the distribution is finished, train all item latent feature vectors in parallel.
To verify the correctness and accuracy of the method, simulation experiments were run on a small computer cluster of four nodes, each configured with an Intel i5-760 dual-core 2.8 GHz CPU and 8 GB of memory. The dataset used in the experimental verification is the MovieLens 1M dataset, available from http://www.grouplens.org/node/12, which contains more than 1,000,000 ratings from 6040 users on 3900 items; the density of its user-item rating matrix is 4.25%. All ratings lie in the interval [0, 5], and a higher rating indicates a stronger interest of the user in the corresponding item. The experiments use RMSE (root mean square error) as the evaluation index for recommendation accuracy: the lower the RMSE, the higher the recommendation accuracy. The experiments use the speed-up ratio as the evaluation index for parallelization performance: the higher the speed-up ratio, the better the parallelization performance of the algorithm. The algorithm proposed in this patent is compared with the PRMF (Parallel Regularized Matrix Factorization) algorithm proposed in the paper "A Parallel Matrix Factorization based Recommender by Alternating Stochastic Gradient Decent", published in Engineering Applications of Artificial Intelligence in 2011.
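For reference, the two evaluation indices used in the experiments can be computed as in the short sketch below; the function and variable names are illustrative.

```python
import numpy as np

def rmse(test_samples, P, Q):
    """Root mean square error over held-out (user, item, rating) triples;
    lower values mean higher recommendation accuracy."""
    errors = [r - P[u].dot(Q[i]) for u, i, r in test_samples]
    return float(np.sqrt(np.mean(np.square(errors))))

def speedup(serial_seconds, parallel_seconds):
    """Speed-up ratio: serial wall-clock time divided by parallel wall-clock
    time; higher values mean better parallelization performance."""
    return serial_seconds / parallel_seconds
```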
In Fig. 4, line 1 is the recommendation accuracy of the present invention and line 2 is the recommendation accuracy of the PRMF algorithm. As can be seen from Fig. 4, the recommendation accuracy of the present invention and of the PRMF method is essentially the same, but the method proposed by the present invention converges faster and requires fewer training rounds to reach convergence. In Fig. 5, line 3 is the speed-up ratio of the present invention and line 4 is the speed-up ratio of the PRMF algorithm. As can be seen from Fig. 5, as the number of concurrent processes increases, the speed-up ratio of the proposed method is clearly better than that of the PRMF method, so the present invention has better parallel performance.
The preferred embodiments of the present invention have been described in detail above. It should be understood that those of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative effort. Therefore, any technical solution that can be obtained by those skilled in the art through logical analysis, reasoning, or limited experimentation on the basis of the prior art under the concept of the present invention shall fall within the scope of protection determined by the claims.

Claims (2)

1. An optimized training method for a collaborative filtering recommendation model, characterized in that it is carried out according to the following steps:
Step 1, decouple the user latent feature matrix and the item latent feature matrix;
For the matrix factorization recommendation model to be constructed, decouple its user latent feature matrix P and/or its item latent feature matrix Q;
Step 2, judge whether the latent feature matrices have converged; when the user latent feature matrix or the item latent feature matrix has not converged, execute Step 3.
Step 3, construct the user latent feature vector and/or item latent feature vector training processes based on decoupled stochastic gradient descent;
According to the decoupled user latent feature matrix and item latent feature matrix, the latent feature matrix training process of the matrix factorization recommendation model is decomposed to obtain the user latent feature matrix training process and/or the item latent feature matrix training process as two training subprocesses. At training round t, for the u-th row vector $p_u$ of the user latent feature matrix P, the training process based on decoupled stochastic gradient descent is:
$$p_u^t = c^M \cdot p_u^{t-1} + \eta \sum_{k=1}^{M} c^{M-k}\left[r_{u,k} - \langle p_u^{t-1}, q_k^{t-1}\rangle\right] \cdot q_k^{t-1},$$
where M is the number of training samples corresponding to $p_u$ and is a positive integer, k is the order in which the training samples of the user latent feature matrix are learned, $1 \le k \le M$, $\eta$ is the learning rate, c is the learning-process reduction coefficient with $c = 1 - \eta\lambda$, $\lambda$ is the regularization factor, and $r_{u,k}$ is the k-th training sample corresponding to $p_u$; at training round t, for the i-th row vector $q_i$ of the item latent feature matrix Q, the training process based on decoupled stochastic gradient descent is:
$$q_i^t = c^H \cdot q_i^{t-1} + \eta \sum_{h=1}^{H} c^{H-h}\left[r_{h,i} - \langle q_i^{t-1}, p_h^{t-1}\rangle\right] \cdot p_h^{t-1},$$
where H is the number of training samples corresponding to $q_i$ and is a positive integer, h is the order in which the training samples of the item latent feature matrix are learned, $1 \le h \le H$, and $r_{h,i}$ is the h-th training sample corresponding to $q_i$.
2. The optimized training method for a collaborative filtering recommendation model according to claim 1, characterized in that it further comprises, after Step 3, a step of training the user latent feature matrix and/or the item latent feature matrix in parallel;
The user latent feature matrix is trained in parallel according to the following steps:
A1, according to the allocation space Z and the training sample set R, distribute the m row vectors of the user latent feature matrix P and their corresponding training samples one by one to the corresponding training nodes $S_1$, until every row vector and its training samples have been assigned to a node; $P_z$ is the set of latent feature vectors distributed to training node $S_1$, and $R_p$ denotes the training samples corresponding to the user latent feature vectors.
A2, after the distribution is finished, train all user latent feature vectors in parallel. The item latent feature matrix is trained in parallel according to the following steps:
B1, according to the allocation space Z and the training sample set R, distribute the n row vectors of the item latent feature matrix Q and their corresponding training samples one by one to the corresponding training nodes $S_2$, until every row vector and its training samples have been assigned to a node; $Q_z$ is the set of latent feature vectors distributed to training node $S_2$, and $R_q$ denotes the training samples corresponding to the item latent feature vectors.
B2, after the distribution is finished, train all item latent feature vectors in parallel.
CN201210389800.8A 2012-10-15 2012-10-15 Optimal training method of collaborative filtering recommendation model Expired - Fee Related CN102930341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210389800.8A CN102930341B (en) 2012-10-15 2012-10-15 Optimal training method of collaborative filtering recommendation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210389800.8A CN102930341B (en) 2012-10-15 2012-10-15 Optimal training method of collaborative filtering recommendation model

Publications (2)

Publication Number Publication Date
CN102930341A true CN102930341A (en) 2013-02-13
CN102930341B CN102930341B (en) 2015-01-28

Family

ID=47645135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210389800.8A Expired - Fee Related CN102930341B (en) 2012-10-15 2012-10-15 Optimal training method of collaborative filtering recommendation model

Country Status (1)

Country Link
CN (1) CN102930341B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390032A (en) * 2013-07-04 2013-11-13 上海交通大学 Recommendation system and method based on relationship type cooperative topic regression
CN106776928A (en) * 2016-12-01 2017-05-31 重庆大学 Recommend method in position based on internal memory Computational frame, fusion social environment and space-time data
CN109446420A (en) * 2018-10-17 2019-03-08 青岛科技大学 A kind of cross-domain collaborative filtering method and system
CN110390561A (en) * 2019-07-04 2019-10-29 四川金赞科技有限公司 User-financial product of stochastic gradient descent is accelerated to select tendency ultra rapid predictions method and apparatus based on momentum

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129462A (en) * 2011-03-11 2011-07-20 北京航空航天大学 Method for optimizing collaborative filtering recommendation system by aggregation
CN102135989A (en) * 2011-03-09 2011-07-27 北京航空航天大学 Normalized matrix-factorization-based incremental collaborative filtering recommending method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135989A (en) * 2011-03-09 2011-07-27 北京航空航天大学 Normalized matrix-factorization-based incremental collaborative filtering recommending method
CN102129462A (en) * 2011-03-11 2011-07-20 北京航空航天大学 Method for optimizing collaborative filtering recommendation system by aggregation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIN LUO ET AL: "A parallel matrix factorization based recommender by alternating stochastic gradient decent", 《ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE》, 25 September 2011 (2011-09-25), pages 1403 - 1412 *
XIN LUO ET AL: "Incremental Collaborative Filtering recommender based on Regularized Matrix Factorization", 《KNOWLEDGE-BASED SYSTEMS》, 16 September 2011 (2011-09-16), pages 271 - 280, XP 028444042, DOI: doi:10.1016/j.knosys.2011.09.006 *
YANG YANG ET AL: "Collaborative filtering recommendation algorithm based on matrix factorization and user nearest-neighbor model", 《计算机应用》 (Journal of Computer Applications), vol. 32, no. 2, 1 February 2012 (2012-02-01), pages 395 - 398 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390032A (en) * 2013-07-04 2013-11-13 上海交通大学 Recommendation system and method based on relationship type cooperative topic regression
CN103390032B (en) * 2013-07-04 2017-01-18 上海交通大学 Recommendation system and method based on relationship type cooperative topic regression
CN106776928A (en) * 2016-12-01 2017-05-31 重庆大学 Recommend method in position based on internal memory Computational frame, fusion social environment and space-time data
CN106776928B (en) * 2016-12-01 2020-11-24 重庆大学 Position recommendation method based on memory computing framework and fusing social contact and space-time data
CN109446420A (en) * 2018-10-17 2019-03-08 青岛科技大学 A kind of cross-domain collaborative filtering method and system
CN109446420B (en) * 2018-10-17 2022-01-25 青岛科技大学 Cross-domain collaborative filtering method and system
CN110390561A (en) * 2019-07-04 2019-10-29 四川金赞科技有限公司 User-financial product of stochastic gradient descent is accelerated to select tendency ultra rapid predictions method and apparatus based on momentum

Also Published As

Publication number Publication date
CN102930341B (en) 2015-01-28

Similar Documents

Publication Publication Date Title
Wang et al. Bi-directional long short-term memory method based on attention mechanism and rolling update for short-term load forecasting
López et al. Application of SOM neural networks to short-term load forecasting: The Spanish electricity market case study
CN102231144B (en) A kind of power distribution network method for predicting theoretical line loss based on Boosting algorithm
CN102567391B (en) Method and device for building classification forecasting mixed model
CN109902222A (en) Recommendation method and device
Hellmann et al. Evolution of social networks
CN109791626A (en) The coding method of neural network weight, computing device and hardware system
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
CN106909990A (en) A kind of Forecasting Methodology and device based on historical data
CN104155574A (en) Power distribution network fault classification method based on adaptive neuro-fuzzy inference system
Ghaderi et al. Behavioral simulation and optimization of generation companies in electricity markets by fuzzy cognitive map
CN103620624A (en) Method and apparatus for local competitive learning rule that leads to sparse connectivity
CN104636985A (en) Method for predicting radio disturbance of electric transmission line by using improved BP (back propagation) neural network
CN102930341B (en) Optimal training method of collaborative filtering recommendation model
CN103365727A (en) Host load forecasting method in cloud computing environment
Agami et al. A neural network based dynamic forecasting model for Trend Impact Analysis
CN106803135A (en) The Forecasting Methodology and device of a kind of photovoltaic power generation system output power
CN107330589A (en) Satellite network coordinates the quantitative evaluation method and system of risk
CN110019420A (en) A kind of data sequence prediction technique and calculate equipment
CN110264012A (en) Renewable energy power combination prediction technique and system based on empirical mode decomposition
CN109145342A (en) Automatic wiring system and method
WO2023241207A1 (en) Data processing method, apparatus and device, computer-readable storage medium, and computer program product
Huo et al. A BP neural network predictor model for stock price
CN114091776A (en) K-means-based multi-branch AGCNN short-term power load prediction method
CN109117352B (en) Server performance prediction method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: CHENGDU GUOKE HAIBO INFORMATION TECHNOLOGY CO., LT

Free format text: FORMER OWNER: LUO XIN

Effective date: 20141226

Free format text: FORMER OWNER: XIA YUNNI

Effective date: 20141226

C41 Transfer of patent application or patent right or utility model
C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Luo Xin

Inventor after: Chen Peng

Inventor after: Wu Lei

Inventor after: Xia Yunni

Inventor before: Luo Xin

Inventor before: Xia Yunni

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: LUO XIN XIA YUNNI TO: LUO XIN CHEN PENG WU LEI XIA YUNNI

Free format text: CORRECT: ADDRESS; FROM: 400012 YUZHONG, CHONGQING TO: 610041 CHENGDU, SICHUAN PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20141226

Address after: 610041, 4 Building 1, ideal center, No. 38 Tianyi street, hi tech Zone, Sichuan, Chengdu

Applicant after: CHENGDU GKHB INFORMATION TECHNOLOGY CO., LTD.

Address before: 400012 No. 187 East Jiefang Road, Yuzhong District, Chongqing, 25-4

Applicant before: Luo Xin

Applicant before: Xia Yunni

C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
CB03 Change of inventor or designer information

Inventor after: Luo Xin

Inventor after: Xia Yunni

Inventor before: Luo Xin

Inventor before: Chen Peng

Inventor before: Wu Lei

Inventor before: Xia Yunni

COR Change of bibliographic data
TR01 Transfer of patent right

Effective date of registration: 20160623

Address after: 400045 Chongqing city Shapingba District Yang Gong Bridge No. 104 of No. 7 20-3

Patentee after: Chongqing cloud core software technology Co., Ltd.

Address before: 610041, 4 Building 1, ideal center, No. 38 Tianyi street, hi tech Zone, Sichuan, Chengdu

Patentee before: CHENGDU GKHB INFORMATION TECHNOLOGY CO., LTD.

C41 Transfer of patent application or patent right or utility model
CB03 Change of inventor or designer information

Inventor after: Luo Xin

Inventor after: Chen Peng

Inventor after: Wu Lei

Inventor after: Xia Yunni

Inventor before: Luo Xin

Inventor before: Xia Yunni

COR Change of bibliographic data
TR01 Transfer of patent right

Effective date of registration: 20160811

Address after: 610041, 4 Building 1, ideal center, No. 38 Tianyi street, hi tech Zone, Sichuan, Chengdu

Patentee after: CHENGDU GKHB INFORMATION TECHNOLOGY CO., LTD.

Address before: 400045 Chongqing city Shapingba District Yang Gong Bridge No. 104 of No. 7 20-3

Patentee before: Chongqing cloud core software technology Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150128

Termination date: 20191015