CN102930341B - Optimal training method of collaborative filtering recommendation model - Google Patents

Optimal training method of collaborative filtering recommendation model

Info

Publication number
CN102930341B
CN102930341B (application CN201210389800.8A)
Authority
CN
China
Prior art keywords
hidden
training
eigenmatrix
user
project
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210389800.8A
Other languages
Chinese (zh)
Other versions
CN102930341A (en)
Inventor
Luo Xin (罗辛)
Chen Peng (陈鹏)
Wu Lei (吴磊)
Xia Yunni (夏云霓)
Current Assignee
CHENGDU GKHB INFORMATION TECHNOLOGY CO., LTD.
Original Assignee
CHENGDU GKHB INFORMATION TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by CHENGDU GKHB INFORMATION TECHNOLOGY Co Ltd filed Critical CHENGDU GKHB INFORMATION TECHNOLOGY Co Ltd
Priority to CN201210389800.8A priority Critical patent/CN102930341B/en
Publication of CN102930341A publication Critical patent/CN102930341A/en
Application granted granted Critical
Publication of CN102930341B publication Critical patent/CN102930341B/en


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an optimized training method for a collaborative filtering recommendation model, belonging to the technical field of data mining and personalized recommendation. The method comprises the following steps: first, the latent feature matrices are arranged separately (decoupled), so as to eliminate the interdependence between user latent features and item latent features during training; second, the training process is divided into a user latent-feature training process and an item latent-feature training process, each based on decoupled stochastic gradient descent; finally, the two training processes are executed. By optimizing the training process of the collaborative filtering recommendation model, the method eliminates the interdependence between the user latent feature matrix and the item latent feature matrix, improving scalability. It converges faster, requires fewer training rounds to reach convergence, and thereby speeds up the construction of the recommendation model.

Description

An optimized training method for a collaborative filtering recommendation model
Technical field
The invention belongs to the technical field of data mining and personalized recommendation, and in particular relates to an optimized training method for a collaborative filtering recommendation model.
Background technology
The explosive growth of information on the Internet has brought the problem of information overload: so much information is presented at once that users find it difficult to filter out the part that is useful to them, and information utilization actually decreases. Personalized recommendation is an important branch of data mining research; its goal is to provide the intelligent service of "information finding people" by building personalized recommendation systems, thereby addressing information overload at its root.
As the source from which recommendations are generated, the recommendation model is the core component of a personalized recommendation system and the focus of research in this field. Personalized recommendation technology analyzes the intrinsic links between users and information according to users' historical behavior, connecting each user with the information that may interest them and thereby relieving information overload. Among recommendation models, those based on matrix factorization are widely used because they offer high recommendation accuracy and good scalability. However, during the training of a matrix factorization model, the user latent features and the item latent features depend on each other, so training cannot be parallelized, which limits the further adoption of such models.
Therefore, building on a deep study of the training process of matrix factorization recommendation models, those skilled in the art have sought to develop an optimized training method for collaborative filtering recommendation models that eliminates the interdependence between the user latent feature matrix and the item latent feature matrix during training.
Summary of the invention
In view of the above defects of the prior art, the technical problem to be solved by the present invention is to provide an optimized training method for a collaborative filtering recommendation model that eliminates the interdependence between the user latent feature matrix and the item latent feature matrix during the training of a matrix factorization recommendation model.
By analyzing the training process of matrix factorization recommendation models, it is determined that the interdependence of user and item latent features during training is caused by references to the current state values of the corresponding latent features. Therefore, by introducing a decoupled stochastic gradient descent mechanism, the references of user and item latent features to each other's current state values during training are removed, which eliminates the interdependence between the user latent feature matrix and the item latent feature matrix. On this basis, the training of the user and item latent features is parallelized, further improving the scalability of matrix factorization collaborative filtering models. The present invention proceeds according to the following steps:
Step 1: decouple the user latent feature matrix and/or the item latent feature matrix.
For the matrix factorization recommendation model to be constructed, decouple its user latent feature matrix P and/or its item latent feature matrix Q. Once decoupled, P and Q in round t of training depend only on their own states and on the values the corresponding latent matrices held before round t began. Thus, for training round t, the training result for each row vector of P and Q depends only on its own state, the previous-round values of the corresponding latent feature vectors, and the corresponding training data, so the interdependence between P and Q is eliminated.
Step 2: judge whether the latent feature matrices have converged. When the user latent feature matrix and/or the item latent feature matrix has converged, output the trained matrix or matrices; when either has not converged, perform Step 3.
Step 3: construct the user and/or item latent feature vector training processes based on decoupled stochastic gradient descent.
Based on the decoupled user and item latent feature matrices, the latent-feature training process of the matrix factorization model is decomposed into two training subprocesses: one for the user latent feature matrix and one for the item latent feature matrix. At training round t, for the u-th row vector p_u of the user latent feature matrix P, the decoupled stochastic gradient descent update is:

p_u^t = c^M · p_u^{t-1} + η · Σ_{k=1}^{M} c^{M-k} · [r_{u,k} − ⟨p_u^{t-1}, q_k^{t-1}⟩] · q_k^{t-1}

where M (a positive integer) is the number of training samples for p_u; k is the order in which the samples are learned, 1 ≤ k ≤ M; η is the learning rate; c = 1 − ηλ is the reduction coefficient, with λ the regularization factor; r_{u,k} is the k-th training sample for p_u; and q_k^{t-1} is the state at round t−1 of the item latent vector corresponding to r_{u,k}. At training round t, for the i-th row vector q_i of the item latent feature matrix Q, the decoupled stochastic gradient descent update is:

q_i^t = c^H · q_i^{t-1} + η · Σ_{h=1}^{H} c^{H-h} · [r_{h,i} − ⟨q_i^{t-1}, p_h^{t-1}⟩] · p_h^{t-1}

where H (a positive integer) is the number of training samples for q_i; h is the order in which the samples are learned, 1 ≤ h ≤ H; r_{h,i} is the h-th training sample for q_i; and p_h^{t-1} is the state at round t−1 of the user latent vector corresponding to r_{h,i}.
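As a concrete illustration, the user-side update can be sketched in a few lines of NumPy. This is a minimal sketch, not the patented implementation: the function name, the (item index, rating) sample layout, and the default values of η and λ are our assumptions.

```python
import numpy as np

def update_user_row(p_u, ratings, Q_prev, eta=0.1, lam=0.1):
    """Decoupled SGD update for one user latent vector p_u (hypothetical helper).

    ratings: the M training samples for this user, as (item_index, r_uk) pairs
             in learning order k = 1..M.
    Q_prev:  the item latent matrix frozen at its round t-1 state, so the
             result depends only on p_u and previous-round values.
    """
    c = 1.0 - eta * lam                  # reduction coefficient c = 1 - eta*lambda
    M = len(ratings)
    p_prev = p_u.copy()                  # p_u^{t-1}, held fixed inside the sum
    acc = np.zeros_like(p_prev)
    for k, (i, r_uk) in enumerate(ratings, start=1):
        err = r_uk - p_prev @ Q_prev[i]  # r_{u,k} - <p_u^{t-1}, q_k^{t-1}>
        acc += c ** (M - k) * err * Q_prev[i]
    return c ** M * p_prev + eta * acc   # p_u^t
```

The item-side update for q_i is symmetric, with the roles of P and Q exchanged.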
The invention avoids the defect of traditional matrix factorization collaborative filtering training, in which each user latent feature vector and each item latent feature vector depend on one another and the training cannot be parallelized, and it supplies a model and analytical means to support personalized recommendation research.
Preferably, after Step 3 there is a further step of training the user latent feature matrix and/or the item latent feature matrix in parallel, to further improve the scalability of the matrix factorization recommendation model. The user latent feature matrix is trained in parallel according to the following steps:
A1. According to the distribution space Z and the training sample set R, distribute the m-th row vector of the user latent feature matrix P and its corresponding training samples to the corresponding training node S1, one row at a time, until done; P_z denotes the set of latent vectors distributed to training node S1, and R_P denotes the training samples corresponding to the user latent feature vectors.
A2. After distribution is complete, train all user latent feature vectors in parallel. The item latent feature matrix is trained in parallel according to the following steps:
B1. According to the distribution space Z and the training sample set R, distribute the n-th row vector of the item latent feature matrix Q and its corresponding training samples to the corresponding training node S2, one row at a time, until done; Q_z denotes the set of latent vectors distributed to training node S2, and R_Q denotes the training samples corresponding to the item latent feature vectors.
B2. After distribution is complete, train all item latent feature vectors in parallel.
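Because each row's update reads only previous-round state, steps A1-B2 reduce to "partition the rows, then update each partition independently". The sketch below is a hypothetical in-process stand-in for the cluster setup: `partition_rows`, the round-robin assignment policy, and the thread pool all stand in for the patent's training nodes S1/S2 and are not prescribed by it.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def partition_rows(num_rows, Z):
    """Step A1/B1: assign each row index of a latent matrix to one of Z nodes."""
    buckets = [[] for _ in range(Z)]
    for m in range(num_rows):
        buckets[m % Z].append(m)         # round-robin is one simple policy
    return buckets

def train_rows_in_parallel(P, samples_by_row, other_prev, update_row, Z=2):
    """Step A2/B2: decoupled rows are independent, so update them concurrently."""
    buckets = partition_rows(P.shape[0], Z)

    def work(rows):
        return {m: update_row(P[m], samples_by_row.get(m, []), other_prev)
                for m in rows}

    with ThreadPoolExecutor(max_workers=Z) as pool:
        chunks = list(pool.map(work, buckets))
    P_new = P.copy()                     # gather per-node results into one matrix
    for chunk in chunks:
        for m, vec in chunk.items():
            P_new[m] = vec
    return P_new
```

The same routine trains Q in parallel by passing Q as the first argument and the previous round's P as `other_prev`.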
The beneficial effects of the invention are as follows: by optimizing the training of the collaborative filtering recommendation model, the invention eliminates the interdependence between the user latent feature matrix and the item latent feature matrix, improves scalability, converges faster, requires fewer training rounds to reach convergence, and speeds up the construction of the recommendation model.
Brief description of the drawings
Fig. 1 is a flow chart of Embodiment 1 of the present invention.
Fig. 2 is a flow chart of Embodiment 2 of the present invention.
Fig. 3 is a flow chart of Embodiment 3 of the present invention.
Fig. 4 compares the recommendation accuracy of the present invention with that of the PRMF method.
Fig. 5 compares the speed-up ratio of the present invention with that of the PRMF method.
Detailed description of the embodiments
The invention is further described below with reference to the drawings and embodiments.
Embodiment 1: as shown in Fig. 1, an optimized training method for a collaborative filtering recommendation model proceeds according to the following steps:
Step 1: decouple the user latent feature matrix.
For the matrix factorization recommendation model to be constructed, decouple its user latent feature matrix P.
Step 2: judge whether the latent feature matrix has converged. When the user latent feature matrix has converged, output the trained matrix; when it has not, perform Step 3.
Step 3: construct the user latent feature vector training process based on decoupled stochastic gradient descent.
Based on the decoupled user latent feature matrix, the latent-feature training process of the matrix factorization model is decomposed to obtain the user latent feature training subprocess. At training round t, for the u-th row vector p_u of the user latent feature matrix P, the decoupled stochastic gradient descent update is:

p_u^t = c^M · p_u^{t-1} + η · Σ_{k=1}^{M} c^{M-k} · [r_{u,k} − ⟨p_u^{t-1}, q_k^{t-1}⟩] · q_k^{t-1}

where M (a positive integer) is the number of training samples for p_u; k is the order in which the samples are learned, 1 ≤ k ≤ M; η is the learning rate; c = 1 − ηλ is the reduction coefficient, with λ the regularization factor; r_{u,k} is the k-th training sample for p_u; and q_k^{t-1} is the state at round t−1 of the item latent vector corresponding to r_{u,k}.
Step 4: train the user latent feature matrix in parallel, and output the trained matrix when done.
A1. According to the distribution space Z and the training sample set R, distribute the m-th row vector of the user latent feature matrix P and its corresponding training samples to the corresponding training node S1, one row at a time, until done; P_z denotes the set of latent vectors distributed to training node S1, and R_P denotes the training samples corresponding to the user latent feature vectors.
A2. After distribution is complete, train all user latent feature vectors in parallel.
Embodiment 2: as shown in Fig. 2, an optimized training method for a collaborative filtering recommendation model proceeds according to the following steps:
Step 1: decouple the item latent feature matrix.
For the matrix factorization recommendation model to be constructed, decouple its item latent feature matrix Q.
Step 2: judge whether the latent feature matrix has converged. When the item latent feature matrix has converged, output the trained matrix; when it has not, perform Step 3.
Step 3: construct the item latent feature vector training process based on decoupled stochastic gradient descent.
Based on the decoupled item latent feature matrix, the latent-feature training process of the matrix factorization model is decomposed to obtain the item latent feature training subprocess. At training round t, for the i-th row vector q_i of the item latent feature matrix Q, the decoupled stochastic gradient descent update is:

q_i^t = c^H · q_i^{t-1} + η · Σ_{h=1}^{H} c^{H-h} · [r_{h,i} − ⟨q_i^{t-1}, p_h^{t-1}⟩] · p_h^{t-1}

where H (a positive integer) is the number of training samples for q_i; h is the order in which the samples are learned, 1 ≤ h ≤ H; r_{h,i} is the h-th training sample for q_i; and p_h^{t-1} is the state at round t−1 of the user latent vector corresponding to r_{h,i}.
Step 4: train the item latent feature matrix in parallel, and output the trained matrix when done.
B1. According to the distribution space Z and the training sample set R, distribute the n-th row vector of the item latent feature matrix Q and its corresponding training samples to the corresponding training node S2, one row at a time, until done; Q_z denotes the set of latent vectors distributed to training node S2, and R_Q denotes the training samples corresponding to the item latent feature vectors.
B2. After distribution is complete, train all item latent feature vectors in parallel.
Embodiment 3: as shown in Fig. 3, an optimized training method for a collaborative filtering recommendation model proceeds according to the following steps:
Step 1: decouple the user latent feature matrix and the item latent feature matrix.
For the matrix factorization recommendation model to be constructed, decouple its user latent feature matrix P and its item latent feature matrix Q.
Step 2: judge whether the user and item latent feature matrices have converged. When both have converged, output the trained matrices; when either has not converged, perform Step 3.
Step 3: construct the user and item latent feature vector training processes based on decoupled stochastic gradient descent.
Based on the decoupled user and item latent feature matrices, the latent-feature training process of the matrix factorization model is decomposed into two training subprocesses: one for the user latent feature matrix and one for the item latent feature matrix. At training round t, for the u-th row vector p_u of the user latent feature matrix P, the decoupled stochastic gradient descent update is:

p_u^t = c^M · p_u^{t-1} + η · Σ_{k=1}^{M} c^{M-k} · [r_{u,k} − ⟨p_u^{t-1}, q_k^{t-1}⟩] · q_k^{t-1}

where M (a positive integer) is the number of training samples for p_u; k is the order in which the samples are learned, 1 ≤ k ≤ M; η is the learning rate; c = 1 − ηλ is the reduction coefficient, with λ the regularization factor; and r_{u,k} is the k-th training sample for p_u. At training round t, for the i-th row vector q_i of the item latent feature matrix Q, the decoupled stochastic gradient descent update is:

q_i^t = c^H · q_i^{t-1} + η · Σ_{h=1}^{H} c^{H-h} · [r_{h,i} − ⟨q_i^{t-1}, p_h^{t-1}⟩] · p_h^{t-1}

where H (a positive integer) is the number of training samples for q_i; h is the order in which the samples are learned, 1 ≤ h ≤ H; and r_{h,i} is the h-th training sample for q_i.
Step 4: train the user and item latent feature matrices in parallel, and output the trained matrices when done.
The user latent feature matrix is trained in parallel according to the following steps:
A1. According to the distribution space Z and the training sample set R, distribute the m-th row vector of the user latent feature matrix P and its corresponding training samples to the corresponding training node S1, one row at a time, until done; P_z denotes the set of latent vectors distributed to training node S1, and R_P denotes the training samples corresponding to the user latent feature vectors.
A2. After distribution is complete, train all user latent feature vectors in parallel. The item latent feature matrix is trained in parallel according to the following steps:
B1. According to the distribution space Z and the training sample set R, distribute the n-th row vector of the item latent feature matrix Q and its corresponding training samples to the corresponding training node S2, one row at a time, until done; Q_z denotes the set of latent vectors distributed to training node S2, and R_Q denotes the training samples corresponding to the item latent feature vectors.
B2. After distribution is complete, train all item latent feature vectors in parallel.
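The full alternating procedure of this embodiment — initialize P and Q, then each round retrain every p_u against the previous round's Q and every q_i against the previous round's P, stopping when both matrices stop changing — can be sketched as below. The sketch is serial for brevity; the latent dimension f, the hyperparameters η and λ, the tolerance, and the round cap are illustrative choices, not values from the patent.

```python
import numpy as np

def train(R_samples, n_users, n_items, f=8, eta=0.01, lam=0.02,
          tol=1e-4, max_rounds=200):
    """Alternating decoupled-SGD training (sketch of Embodiment 3)."""
    rng = np.random.default_rng(42)
    P = rng.standard_normal((n_users, f)) * 0.1
    Q = rng.standard_normal((n_items, f)) * 0.1
    c = 1.0 - eta * lam
    by_user, by_item = {}, {}
    for u, i, r in R_samples:                         # group samples per row
        by_user.setdefault(u, []).append((i, r))
        by_item.setdefault(i, []).append((u, r))
    for _ in range(max_rounds):
        P_prev, Q_prev = P.copy(), Q.copy()           # round t-1 state
        for u, samples in by_user.items():            # user subprocess: reads Q_prev only
            M, acc = len(samples), np.zeros(f)
            for k, (i, r) in enumerate(samples, 1):
                acc += c ** (M - k) * (r - P_prev[u] @ Q_prev[i]) * Q_prev[i]
            P[u] = c ** M * P_prev[u] + eta * acc
        for i, samples in by_item.items():            # item subprocess: reads P_prev only
            H, acc = len(samples), np.zeros(f)
            for h, (u, r) in enumerate(samples, 1):
                acc += c ** (H - h) * (r - Q_prev[i] @ P_prev[u]) * P_prev[u]
            Q[i] = c ** H * Q_prev[i] + eta * acc
        if max(np.abs(P - P_prev).max(), np.abs(Q - Q_prev).max()) < tol:
            break                                     # Step 2: both matrices converged
    return P, Q
```

Because each per-row update reads only P_prev and Q_prev, the two inner loops are exactly the subprocesses that Step 4 distributes across training nodes.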
To verify the correctness and accuracy of the method, simulation experiments were run on a small computer cluster of four nodes, each configured with an Intel i5-760 dual-core 2.8 GHz CPU and 8 GB of memory. The experiments used the MovieLens 1M data set (available from http://www.grouplens.org/node/12), which contains more than one million ratings given by 6,040 users to 3,900 items; the density of its user-item rating matrix is 4.25%. All ratings lie in the interval [0, 5], and a higher rating indicates stronger user interest in the corresponding item. The experiments use RMSE (root-mean-square error) as the evaluation index of recommendation accuracy: the lower the RMSE, the higher the accuracy. They use the speed-up ratio as the evaluation index of parallelization performance: the higher the speed-up ratio, the better the parallelization. The algorithm proposed in this patent is compared with the PRMF (parallel Regularized Matrix Factorization) algorithm proposed in the paper "A Parallel Matrix Factorization based Recommender by Alternating Stochastic Gradient Decent", published in Engineering Applications of Artificial Intelligence in 2011.
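For reference, the two evaluation indices used in these experiments can be computed as follows; the function names are ours, not from the patent.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: lower RMSE means higher recommendation accuracy."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def speedup(t_serial, t_parallel):
    """Speed-up ratio: serial running time over parallel running time;
    higher means better parallelization performance."""
    return t_serial / t_parallel
```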
In Fig. 4, line 1 shows the recommendation accuracy of the present invention and line 2 that of the PRMF algorithm. As Fig. 4 shows, the two methods reach essentially the same recommendation accuracy, but the proposed method converges faster and requires fewer training rounds to do so. In Fig. 5, line 3 shows the speed-up ratio of the present invention and line 4 that of the PRMF algorithm. As the number of concurrent processes increases, the speed-up ratio of the proposed method is clearly better than that of the PRMF method, so the present invention has better parallel performance.
The preferred embodiments of the present invention have been described in detail above. It should be understood that those of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative effort. Therefore, any technical solution that a person skilled in the art can obtain from the prior art through logical analysis, reasoning, or limited experimentation on the basis of the inventive concept shall fall within the scope of protection determined by the claims.

Claims (1)

1. An optimized training method for a collaborative filtering recommendation model, characterized by proceeding according to the following steps:
Step 1: decouple the user latent feature matrix and the item latent feature matrix;
for the matrix factorization recommendation model to be constructed, decouple its user latent feature matrix P and/or its item latent feature matrix Q;
Step 2: judge whether the latent feature matrices have converged; when the user latent feature matrix or the item latent feature matrix has not converged, perform Step 3;
Step 3: construct the user and/or item latent feature vector training processes based on decoupled stochastic gradient descent;
based on the decoupled user and item latent feature matrices, the latent-feature training process of the matrix factorization model is decomposed into two training subprocesses: one for the user latent feature matrix and one for the item latent feature matrix; at training round t, for the u-th row vector p_u of the user latent feature matrix P, the decoupled stochastic gradient descent update is:

p_u^t = c^M · p_u^{t-1} + η · Σ_{k=1}^{M} c^{M-k} · [r_{u,k} − ⟨p_u^{t-1}, q_k^{t-1}⟩] · q_k^{t-1}

where M (a positive integer) is the number of training samples for p_u; k is the order in which the samples are learned, 1 ≤ k ≤ M; η is the learning rate; c = 1 − ηλ is the reduction coefficient, with λ the regularization factor; and r_{u,k} is the k-th training sample for p_u; at training round t, for the i-th row vector q_i of the item latent feature matrix Q, the decoupled stochastic gradient descent update is:

q_i^t = c^H · q_i^{t-1} + η · Σ_{h=1}^{H} c^{H-h} · [r_{h,i} − ⟨q_i^{t-1}, p_h^{t-1}⟩] · p_h^{t-1}

where H (a positive integer) is the number of training samples for q_i; h is the order in which the samples are learned, 1 ≤ h ≤ H; and r_{h,i} is the h-th training sample for q_i;
after Step 3, a step of training the user latent feature matrix and/or the item latent feature matrix in parallel is also included;
the user latent feature matrix is trained in parallel according to the following steps:
A1. according to the distribution space Z and the training sample set R, distribute the m-th row vector of the user latent feature matrix P and its corresponding training samples to the corresponding training node S1, one row at a time, until done; P_z denotes the set of latent vectors distributed to training node S1, and R_P denotes the training samples corresponding to the user latent feature matrix;
A2. after distribution is complete, train all user latent feature vectors in parallel;
the item latent feature matrix is trained in parallel according to the following steps:
B1. according to the distribution space Z and the training sample set R, distribute the n-th row vector of the item latent feature matrix Q and its corresponding training samples to the corresponding training node S2, one row at a time, until done; Q_z denotes the set of latent vectors distributed to training node S2, and R_Q denotes the training samples corresponding to the item latent feature matrix;
B2. after distribution is complete, train all item latent feature vectors in parallel.
CN201210389800.8A 2012-10-15 2012-10-15 Optimal training method of collaborative filtering recommendation model Expired - Fee Related CN102930341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210389800.8A CN102930341B (en) 2012-10-15 2012-10-15 Optimal training method of collaborative filtering recommendation model


Publications (2)

Publication Number Publication Date
CN102930341A CN102930341A (en) 2013-02-13
CN102930341B true CN102930341B (en) 2015-01-28

Family

ID=47645135

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210389800.8A Expired - Fee Related CN102930341B (en) 2012-10-15 2012-10-15 Optimal training method of collaborative filtering recommendation model

Country Status (1)

Country Link
CN (1) CN102930341B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390032B (en) * 2013-07-04 2017-01-18 上海交通大学 Recommendation system and method based on relationship type cooperative topic regression
CN106776928B (en) * 2016-12-01 2020-11-24 重庆大学 Position recommendation method based on memory computing framework and fusing social contact and space-time data
CN109446420B (en) * 2018-10-17 2022-01-25 青岛科技大学 Cross-domain collaborative filtering method and system
CN110390561B (en) * 2019-07-04 2022-04-29 壹融站信息技术(深圳)有限公司 User-financial product selection tendency high-speed prediction method and device based on momentum acceleration random gradient decline

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129462A (en) * 2011-03-11 2011-07-20 北京航空航天大学 Method for optimizing collaborative filtering recommendation system by aggregation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102135989A (en) * 2011-03-09 2011-07-27 北京航空航天大学 Normalized matrix-factorization-based incremental collaborative filtering recommending method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A parallel matrix factorization based recommender by alternating stochastic gradient decent; Xin Luo et al.; Engineering Applications of Artificial Intelligence; 2011-09-25; 1403-1412 *
Incremental Collaborative Filtering recommender based on Regularized Matrix Factorization; Xin Luo et al.; Knowledge-Based Systems; 2011-09-16; 271-280 *
Collaborative filtering recommendation algorithm based on matrix factorization and user nearest-neighbor model; Yang Yang et al.; Journal of Computer Applications; 2012-02-01; vol. 32, no. 2; 395-398 *

Also Published As

Publication number Publication date
CN102930341A (en) 2013-02-13

Similar Documents

Publication Publication Date Title
Wang et al. Bi-directional long short-term memory method based on attention mechanism and rolling update for short-term load forecasting
CN112365040B (en) Short-term wind power prediction method based on multi-channel convolution neural network and time convolution network
Mocanu et al. Unsupervised energy prediction in a Smart Grid context using reinforcement cross-building transfer learning
CN102231144B (en) A kind of power distribution network method for predicting theoretical line loss based on Boosting algorithm
Borgstein et al. Evaluating energy performance in non-domestic buildings: A review
Uzlu et al. Estimates of energy consumption in Turkey using neural networks with the teaching–learning-based optimization algorithm
López et al. Application of SOM neural networks to short-term load forecasting: The Spanish electricity market case study
Kankal et al. Modeling and forecasting of Turkey’s energy consumption using socio-economic and demographic variables
Chen et al. On the estimation of transfer functions, regularizations and Gaussian processes—Revisited
Hellmann et al. Evolution of social networks
CN103886218B (en) Storehouse, the lake algal bloom Forecasting Methodology compensated with neutral net and support vector machine based on polynary non-stationary time series
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
CN106909990A (en) A kind of Forecasting Methodology and device based on historical data
CN102930341B (en) Optimal training method of collaborative filtering recommendation model
CN104155574A (en) Power distribution network fault classification method based on adaptive neuro-fuzzy inference system
Yu et al. Error correction method based on data transformational GM (1, 1) and application on tax forecasting
Ghaderi et al. Behavioral simulation and optimization of generation companies in electricity markets by fuzzy cognitive map
CN109165799A (en) The class&#39;s of walking education course arrangement system based on genetic algorithm
CN105184368A (en) Distributed extreme learning machine optimization integrated framework system and method
Agami et al. A neural network based dynamic forecasting model for Trend Impact Analysis
Alizadeh et al. Monthly Brent oil price forecasting using artificial neural networks and a crisis index
CN104503420A (en) Non-linear process industry fault prediction method based on novel FDE-ELM and EFSM
Zhang et al. Predicting rooftop solar adoption using agent-based modeling
Huo et al. A BP neural network predictor model for stock price
Zhu et al. A day-ahead industrial load forecasting model using load change rate features and combining FA-ELM and the AdaBoost algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: CHENGDU GUOKE HAIBO INFORMATION TECHNOLOGY CO., LT

Free format text: FORMER OWNER: LUO XIN

Effective date: 20141226

Free format text: FORMER OWNER: XIA YUNNI

Effective date: 20141226

C41 Transfer of patent application or patent right or utility model
C53 Correction of patent for invention or patent application
CB03 Change of inventor or designer information

Inventor after: Luo Xin

Inventor after: Chen Peng

Inventor after: Wu Lei

Inventor after: Xia Yunni

Inventor before: Luo Xin

Inventor before: Xia Yunni

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: LUO XIN XIA YUNNI TO: LUO XIN CHEN PENG WU LEI XIA YUNNI

Free format text: CORRECT: ADDRESS; FROM: 400012 YUZHONG, CHONGQING TO: 610041 CHENGDU, SICHUAN PROVINCE

TA01 Transfer of patent application right

Effective date of registration: 20141226

Address after: Building 4, Unit 1, Ideal Center, No. 38 Tianyi Street, High-tech Zone, Chengdu, Sichuan 610041

Applicant after: CHENGDU GKHB INFORMATION TECHNOLOGY CO., LTD.

Address before: 25-4, No. 187 Jiefang East Road, Yuzhong District, Chongqing 400012

Applicant before: Luo Xin

Applicant before: Xia Yunni

C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
CB03 Change of inventor or designer information

Inventor after: Luo Xin

Inventor after: Xia Yunni

Inventor before: Luo Xin

Inventor before: Chen Peng

Inventor before: Wu Lei

Inventor before: Xia Yunni

COR Change of bibliographic data
TR01 Transfer of patent right

Effective date of registration: 20160623

Address after: 20-3, Building 7, No. 104 Yanggongqiao, Shapingba District, Chongqing 400045

Patentee after: Chongqing Cloud Core Software Technology Co., Ltd.

Address before: Building 4, Unit 1, Ideal Center, No. 38 Tianyi Street, High-tech Zone, Chengdu, Sichuan 610041

Patentee before: CHENGDU GKHB INFORMATION TECHNOLOGY CO., LTD.

C41 Transfer of patent application or patent right or utility model
CB03 Change of inventor or designer information

Inventor after: Luo Xin

Inventor after: Chen Peng

Inventor after: Wu Lei

Inventor after: Xia Yunni

Inventor before: Luo Xin

Inventor before: Xia Yunni

COR Change of bibliographic data
TR01 Transfer of patent right

Effective date of registration: 20160811

Address after: Building 4, Unit 1, Ideal Center, No. 38 Tianyi Street, High-tech Zone, Chengdu, Sichuan 610041

Patentee after: CHENGDU GKHB INFORMATION TECHNOLOGY CO., LTD.

Address before: 20-3, Building 7, No. 104 Yanggongqiao, Shapingba District, Chongqing 400045

Patentee before: Chongqing Cloud Core Software Technology Co., Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150128

Termination date: 20191015