CN103676649A - Local self-adaptive WNN (Wavelet Neural Network) training system, device and method - Google Patents

Local self-adaptive WNN (Wavelet Neural Network) training system, device and method Download PDF

Info

Publication number
CN103676649A
CN103676649A (application CN201310466382.2A)
Authority
CN
China
Prior art keywords
wnn
parameter
module
matrix
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310466382.2A
Other languages
Chinese (zh)
Inventor
REN Shijin
LING Ping
NI Yinlong
WANG Gaofeng
YANG Maoyun
LÜ Junhuai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Normal University
Original Assignee
Jiangsu Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Normal University filed Critical Jiangsu Normal University
Priority to CN201310466382.2A priority Critical patent/CN103676649A/en
Publication of CN103676649A publication Critical patent/CN103676649A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a locally adaptive WNN training system, device and method. The method comprises the steps of online local adaptive adjustment of the WNN structure, online updating of the WNN weights, and selection of the WNN update strategy. According to the invention, the WNN model is updated online while its generalization ability is preserved; the WNN model-adaptation problems caused in practice by various uncertain factors and changes in the operating conditions of the system are overcome; and, when the system, device and method are applied to industrial process control, the operational stability of the system is improved, the fluctuation of product quality is lowered, and the service life of equipment is prolonged.

Description

Locally adaptive wavelet neural network training system, apparatus and method
Technical field
The present invention relates to a locally adaptive wavelet neural network training system, apparatus and method, and especially to a locally adaptive wavelet neural network training system, apparatus and method with high system stability.
Background art
Given a set of sample points, their input-output relationship is represented by a wavelet neural network (WNN) model of the form

$$\hat{y}(x) = \sum_{j=1}^{M} w_j\, \psi\!\left(\frac{\|x - b_j\|}{a_j}\right) \tag{1}$$

where $\psi(\cdot)$ is a radial-basis wavelet function, $b_j$ and $a_j$ are respectively its translation and scale parameters, and $M$ is the number of hidden WNN nodes. When training the WNN, the model's wavelet-neuron candidate set, the wavelet neuron parameters in the candidate set, and the initial wavelet-function parameter values are determined from the result of clustering the data set; for details see [Stephen A. Billings, Hua-Liang Wei. A new class of wavelet networks for nonlinear system identification. IEEE Transactions on Neural Networks, 16(4): 862-870, 2005].
Obviously, a WNN is a three-layer neural network whose model parameters comprise the number of hidden-layer nodes, the hidden-to-output-layer connection weights, and the WNN hidden-node parameters. A WNN has good multi-scale approximation ability and generalization performance; its wavelet node parameters have a clear physical meaning, and its wavelet functions have good local support, so WNNs are widely used in fields such as nonlinear dynamic system modeling and nonlinear classifier modeling. Generalization performance (i.e., prediction accuracy on new samples) is an important measure of WNN quality, and it depends directly on the structural complexity of the WNN (the number of hidden nodes) and on the choice of the hidden-node parameters.
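For concreteness, a minimal numerical sketch of the three-layer WNN of formula (1) follows. The Mexican-hat-style radial wavelet used here is one common choice and an assumption, since this text specifies $\psi$ only via the cited reference:

```python
import numpy as np

def radial_wavelet(r):
    # Mexican-hat-style radial wavelet; an assumed choice of psi, not fixed by this text.
    return (1.0 - r ** 2) * np.exp(-0.5 * r ** 2)

def wnn_predict(X, scales, centers, weights):
    """Three-layer WNN of eq. (1): y_hat(x) = sum_j w_j * psi(||x - b_j|| / a_j)."""
    # X: (N, d) inputs; scales a_j: (M,); centers b_j: (M, d); weights w_j: (M,)
    r = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) / scales[None, :]
    return radial_wavelet(r) @ weights  # (N, M) hidden outputs times (M,) weights
```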
Current WNN training methods fall mainly into the following classes:
(1) WNN training methods based on model selection criteria such as AIC and BIC.
These comprise two main approaches:
A. First choose a large number of redundant wavelet nodes, then, keeping the wavelet node parameters fixed, use the above model selection criterion to determine the WNN structure. Since choosing suitable wavelet node parameters is itself a very difficult problem, this approach can hardly find the optimal WNN model;
B. Use a genetic algorithm (GA) to choose the optimal wavelet node parameters and WNN structure under the above model selection criterion. The computational cost of this approach is too large for practical application.
(2) Empirical-risk-minimization WNN training methods based on gradient descent.
Because the underlying optimization problem is multivariate, nonlinear and non-convex, these methods share the defects of conventional gradient-descent-based neural network training: training is slow, convergence to local minima is likely, and a suitable WNN structure is hard to determine. Although some researchers have improved the WNN training speed to some extent with conjugate gradient descent, the above drawbacks remain unsolved.
(3) Methods using support vector machine (SVM) theory to improve the generalization performance of the WNN.
The SVM is a landmark achievement of statistical learning theory: it has a solid theoretical foundation as well as good generalization performance, and has received great attention from academia and industry. Because the SVM and the WNN share the same structure, the applicant has proved that the radial-basis wavelet function is a kernel function satisfying the Mercer condition and has proposed a multi-scale wavelet SVM (WSVM) modeling method resembling the WNN; the related results were published in the Journal of Circuits and Systems, No. 4, 2008. Owing to computational limitations, that method only provided a WNN modeling method on two scales.
(4) WNN improvements based on multiple kernel learning theory.
In recent years, scholars have proposed multi-kernel SVM methods, which represent the SVM kernel as a linear combination of several kernel functions and obtain the optimal kernel weights and SVM model parameters by solving an optimization problem. When the kernel is a radial-basis wavelet function, the multi-scale SVM and the multi-kernel SVM essentially possess the multi-scale approximation property of the WNN. Although these methods inherit the advantages of the SVM and of multi-scale approximation, they cannot adjust the kernel parameters, which seriously degrades model performance.
It should be noted that none of the above methods can adjust the WNN model structure and parameters online. In practical applications, uncertain factors such as non-uniform sample distribution (insufficient or incomplete training data for some modes), changes in equipment operating conditions, input disturbances, the external environment, and equipment aging cause the prediction accuracy of a trained WNN model to degrade, so the WNN model needs to be trained online.
Although the process characteristics of the modeled object take somewhat different forms in each operating mode (operating region), viewed comprehensively all process characteristics share similar latent characteristics: a "common part" shared across modes, together with a "specific part" describing the latent characteristics peculiar to each operating mode. The present invention therefore adopts different model update strategies according to the pattern changes of the process system, which greatly reduces the time and computational complexity of model learning and thereby realizes online learning of the model.
Appendix: background knowledge.
Optimality criteria of optimal experimental design.
For a given data set, suppose the following linear relationship holds:

$$y = X\beta + \varepsilon$$

where $\varepsilon$ is Gaussian noise (note that $\varepsilon$ is a random variable with mean 0 and variance $\sigma^2$). Optimal experimental design selects the experimental data containing the most information to learn the prediction function, so that the prediction error is minimal. The optimal estimation method uses the least mean-square error as the cost function; its optimal solution is $\hat{\beta} = (X^{\top}X)^{-1}X^{\top}y$.
To guarantee the generalization of the regression model, we also expect the variance of the estimated model parameters to be minimal. Since the model parameter estimate shown above is unbiased, its covariance can be expressed as

$$\operatorname{Cov}(\hat{\beta}) = \sigma^2 \left( X^{\top} X \right)^{-1}$$

Therefore, the prediction variance of the forecast model at a point $x$ is

$$\operatorname{Var}\!\left(\hat{y}(x)\right) = \sigma^2\, x^{\top} \left( X^{\top} X \right)^{-1} x$$
From the above formula, minimizing the prediction variance is equivalent to minimizing the variance of the estimated model parameters. Several optimality criteria for measuring the model parameter variance have appeared; among them, the A-optimality and D-optimality criteria have attracted wide attention.
The A-optimality criterion minimizes the trace of the parameter covariance matrix, which is equivalent to minimizing the average variance.
The D-optimality criterion minimizes the determinant of the parameter covariance matrix.
Because the WNN output is linear in the hidden-node outputs, if the WNN hidden nodes are viewed as data features, the problem of controlling the WNN model complexity is obviously converted into a feature selection problem. Therefore, using the A-optimality method to eliminate redundant WNN hidden nodes can guarantee the generalization performance of the WNN.
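As a minimal illustration of this appendix (not taken from the patent itself), the following sketch computes the least-squares estimate together with the A- and D-optimality scores of a design matrix:

```python
import numpy as np

def ols_design_criteria(X, y, noise_var=1.0):
    """Least-squares fit plus the A- and D-optimality scores of the design X."""
    # beta_hat = (X^T X)^{-1} X^T y, the minimum mean-square-error estimate
    XtX = X.T @ X
    beta_hat = np.linalg.solve(XtX, X.T @ y)
    cov = noise_var * np.linalg.inv(XtX)       # Cov(beta_hat) = sigma^2 (X^T X)^{-1}
    a_score = np.trace(cov)                    # A-optimality: minimize the trace
    d_score = np.linalg.det(cov)               # D-optimality: minimize the determinant
    return beta_hat, a_score, d_score
```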
Manifold regularization.
Manifold learning algorithms can uncover the intrinsic low-dimensional geometric structure hidden in high-dimensional data; each intrinsic dimension corresponds to an explanatory latent variable, so high-dimensional data can be explained by a small number of latent variables. According to the Pareto principle, most key properties of a nonlinear system are captured by the local behavior of the system, so taking the local geometric character of the data set (e.g., distances, angles) into account is an effective way to improve WNN performance. Like linear projection methods, manifold learning algorithms rely on computing a similarity matrix, but their computational complexity is not greatly increased.
For a regression data set $\{(x_i, y_i)\}_{i=1}^{N}$, define the similarity matrix through a graph whose edge-weight matrix is

$$W_{ij} = \begin{cases} 1, & x_j \in N_K(x_i) \ \text{or} \ x_i \in N_K(x_j) \\ 0, & \text{otherwise} \end{cases}$$

where $N_K(x_i)$ is the K-nearest-neighbor set of $x_i$ (for labeled data, edges can additionally be restricted to samples with the same class label). Suppose the samples have a low-dimensional manifold embedding $f = (f_1, \ldots, f_N)^{\top}$.
According to spectral graph theory, the graph regularization factor measures the smoothness of the low-dimensional representation by means of the Laplacian matrix. The Laplacian manifold regularization factor can be expressed as

$$R(f) = \frac{1}{2}\sum_{i,j} W_{ij}\left(f_i - f_j\right)^2 = f^{\top} L f, \qquad L = D - W$$

where $D$ is the diagonal matrix with $D_{ii} = \sum_{j} W_{ij}$. By minimizing the Laplacian manifold regularization factor, the data set in the low-dimensional space preserves the local geometry of the high-dimensional raw data set, which effectively improves learner performance.
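A small sketch of the construction just described, assuming the simple 0/1 K-nearest-neighbor weights given above:

```python
import numpy as np

def knn_laplacian(X, k=4):
    """Graph Laplacian L = D - W with a symmetric K-nearest-neighbor 0/1 weight matrix."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]    # skip self at index 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                     # symmetrize: edge if either is a neighbor
    return np.diag(W.sum(axis=1)) - W

# The regularizer f^T L f then penalizes embeddings f that vary across graph edges.
```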
Summary of the invention
To make the system more stable, the present invention provides a locally adaptive wavelet neural network training system, apparatus and method with high system stability.
To realize this object, the invention provides a locally adaptive wavelet neural network training system
composed of a signal-connected offline WNN training module and online WNN update module.
Preferably, the offline WNN training module establishes the initial WNN model;
the online WNN update module, according to the distribution character of newly arrived data, adopts different WNN model update strategies to predict the data.
The present invention also provides a locally adaptive wavelet neural network training apparatus composed of the following signal-connected modules: a data preprocessing module, an online satisfactory G-K fuzzy clustering module, a wavelet function parameter setting module, a WNN update strategy selection module, a hidden node selection module, an extended Kalman filter (EKF) training module, a Laplacian manifold regularization LSSVM module, an optimal experimental design A-optimality module, a sample-increase WNN weight update module, a sample-removal WNN weight update module, and a WNN prediction module.
Preferably, the function of the data preprocessing module is: its input parameter is a data set, and its output parameter is the normalized data set;
the function of the online satisfactory G-K fuzzy clustering module is:
input parameters: data set, initial membership matrix, number of clusters;
output parameters: number of clusters, membership matrix;
the function of the wavelet function parameter setting module is:
input parameters: membership matrix, data set, number of clusters, node function generation strategy;
output parameter: wavelet function parameter matrix;
the function of the WNN update strategy selection module is:
input parameters: membership matrix at the previous moment, number of clusters at the previous moment, current membership matrix, current number of clusters;
output parameters: membership matrix, number of clusters;
the function of the hidden node selection module is:
input parameters: candidate node set, WNN weight vector, fitting data set;
output parameters: wavelet node parameters, corresponding weights;
the function of the extended Kalman filter (EKF) training module is:
input parameters: wavelet node parameters, algorithm stopping threshold, training data set;
output parameters: wavelet node parameters, corresponding weights;
the function of the Laplacian manifold regularization LSSVM module is:
input parameters: training data set, model parameters λ1 and λ2, matrix L;
output parameter: weight vector;
the function of the optimal experimental design A-optimality module is:
input parameters: training data set, weight vector, the parameter matrix formed by the WNN hidden nodes, candidate node marker vector;
output parameter: node selection marker vector;
the function of the sample-increase WNN weight update module is:
input parameters: sliding-window data set, newly added data, Q matrix, R matrix, WNN hidden node parameter matrix;
output parameters: updated Q matrix, updated R matrix;
the function of the sample-removal WNN weight update module is:
input parameters: Q matrix, R matrix, number of data to remove;
output parameters: updated Q matrix, updated R matrix;
the function of the WNN prediction module is:
input parameters: WNN hidden node parameter matrix, weight matrix, input data vector;
output parameter: predicted output data.
The present invention further provides a locally adaptive wavelet neural network training method, comprising:
S31, online local adaptive WNN structure adjustment;
S32, online WNN weight update;
S33, WNN update strategy selection.
Preferably, S31, online local adaptive WNN structure adjustment, specifically comprises:
S311, choosing the WNN hidden nodes;
S312, controlling the WNN model complexity.
Preferably, S312, controlling the WNN model complexity, specifically comprises:
S3121, WNN weight estimation based on Laplacian manifold regularization LSSVM;
S3122, sequential selection of WNN hidden nodes based on the A-optimality criterion.
Preferably, S32, online WNN weight update, specifically comprises:
S321, the sample-increase update stage;
S322, the sample-removal update stage.
Preferably, S33, WNN update strategy selection, specifically comprises:
S331, initialization;
S332, clustering according to the membership matrix;
S333, judging whether to finish;
S334, finding the sample most dissimilar to the cluster centers as the new cluster center;
S335, calculating the corresponding new initial membership matrix;
S336, incrementing the number of clusters (c = c + 1) and returning to S332.
The technical scheme provided by the embodiments of the present invention brings the following beneficial effects:
1) Based on the above idea, the WNN nodes describing the "common part" remain unchanged during WNN modeling, and only the local WNN structure matching the "specific part" needs adjusting; this greatly reduces the computation required to train the WNN and makes the method suitable for online WNN modeling;
2) wavelet neurons are chosen iteratively from the WNN hidden-node candidate set and added to the WNN, and the extended Kalman filter (EKF) method adjusts the parameters and associated weights of the newly added wavelet nodes;
3) an online WNN weight update algorithm based on sliding-window QR decomposition corrects the model weights online through recursive sample-increase and sample-removal steps;
4) introducing ideas from manifold learning, a WNN complexity control method combining Laplacian regularization with the A-optimality criterion is proposed for the first time; it takes the geometry of the training data set into account and guarantees the generalization performance of the WNN.
In practical application, combining the prediction error with prior knowledge, the WNN structure is adjusted online and the model complexity controlled, or the WNN weights are updated; this guarantees the prediction accuracy of the WNN and effectively overcomes the difficulty existing WNN algorithms have in learning online and in guaranteeing generalization performance.
Brief description of the drawings
The technical scheme of the present invention and its technical effects will become clearer and easier to understand from the following description of preferred embodiments of the invention in conjunction with the accompanying drawings, in which:
Fig. 1 shows the structural representation of the locally adaptive wavelet neural network training system of an embodiment of the invention;
Fig. 2 shows the structural representation of the locally adaptive wavelet neural network training apparatus of an embodiment of the invention;
Fig. 3 shows the method flow chart of the locally adaptive wavelet neural network training method of an embodiment of the invention;
Fig. 4 shows the method flow chart of controlling the WNN model complexity in Fig. 3;
Fig. 5 shows the method flow chart of the online WNN weight update in Fig. 3;
Fig. 6 shows the method flow chart of the WNN update strategy selection in Fig. 3.
Embodiments
The preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be noted that terms such as "left" and "right" are used only for convenience of illustration and description in conjunction with the drawings, and are not limiting.
The first embodiment
Fig. 1 shows the structural representation of the locally adaptive wavelet neural network training system of the first embodiment of the present invention.
This locally adaptive wavelet neural network training system is composed of a signal-connected offline WNN training module and online WNN update module.
The offline WNN training module mainly establishes the initial WNN model.
Its steps are as follows: first, the training data set is clustered with the online satisfactory G-K fuzzy clustering method, and the parameters of multiple wavelet functions are determined from the clustering result. The wavelet function scale and translation parameters are generated randomly from the cluster centers, radii and variances, and these wavelet node functions form the WNN hidden-node candidate set; then an existing WNN training algorithm establishes the initial WNN model as the current WNN.
The online WNN update module, according to the clustering result of newly arrived data, adopts different WNN model update strategies to update the WNN and predict the new data.
Its steps are as follows: the newly arrived data are clustered with the online satisfactory G-K fuzzy clustering method, and according to the clustering result one of the following three strategies is adopted to predict the data:
S11. If the new data belong to the same cluster as the data of previous moments with membership > 0.5, the current WNN is still used to predict the new data.
S12. If the clustering result indicates a newly added cluster, or the new data have transferred to another cluster (membership > 0.5), or the membership in the original cluster has dropped below 0.2 (the data have departed from the current operating-condition model), the online locally adaptive WNN update algorithm is used: first, the optimal wavelet function is selected recursively from the candidate set and added to the hidden layer of the current WNN model, and the EKF trains the parameters of the newly added hidden nodes until the error threshold is met; then Laplacian regularization combined with the A-optimality criterion selects the WNN hidden nodes, and the redundant nodes deleted from the WNN model are returned to the WNN hidden-node candidate set.
S13. If the new data's membership in the original cluster lies between 0.2 and 0.5, the object has been affected by dynamic uncertain factors of the system, and only the WNN weight parameters need updating; within a moving window, the weight update is realized by a sample-increase update stage followed by an old-sample-removal update stage. Whenever the WNN model is updated by S12 or S13, the updated WNN model replaces the existing one as the current WNN for predicting data.
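The three-way dispatch can be summarized in a short sketch; the thresholds 0.5 and 0.2 come from the text above, while the function and flag names are illustrative assumptions:

```python
def choose_update_strategy(membership, is_new_cluster, moved_cluster):
    """Dispatch among S11/S12/S13 from the membership of the new data in its
    original cluster, following the thresholds in the text. Names are illustrative."""
    if is_new_cluster or moved_cluster or membership < 0.2:
        return "S12: adjust local WNN structure (add/prune hidden nodes)"
    if membership > 0.5:
        return "S11: keep current WNN unchanged"
    return "S13: update WNN weights only (sliding-window QR)"
```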
The technical scheme provided by this embodiment of the invention makes full use of historical knowledge (the candidate WNN hidden-node set) to update the WNN model quickly, which facilitates online WNN training.
The second embodiment
Fig. 2 shows the structural representation of the locally adaptive wavelet neural network training apparatus of the second embodiment of the present invention.
This locally adaptive wavelet neural network training apparatus is composed of the following signal-connected modules: a data preprocessing module, an online satisfactory G-K fuzzy clustering module, a wavelet function parameter setting module, a WNN update strategy selection module, a hidden node selection module, an extended Kalman filter (EKF) training module, a Laplacian regularization LSSVM module, an experimental design A-optimality module, a sample-increase WNN weight update module, a sample-removal WNN weight update module, and a WNN prediction module.
The function of the data preprocessing module: its input parameter is a data set, and its output parameter is the normalized data set.
Function of the online satisfactory G-K fuzzy clustering module:
Input parameters: data set, initial membership matrix, number of clusters;
Output parameters: number of clusters, membership matrix.
Function of the wavelet function parameter setting module:
Input parameters: membership matrix, data set, number of clusters, node function generation strategy;
Output parameter: wavelet function parameter matrix.
Function of the update strategy selection module:
Input parameters: membership matrix at the previous moment, number of clusters at the previous moment, current membership matrix, current number of clusters;
Output parameters: membership matrix, number of clusters.
Function of the hidden node selection module:
Input parameters: candidate WNN hidden-node set, WNN weight vector, fitting data set;
Output parameters: wavelet node parameters, corresponding weights.
Function of the extended Kalman filter (EKF) training module:
Input parameters: wavelet node parameters, algorithm stopping threshold, training data set;
Output parameters: wavelet node parameters, corresponding weights.
Function of the Laplacian regularization LSSVM module:
Input parameters: training data set, model parameters λ1 (selected by cross-validation) and λ2 (a suggested range is [2, 20]), matrix L (with the number of neighbors chosen in [3, 5]);
Output parameter: weight vector.
Function of the optimal experimental design A-optimality module:
Input parameters: training data set, weight vector, the parameter matrix formed by the WNN hidden nodes, candidate node marker vector (0 - the node is removed, 1 - the node is a candidate);
Output parameter: node selection marker vector.
Function of the sample-increase WNN weight update module:
Input parameters: sliding-window data set, newly added data, Q matrix, R matrix, WNN hidden node parameter matrix;
Output parameters: updated Q matrix, updated R matrix.
Function of the sample-removal WNN weight update module:
Input parameters: Q matrix, R matrix, number of data to remove;
Output parameters: updated Q matrix, updated R matrix.
Function of the WNN prediction module:
Input parameters: WNN hidden node parameter matrix, weight matrix, input data vector;
Output parameter: predicted output data.
From the relationships among the modules in Fig. 2, combined with the inputs and outputs of the modules above, the information exchange and processing relationships between the modules are easy to analyze, and a technician can realize the method of the invention without creative work.
The apparatus is realized by the above program modules: the user does not need to set system parameters, so it is very easy to use and well maintainable; no additional hardware is required, and existing systems can be conveniently retrofitted in practical applications.
The third embodiment
Fig. 3 shows the method flow chart of the locally adaptive wavelet neural network training method of the third embodiment of the present invention.
This locally adaptive wavelet neural network training method comprises:
S31, online local adaptive WNN structure adjustment;
S32, online WNN weight update;
S33, WNN update strategy selection.
S31, online local adaptive WNN structure adjustment, specifically comprises:
S311, choosing the WNN hidden nodes;
S312, controlling the WNN model complexity.
S311, choosing the WNN hidden nodes: hidden nodes are added one at a time and the WNN parameters adjusted until the fitting error meets a preset threshold. Specifically:
The online locally adaptive WNN adjustment method overcomes the mismatch between the WNN model and the real system that arises when a change of operating mode alters the nonlinear structure of the system. The method progressively chooses from the wavelet-node candidate set the wavelet node whose addition most reduces the WNN approximation error, adds it to the WNN, and then uses the EKF method to adjust the parameters and weight of the newly added hidden node. In detail:
Suppose the current WNN already has $M$ wavelet neurons. For the training sample set, let $e = y - \hat{y}$ denote the WNN approximation error over the samples, and let $g_k$ denote the output vector of candidate node $k$ on the samples. The importance of each node in the candidate set is first estimated by the error decrease produced by projecting $e$ onto $g_k$. The projection of $e$ onto $g_k$ is

$$p_k = \left(e^{\top} \bar{g}_k\right) \bar{g}_k \tag{2}$$

where $\bar{g}_k = g_k / \|g_k\|$ is the normalized vector. Formula (2) can be regarded as an approximation of $e$ along the vector $g_k$, so the corresponding projection (approximation) error is

$$\left\|e - p_k\right\|^2 = \|e\|^2 - \frac{\left(e^{\top} g_k\right)^2}{\|g_k\|^2} \tag{3}$$

Obviously, the node that reduces the error the most is selected:

$$k^{*} = \arg\max_{k} \frac{\left(e^{\top} g_k\right)^2}{\|g_k\|^2 + \lambda} \tag{4}$$

Here $\lambda$ is a very small positive number that can be regarded as a regularization factor whose purpose is to prevent the over-fitting caused when $\|g_k\|$ is too small. Obviously, adding wavelet neurons reduces the approximation error of the WNN on the samples. It can be proved that this algorithm converges (the analysis is omitted), and its convergence rate is given by Theorem 1 below (proof omitted).
Theorem 1. Let the sample set obey some bounded unknown function relation. If there exists a positive number $\delta$ such that the relative error decrease at every iteration is at least $\delta$, then for an arbitrary approximation error threshold the algorithm needs at most

(5) (formula not reproduced)

iterations of adding wavelet neurons before the fitting error no longer exceeds the threshold, where the formula involves the ceiling (rounding) operator and $\delta$ measures the degree of correlation between a node and the sample set: the larger $\delta$, the stronger the correlation between them.
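A direct sketch of the greedy selection rule of formulas (2)-(4):

```python
import numpy as np

def select_wavelet_node(residual, candidates, lam=1e-8):
    """Greedy pick of eq. (4): the candidate whose span best explains the residual.

    residual: (N,) current WNN approximation error e = y - y_hat.
    candidates: (N, K) outputs of the K candidate wavelet nodes on the samples.
    Returns the chosen node's index and the error decrease it yields (eq. (3)).
    """
    num = (residual @ candidates) ** 2                 # (e^T g_k)^2 for each k
    den = np.sum(candidates ** 2, axis=0) + lam        # ||g_k||^2 plus regularization
    k = int(np.argmax(num / den))
    return k, num[k] / den[k]
```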
Theorem 1 shows that the error can be driven to the set value by progressively adding WNN hidden nodes. In practice, however, it is difficult to preset the parameters of the candidate wavelets, so an excessive number of hidden nodes is unavoidable. By Occam's razor, redundant hidden nodes reduce the generalization performance of the WNN model. Let $E_M$ be the fitting error of a WNN with $M$ hidden nodes; after a hidden wavelet node is added, the fitting error decreases at a rate given by

(6) (formula not reproduced)

Because the degree of correlation determines the rate of error decrease and hence the number of WNN hidden nodes, it is necessary to adjust locally the parameters of the currently selected wavelet neuron so as to maximize the correlation of the local wavelet node and thereby reduce the sample fitting error. The present invention uses the EKF method to search for the optimal parameters and associated weight of the newly added wavelet node. Collect the local wavelet neuron's translation, scale and weight into a parameter vector $\theta$. To accelerate the convergence of the training algorithm, a parameter learning-rate factor $\eta$ is introduced; the EKF-based parameter training takes the following form:
$$e_t = d_t - \hat{d}_t \tag{7}$$

$$K_t = P_{t-1} H_t \left( H_t^{\top} P_{t-1} H_t + R_t \right)^{-1} \tag{8}$$

$$\theta_t = \theta_{t-1} + \eta\, K_t e_t \tag{9}$$

$$P_t = \left( I - K_t H_t^{\top} \right) P_{t-1} \tag{10}$$

where $\hat{d}_t$ and $d_t$ are respectively the output of the local wavelet neuron and the desired output; $e_t$ is the training error; $H_t$ is the gradient of the node output with respect to $\theta$; $\eta$ is the learning rate; $K_t$ is the Kalman gain vector; $R_t$ is the estimated noise covariance, which can be obtained recursively; $P_t$ is the state-estimation error covariance matrix; and $t$ is the iteration count of the EKF algorithm. To adapt the learning rate and thereby accelerate convergence, Theorem 2 gives the condition the learning rate must satisfy, which solves the learning-rate selection problem rather well.
Theorem 2. Let $\theta$ be the parameters of a hidden node, $\hat{d}$ the prediction output of the hidden node for a given new sample, $d$ the desired output of the hidden node, $\eta$ the learning rate of the relevant parameters, and $K$ the EKF gain vector. If the learning rate satisfies

(11) (formula not reproduced)

then the EKF-based WNN training algorithm converges uniformly.
Theorem 2 is easily proved by choosing a Lyapunov function based on the squared prediction error of the WNN on the sample.
Because the method adds new hidden nodes progressively and uses a variable-step EKF algorithm to adjust the local wavelet node parameters, convergence is fast: on average the algorithm obtains a satisfactory modeling error in about 40 s of running time.
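A single EKF iteration in the spirit of formulas (7)-(10); the per-node `predict` and `grad` callbacks and the fixed noise covariance are assumptions for illustration:

```python
import numpy as np

def ekf_step(theta, P, x, d, predict, grad, eta=1.0, R=1e-2):
    """One EKF iteration of eqs. (7)-(10) for one wavelet node's parameters theta.

    predict(theta, x) -> scalar node output; grad(theta, x) -> d(output)/d(theta).
    A sketch under the standard EKF equations; the patent's recursion for the
    noise covariance R is not reproduced in this record.
    """
    e = d - predict(theta, x)                     # (7) training error
    H = grad(theta, x)                            # linearization of the node output
    K = P @ H / (H @ P @ H + R)                   # (8) Kalman gain vector
    theta = theta + eta * K * e                   # (9) parameter update with rate eta
    P = P - np.outer(K, H @ P)                    # (10) covariance update (I - K H^T) P
    return theta, P
```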
In practical applications, although progressively increasing the number of WNN hidden nodes effectively reduces the modeling error, redundant hidden nodes inevitably appear, and by Occam's razor too many redundant nodes reduce the generalization performance of the model. Exploiting the local geometric information of the data is an important means of improving WNN performance. Drawing on manifold learning theory, the present invention proposes a WNN model complexity control method combining Laplacian regularization with the A-optimality criterion.
S312, controlling the WNN model complexity, is thus based on the WNN complexity control method that combines Laplacian regularization with the A-optimality criterion.
According to optimal experimental design, under the premise that the WNN hidden-node parameters are held fixed, minimizing the variance of the estimated model parameters is equivalent to selecting the key features of the data set. Based on the A-optimality criterion of optimal experimental design and considering the local geometry of the data set, the present invention proposes a two-step WNN hidden-node selection method: first, the least-squares support vector machine (LSSVM) based on Laplacian manifold regularization estimates the WNN regression weight parameters; then the A-optimality criterion of minimal estimated-parameter variance selects the WNN hidden nodes sequentially.
As shown in Fig. 4, S312, controlling the WNN model complexity, specifically comprises:
S3121, WNN weight estimation based on Laplacian regularization LSSVM:
For the data set above, suppose the WNN has $M$ hidden nodes. For an input sample $x_i$, denote the WNN output by $\hat{y}_i$ and the hidden-node outputs by $h_1(x_i), \ldots, h_M(x_i)$. The vector formed by the outputs of hidden node $j$ over the data set constitutes the $j$-th feature of the data set, and all $M$ such features form the feature set. Obviously, the $j$-th feature of the data set corresponds to the $j$-th hidden node of the WNN, so selecting an important WNN node is equivalent to selecting a key feature from the feature set. Suppose $m$ features are selected and let $H$ be the sample matrix formed from the selected features, defined by

$$H_{ij} = h_j(x_i), \qquad i = 1, \ldots, N,\; j = 1, \ldots, m \tag{12}$$

In the selected feature space, the regression model is then

$$y = H w + \varepsilon$$

where $\varepsilon$ is an error of unknown distribution with mean 0; the errors of different data samples are assumed independent but with identical variance. To guarantee the generalization performance of the selected nodes and take the local geometry of the data into account, the optimal weight $w$ is obtained with the Laplacian-regularization-based LSSVM, whose optimization problem takes the following form:

$$\min_{w}\; \|y - H w\|^2 + \lambda_1 \|w\|^2 + \lambda_2 \,(H w)^{\top} L\, (H w) \tag{13}$$

where $\lambda_1$ and $\lambda_2$ are regularization factors, and the matrix $L$ and the weights are defined as in the background section. Differentiating the objective function and setting the derivative to 0 gives the optimal solution

$$w^{*} = \left( H^{\top} H + \lambda_1 I + \lambda_2 H^{\top} L H \right)^{-1} H^{\top} y \tag{14}$$

where $I$ is the identity matrix.
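A one-line solver for formula (14); the regularization values here are placeholders (the second embodiment suggests choosing them by cross-validation):

```python
import numpy as np

def laplacian_lssvm_weights(H, y, L, lam1=1e-3, lam2=1e-2):
    """Eq. (14): w = (H^T H + lam1*I + lam2*H^T L H)^{-1} H^T y.

    H: (N, m) selected hidden-node outputs; L: (N, N) graph Laplacian.
    """
    m = H.shape[1]
    A = H.T @ H + lam1 * np.eye(m) + lam2 * H.T @ L @ H
    return np.linalg.solve(A, H.T @ y)
```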
S3122, sequential selection of WNN hidden nodes based on the A-optimality criterion.
Notice that $M_H = H^{\top} H + \lambda_1 I + \lambda_2 H^{\top} L H$ is symmetric positive definite. From formula (14), the bias and variance of the estimated parameter are

(15), (16) (formulas not reproduced)

According to formula (14), the predicted value is $\hat{y} = H w^{*}$, so the variance of the prediction error is

(17) (formula not reproduced)

Substituting the variance of the parameter estimation error,

(18) (formula not reproduced)

into formula (17), and noting that in general the regularization coefficient is set rather small while the error penalty coefficient is set rather large, and that $M_H$ is a positive definite matrix, we have

(19), (20) (formulas not reproduced)

Based on the optimal experimental design principle, we expect the selected feature subset to minimize the covariance matrix of the estimated parameters; minimizing it also minimizes the prediction error on new samples. This problem is equivalent to the A-optimality criterion: minimizing the trace of the parameter covariance matrix, i.e., of $\left( H^{\top} H + \lambda_1 I + \lambda_2 H^{\top} L H \right)^{-1}$, over the candidate node subsets.
For the matrices above, $L$ is positive semidefinite, so $M_H$ is positive definite and invertible. Applying the Woodbury formula we obtain

(21) (formula not reproduced)

and, noticing the structure of the resulting expression, we can obtain

(22) (formula not reproduced)

Since the term that does not depend on the selected nodes is constant, selecting the WNN hidden nodes is converted into the following optimization problem:

(23) (formula not reproduced)

Noticing that each term involves only a single selected feature, the optimization problem (23) is converted into

(24) (formula not reproduced)
The above optimization problem is solved by sequential optimization. Suppose some nodes have already been selected; the next node can then be chosen through the following optimization problem:

(25) (formula not reproduced)

Applying the Woodbury and Sherman-Morrison formulas to the resulting rank-one update, we obtain

(26) (formula not reproduced)

and, since the term common to all candidates is a constant matrix, the sequential optimization problem (25) is converted into

(27) (formula not reproduced)

In this way, the important WNN hidden nodes can be chosen one by one by solving the optimization problem above.
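Since formulas (23)-(27) are not reproduced in this record, the sketch below shows only the general sequential A-optimality scheme the text describes: greedily pick the candidate node whose inclusion minimizes the trace of the regularized inverse. The patent accelerates each step with Woodbury/Sherman-Morrison updates; this sketch simply recomputes the trace:

```python
import numpy as np

def a_optimal_select(G, L, n_select, lam1=1e-3, lam2=1e-2):
    """Greedy A-optimal selection of WNN hidden nodes (candidate output columns).

    G: (N, K) matrix whose column k is candidate node k evaluated on the data;
    L: (N, N) graph Laplacian. At each step, pick the candidate whose inclusion
    minimizes Tr[(H^T H + lam1*I + lam2*H^T L H)^{-1}].
    """
    selected, remaining = [], list(range(G.shape[1]))
    for _ in range(n_select):
        best, best_score = None, np.inf
        for k in remaining:
            H = G[:, selected + [k]]
            m = H.shape[1]
            M = H.T @ H + lam1 * np.eye(m) + lam2 * H.T @ L @ H
            score = np.trace(np.linalg.inv(M))   # A-optimality objective
            if score < best_score:
                best, best_score = k, score
        selected.append(best)
        remaining.remove(best)
    return selected
```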
This method is proposed by the applicant for the first time. Compared with the prior art, its ideas and techniques are advanced: it can adjust the WNN complexity online and solve for the optimal WNN parameter values, and it is particularly suitable for system identification settings in which the nonlinear structure of the system changes, unmodeled dynamic uncertainty exists, and the data set has manifold structure.
S32, online WNN weight update.
To overcome the effects of equipment dynamics and uncertain factors on the industrial object during operation, avoid using large amounts of computer memory, and reduce the influence of old samples on the model, the present invention adopts an online weight update algorithm based on a fixed-length moving window: after a new sample is added, the least important sample must be removed from the training samples. The online WNN weight update algorithm assumes that a WNN model containing $M$ wavelet neurons has first been obtained from the training sample set.
As shown in Fig. 5, S32, online WNN weight update, specifically comprises:
S321, the sample-increase update stage;
S322, the sample-removal update stage.
S321, the sample-increase update stage, specifically comprises:
Let the new sample be $(x_{new}, y_{new})$ and let the WNN hidden-node output for it be $h(x_{new})$. If the QR decomposition of the current hidden-output matrix $A$ is $A = QR$, then the upper triangular factor of the QR decomposition of the matrix augmented with the new row can be obtained row by row from

(28) (formulas not reproduced)

where the quantities involved are the rows of the matrix. Now suppose some sample is to be removed. Because the final prediction model is independent of sample order, that sample is first exchanged with the first sample, and then the recursive form after removing the sample is given. If matrix $A$ has QR decomposition $A = QR$, and the QR decomposition of the matrix after exchanging rows of $A$ is $\tilde{A} = \tilde{Q}\tilde{R}$, where $\tilde{Q}$ is an orthogonal matrix and $\tilde{R}$ an upper triangular matrix, then $\tilde{Q}$ is obtained from $Q$ by the corresponding row and column exchange. In this way, the least important sample is moved to the first row, so only the first-row sample needs to be removed.
S322, the sample-removal update stage, specifically comprises:
the recursive QR decomposition of the system matrix after the first-row sample is removed. Given the form of the WNN hidden-node output vectors of the samples and the QR decomposition of the current matrix, the upper triangular factor of the QR decomposition of the reduced matrix can again be obtained row by row:

(formulas not reproduced)

where the quantities involved are the rows of the matrix. By the method above, the recursive QR decomposition of the system matrix after removing a sample is easy to obtain, and the updated weights then follow. If the oldest sample is removed from the training samples exactly when a new one is added, the sample-increase and sample-removal update steps can be merged into a single step.
After the calculations of the above two stages, the WNN weights are updated by

$$w = R^{-1} Q^{\top} y$$

This algorithm avoids forming and inverting the normal-equation matrix: only the QR decompositions need to be computed recursively, and since inverting an upper triangular matrix is very cheap, the algorithm fully meets the requirements of online learning.
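SciPy's rank-one QR row updates give a compact stand-in for the row-by-row recursions of formula (28), which this record does not reproduce; a sketch of one sliding-window step follows:

```python
import numpy as np
from scipy.linalg import qr, qr_insert, qr_delete

def sliding_window_weights(Q, R, y, h_new, y_new):
    """One sliding-window step: append the new sample's hidden-output row,
    drop the oldest row, and re-solve w = R^{-1} Q^T y.

    Q, R: full QR of the (window x M) hidden-output matrix; y: window targets.
    """
    Q, R = qr_insert(Q, R, h_new, Q.shape[0], which='row')   # sample-increase stage
    Q, R = qr_delete(Q, R, 0, which='row')                   # remove oldest (first) row
    y = np.append(y[1:], y_new)
    m = R.shape[1]
    w = np.linalg.solve(R[:m], (Q.T @ y)[:m])                # triangular back-solve
    return Q, R, y, w

# Initialization for the first window, e.g. (illustrative helper name):
#   H0 = hidden_outputs(window_inputs)   # (n, M) hidden-node outputs
#   Q, R = qr(H0)
```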
S33, WNN update strategy selection.
For a new data sample, whether the WNN model should be updated, and which of the above update strategies to take, must be decided according to some criterion. The existing update criterion decides whether to update, and which update strategy to use, according to whether the relative prediction error within a set time period exceeds a predefined threshold.
Conventional methods must determine in advance the length of the evaluation time period and two threshold parameters, and how to choose these parameters remains an open problem. Considering that complex-class data overlap considerably, while data clustering results have clear physical meaning, are reliable, and describe the data distribution character well, the present invention uses the online satisfactory G-K fuzzy clustering method [Li et al., Modeling a pH neutralization process using fuzzy satisfactory clustering. Control and Decision, 2002] to decide between the WNN weight-parameter update and the WNN structure-adjustment learning strategy. Compared with existing update-strategy selection criteria, the proposed method selects the WNN update strategy from accurate data clustering results, is suitable for online clustering of complex data distributions, and needs no preset number of clusters, overcoming the shortcomings of existing methods. Another significant advantage of the method is that the scale and translation parameters of the wavelet functions are selected according to the clustering result, which greatly reduces the time required to adjust parameters.
Suppose each sample is denoted $z_i = (u_i, y_i)$, where $u_i$ is the system input and $y_i$ is the system output; regarding each input-output pair as one sample, the sample set is expressed as $\{z_i\}$.
As shown in Fig. 6, S33, WNN update strategy selection, based on online satisfactory G-K fuzzy clustering, is implemented in the following steps:
S331, initialization: set the initial number of clusters $c$, the algorithm termination threshold, and the initial membership matrix.
S332, clustering according to the membership matrix: starting from the initial membership matrix, solve the satisfactory G-K fuzzy clustering optimization problem

(formula not reproduced)

where $u_{ik}$ is the membership value, $z_k$ is a sample, $v_i$ is a cluster center, and $m$ is the fuzziness degree; the cluster covariance matrices enter through the G-K distance. Calculate the membership matrix, then assign each sample to the cluster in which its membership is maximal, dividing the sample set into $c$ subsets.
S333, judging whether to finish: calculate the current value of the given system performance index; when it reaches the predetermined threshold, the algorithm finishes, otherwise the algorithm proceeds to the next step. The satisfaction degree of the clustering is generally taken as the performance index, with the threshold chosen in advance.
S334, finding a new cluster: according to the membership matrix, find a sample that is dissimilar to every existing cluster. To avoid noise, several similar samples should generally be found and their mean value taken as the new cluster center.
S335, taking this as the initial center of the new cluster, calculate the corresponding new initial membership matrix.
S336, set $c = c + 1$ and return to S332.
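The G-K clustering step relies on a covariance-adapted (Mahalanobis-type) distance. A sketch of the standard Gustafson-Kessel distance assumed here; the "satisfactory" variant then adds clusters, as above, until the satisfaction index is met:

```python
import numpy as np

def gk_distance(z, v, F, rho=1.0):
    """Squared Gustafson-Kessel distance of sample z to center v with cluster
    covariance F: d^2 = (z - v)^T A (z - v), A = (rho * det(F))^(1/n) * inv(F).
    This is the standard G-K norm-inducing matrix."""
    n = len(z)
    A = (rho * np.linalg.det(F)) ** (1.0 / n) * np.linalg.inv(F)
    d = z - v
    return d @ A @ d
```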
For a new sample, the update strategy selection method of the present invention is as follows:
(1) keep the WNN unchanged: when the membership of the sample in the current cluster is greater than 0.5;
(2) adjust the local structure of the WNN: when a new cluster must be added or the sample belongs to another cluster;
(3) update the WNN weights: when the sample lies in the overlap region of several clusters (generally, when its membership in the current cluster falls in the interval (0.2, 0.5)).
The present invention makes low demands on CPU performance and can be implemented and applied in embedded systems, which greatly broadens its range of application, e.g., as a classifier in pattern recognition or for interpolation and fitting of complex nonlinear systems.
The present invention can update the WNN model online while guaranteeing the generalization performance of the model; it overcomes well the WNN model-adaptation problems caused in practice by various uncertain factors and changes of operating conditions. Applied to industrial process control, it can increase operational stability, reduce fluctuation of product quality, and prolong equipment life, yielding good economic benefits.
For those skilled in the art, as technology develops, the concept of the present invention can be realized in different ways. The embodiments of the present invention are not limited to those described above and may vary within the scope of the claims.

Claims (9)

1. A locally adaptive wavelet neural network training system, characterized in that
the locally adaptive wavelet neural network training system is composed of a signal-connected offline WNN training module and online WNN update module.
2. The locally adaptive wavelet neural network training system as claimed in claim 1, characterized in that
the offline WNN training module establishes the initial WNN model;
the online WNN update module, according to the distribution character of newly arrived data, adopts different WNN model update strategies to predict the data.
3. A locally adaptive wavelet neural network training apparatus, characterized in that
the locally adaptive wavelet neural network training apparatus is composed of the following signal-connected modules: a data preprocessing module, an online satisfactory G-K fuzzy clustering module, a wavelet function parameter setting module, a WNN update strategy selection module, a hidden node selection module, an extended Kalman filter (EKF) training module, a Laplacian regularization LSSVM module, an experimental design A-optimality module, a sample-increase WNN weight update module, a sample-removal WNN weight update module, and a WNN prediction module.
4. The locally adaptive wavelet neural network training apparatus as claimed in claim 3, characterized in that:
the function of the data preprocessing module is: its input parameter is a data set, and its output parameter is the normalized data set;
the function of the online satisfactory G-K fuzzy clustering module is:
input parameters: data set, initial membership matrix, number of clusters;
output parameters: number of clusters, membership matrix;
the function of the wavelet function parameter setting module is:
input parameters: membership matrix, data set, number of clusters, node function generation strategy;
output parameter: wavelet function parameter matrix;
the function of the WNN update strategy selection module is:
input parameters: membership matrix at the previous moment, number of clusters at the previous moment, current membership matrix, current number of clusters;
output parameters: membership matrix, number of clusters;
the function of the hidden node selection module is:
input parameters: candidate node set, WNN weight vector, fitting data set;
output parameters: wavelet node parameters, corresponding weights;
the function of the extended Kalman filter (EKF) training module is:
input parameters: wavelet node parameters, algorithm stopping threshold, training data set;
output parameters: wavelet node parameters, corresponding weights;
the function of the Laplacian regularization LSSVM module is:
input parameters: training data set, model parameters λ1 and λ2, matrix L;
output parameter: weight vector;
the function of the experimental design A-optimality module is:
input parameters: training data set, weight vector, the parameter matrix formed by the WNN hidden nodes, candidate node marker vector;
output parameter: node selection marker vector;
the function of the sample-increase WNN weight update module is:
input parameters: sliding-window data set, newly added data, Q matrix, R matrix, WNN hidden node parameter matrix;
output parameters: updated Q matrix, updated R matrix;
the function of the sample-removal WNN weight update module is:
input parameters: Q matrix, R matrix, number of data to remove;
output parameters: updated Q matrix, updated R matrix;
the function of the WNN prediction module is:
input parameters: WNN hidden node parameter matrix, weight matrix, input data vector;
output parameter: predicted output data.
5. A locally adaptive wavelet neural network training method, characterized in that
the locally adaptive wavelet neural network training method comprises:
S31, online local adaptive WNN structure adjustment;
S32, online WNN weight update;
S33, WNN update strategy selection.
6. The locally adaptive wavelet neural network training method as claimed in claim 5, characterized in that
S31, online local adaptive WNN structure adjustment, specifically comprises:
S311, choosing the WNN hidden nodes;
S312, controlling the WNN model complexity.
7. The locally adaptive wavelet neural network training method as claimed in claim 6, characterized in that
S312, controlling the WNN model complexity, specifically comprises:
S3121, WNN weight estimation based on Laplacian regularization LSSVM;
S3122, sequential selection of WNN hidden nodes based on the A-optimality criterion.
8. The locally adaptive wavelet neural network training method as claimed in claim 5, characterized in that
S32, online WNN weight update, specifically comprises:
S321, a sample-increase update stage;
S322, a sample-removal update stage.
9. The locally adaptive wavelet neural network training method as claimed in claim 5, characterized in that
S33, WNN update strategy selection, specifically comprises:
S331, initialization;
S332, clustering according to the membership matrix;
S333, judging whether to finish;
S334, finding a new cluster;
S335, calculating the corresponding new initial membership matrix;
S336, incrementing the number of clusters and returning to S332.
CN201310466382.2A 2013-10-09 2013-10-09 Local self-adaptive WNN (Wavelet Neural Network) training system, device and method Pending CN103676649A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310466382.2A CN103676649A (en) 2013-10-09 2013-10-09 Local self-adaptive WNN (Wavelet Neural Network) training system, device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310466382.2A CN103676649A (en) 2013-10-09 2013-10-09 Local self-adaptive WNN (Wavelet Neural Network) training system, device and method

Publications (1)

Publication Number Publication Date
CN103676649A true CN103676649A (en) 2014-03-26

Family

ID=50314559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310466382.2A Pending CN103676649A (en) 2013-10-09 2013-10-09 Local self-adaptive WNN (Wavelet Neural Network) training system, device and method

Country Status (1)

Country Link
CN (1) CN103676649A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598552A (en) * 2014-12-31 2015-05-06 大连钜正科技有限公司 Method for learning incremental update-supported big data features
CN104915566A (en) * 2015-06-17 2015-09-16 大连理工大学 Design method for depth calculation model supporting incremental updating
CN105490764A (en) * 2015-12-11 2016-04-13 中国联合网络通信集团有限公司 Channel model correction method and apparatus
CN108226887A (en) * 2018-01-23 2018-06-29 哈尔滨工程大学 A kind of waterborne target rescue method for estimating state in the case of observed quantity transient loss
CN111433689A (en) * 2017-11-01 2020-07-17 卡里尔斯公司 Generation of control system for target system
CN112417722A (en) * 2020-11-13 2021-02-26 华侨大学 Sliding window NPE-based linear time-varying structure working mode identification method
CN113093540A (en) * 2021-03-31 2021-07-09 中国科学院光电技术研究所 Sliding mode disturbance observer design method based on wavelet threshold denoising
CN113420815A (en) * 2021-06-24 2021-09-21 江苏师范大学 Semi-supervised RSDAE nonlinear PLS intermittent process monitoring method
WO2022121030A1 (en) * 2020-12-10 2022-06-16 广州广电运通金融电子股份有限公司 Central party selection method, storage medium, and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5268834A (en) * 1991-06-24 1993-12-07 Massachusetts Institute Of Technology Stable adaptive neural network controller
GB2386437A (en) * 2002-02-07 2003-09-17 Fisher Rosemount Systems Inc Adaptation of Advanced Process Control Blocks in Response to Variable Process Delay
CN103064292A (en) * 2013-01-15 2013-04-24 镇江市江大科技有限责任公司 Biological fermentation adaptive control system and control method based on neural network inverse
CN103279038A (en) * 2013-06-19 2013-09-04 河海大学常州校区 Self-adaptive control method of sliding formwork of micro gyroscope based on T-S fuzzy model
CN103324091A (en) * 2013-06-03 2013-09-25 上海交通大学 Multi-model self-adaptive controller and control method of zero-order closely-bounded nonlinear multivariable system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5268834A (en) * 1991-06-24 1993-12-07 Massachusetts Institute Of Technology Stable adaptive neural network controller
GB2386437A (en) * 2002-02-07 2003-09-17 Fisher Rosemount Systems Inc Adaptation of Advanced Process Control Blocks in Response to Variable Process Delay
CN103064292A (en) * 2013-01-15 2013-04-24 镇江市江大科技有限责任公司 Biological fermentation adaptive control system and control method based on neural network inverse
CN103324091A (en) * 2013-06-03 2013-09-25 上海交通大学 Multi-model self-adaptive controller and control method of zero-order closely-bounded nonlinear multivariable system
CN103279038A (en) * 2013-06-19 2013-09-04 河海大学常州校区 Self-adaptive control method of sliding formwork of micro gyroscope based on T-S fuzzy model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
S. CHEN et al.: "Orthogonal-least-squares regression: a unified approach for data modeling", Neurocomputing, 31 December 2009, pages 2670-2681 *
WANG Gaofeng: "Research on predictive control technology for thermal power plant combustion systems", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 12, 15 December 2011 (2011-12-15), pages 26-40 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598552A (en) * 2014-12-31 2015-05-06 大连钜正科技有限公司 Method for learning incremental update-supported big data features
CN104915566A (en) * 2015-06-17 2015-09-16 大连理工大学 Design method for depth calculation model supporting incremental updating
CN105490764A (en) * 2015-12-11 2016-04-13 中国联合网络通信集团有限公司 Channel model correction method and apparatus
CN105490764B (en) * 2015-12-11 2018-05-11 中国联合网络通信集团有限公司 A kind of channel model bearing calibration and device
CN111433689A (en) * 2017-11-01 2020-07-17 卡里尔斯公司 Generation of control system for target system
CN108226887B (en) * 2018-01-23 2021-06-01 哈尔滨工程大学 Water surface target rescue state estimation method under condition of transient observation loss
CN108226887A (en) * 2018-01-23 2018-06-29 哈尔滨工程大学 A kind of waterborne target rescue method for estimating state in the case of observed quantity transient loss
CN112417722A (en) * 2020-11-13 2021-02-26 华侨大学 Sliding window NPE-based linear time-varying structure working mode identification method
CN112417722B (en) * 2020-11-13 2023-02-03 华侨大学 Sliding window NPE-based linear time-varying structure working mode identification method
WO2022121030A1 (en) * 2020-12-10 2022-06-16 广州广电运通金融电子股份有限公司 Central party selection method, storage medium, and system
CN113093540A (en) * 2021-03-31 2021-07-09 中国科学院光电技术研究所 Sliding mode disturbance observer design method based on wavelet threshold denoising
CN113093540B (en) * 2021-03-31 2022-06-28 中国科学院光电技术研究所 Sliding mode disturbance observer design method based on wavelet threshold denoising
CN113420815A (en) * 2021-06-24 2021-09-21 江苏师范大学 Semi-supervised RSDAE nonlinear PLS intermittent process monitoring method
CN113420815B (en) * 2021-06-24 2024-04-30 江苏师范大学 Nonlinear PLS intermittent process monitoring method of semi-supervision RSDAE

Similar Documents

Publication Publication Date Title
CN103676649A (en) Local self-adaptive WNN (Wavelet Neural Network) training system, device and method
Kim et al. A capsule network for traffic speed prediction in complex road networks
Wang et al. A novel hybrid approach for wind speed prediction
Ren et al. Optimal parameters selection for BP neural network based on particle swarm optimization: A case study of wind speed forecasting
Xu et al. Accurate and interpretable bayesian mars for traffic flow prediction
Quan et al. Particle swarm optimization for construction of neural network-based prediction intervals
Khosravi et al. Constructing optimal prediction intervals by using neural networks and bootstrap method
Chamroukhi et al. A hidden process regression model for functional data description. Application to curve discrimination
Khaled et al. TFGAN: Traffic forecasting using generative adversarial network with multi-graph convolutional network
Fieldsend et al. The rolling tide evolutionary algorithm: A multiobjective optimizer for noisy optimization problems
Langone et al. Incremental kernel spectral clustering for online learning of non-stationary data
Zhao et al. A self-organizing forecast of day-ahead wind speed: Selective ensemble strategy based on numerical weather predictions
CN108564326A (en) Prediction technique and device, computer-readable medium, the logistics system of order
Liu et al. Simulation optimization based on Taylor Kriging and evolutionary algorithm
Zhang et al. Improved most likely heteroscedastic Gaussian process regression via Bayesian residual moment estimator
Csikós et al. Traffic speed prediction method for urban networks—An ANN approach
Li et al. Self-paced ARIMA for robust time series prediction
Kultur et al. Ensemble of neural networks with associative memory (ENNA) for estimating software development costs
Zhu et al. Time-varying interval prediction and decision-making for short-term wind power using convolutional gated recurrent unit and multi-objective elephant clan optimization
Sun et al. A two stages prediction strategy for evolutionary dynamic multi-objective optimization
Chen et al. Fractional-order convolutional neural networks with population extremal optimization
Zhou et al. Integrated dynamic framework for predicting urban flooding and providing early warning
Sapnken et al. Learning latent dynamics with a grey neural ODE prediction model and its application
CN113255963A (en) Road surface use performance prediction method based on road element splitting and deep learning model LSTM
Ni et al. A self-organising mixture autoregressive network for FX time series modelling and prediction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140326