CN112330044A - Support vector regression model based on iterative aggregation grid search algorithm - Google Patents
- Publication number
- CN112330044A CN112330044A CN202011286631.6A CN202011286631A CN112330044A CN 112330044 A CN112330044 A CN 112330044A CN 202011286631 A CN202011286631 A CN 202011286631A CN 112330044 A CN112330044 A CN 112330044A
- Authority
- CN
- China
- Prior art keywords
- grid
- parameter
- search
- algorithm
- svr
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
The invention discloses a support vector regression model based on an iterative aggregated grid search algorithm (IFGS-SVR). An iterative aggregated grid search (IFGS) algorithm is proposed to solve the refined selection problem of the SVR hyper-parameters: it searches for the optimal sub-area by checking the performance of each sub-area, thereby avoiding a great deal of wasted grid settings. Using a real power-load data set from a county of Jiangxi province, China, the resulting IFGS-SVR model is compared with SVR models whose parameters are obtained by grid search (GS-SVR), particle swarm optimization (PSO-SVR), simulated annealing (SA-SVR), differential evolution (DE-SVR), ant colony optimization (ACO-SVR), and the genetic algorithm (GA-SVR). The experimental results reveal that the IFGS-SVR model is superior to the other models in both accuracy and runtime.
Description
Technical Field
The invention relates to the technical field of short-term power load prediction, in particular to a support vector regression model based on an iterative aggregation grid search algorithm.
Background
Short-term power load forecasting is a global concern, because electric utilities around the world are responsible for meeting the power demands of their customers. Accurate load prediction is not only the basis for setting electricity prices but also a necessary condition for planning, managing, and operating power supply systems. Since electrical energy is generated and consumed simultaneously, the two must be kept in balance. Insufficient supply increases the probability of local outages; excessive supply wastes electric energy. Accurate power load prediction therefore avoids unnecessary power consumption, reduces the waste of power system resources, and accords with sustainable economic development. Short-term load prediction provides a powerful guarantee for the effective management of power companies and the stable operation of society. It is therefore of great practical significance to improve the accuracy of short-term power load prediction.
Power load prediction refers to analyzing the development trend of future load consumption by comprehensively considering the influence of various factors (economy, politics, environment, historical load, day type, and so on) on load consumption behavior. Broadly, power load prediction methods fall into two main categories: statistical methods and artificial intelligence methods. Statistical methods, such as the auto-regressive moving average (ARMA), auto-regressive integrated moving average (ARIMA), exponential smoothing, and linear regression, are relatively simple to model compared with artificial intelligence methods: they do not require complex parameter selection processes, or can achieve good results with default parameters. Based on statistical analysis techniques, they can quickly obtain good predictions on data with small sample sizes and simple relationships. Today, however, electrical loads are affected by a variety of factors and exhibit nonlinear and stochastic patterns, which increases the complexity of predictive models. Simple statistical methods are therefore increasingly unsuitable for load prediction in modern society.
The unique features of artificial intelligence methods enable them to capture the potential nonlinear relationships between the electrical load and the variables used in its modeling. Artificial intelligence methods include knowledge-based expert systems, fuzzy inference, artificial neural networks (ANN), and support vector regression (SVR). Among them, ANN and SVR receive much attention due to their strong ability to process nonlinear data; accordingly, they are employed by many scholars and combined with other techniques to improve prediction performance. Khwaja et al. propose a neural network model combining bagging and boosting, and show that it reduces prediction errors compared with existing load prediction methods. Some researchers have compared the performance of neural networks with ARIMA models on a particular building and found the prediction accuracy of neural networks to be 22.6% higher than that of ARIMA. Neural networks have been developed further: Chitalia et al. propose a recurrent neural network framework for load prediction of different types of commercial buildings in different countries, and Huang et al. propose a probabilistic convolutional neural network prediction model for the New England area. However, a neural network has many parameters to be determined, such as the number of layers, the number of neurons in the hidden layers, the activation function, and the learning rate; if these are not properly selected, there is a risk of under-fitting or over-fitting.
SVR is a main representative of artificial intelligence methods and has better generalization ability than neural networks. SVR originated as the extension of the support vector machine (SVM) to regression. It has been widely used in real life and performs excellently in permeability estimation, wind speed prediction, air quality prediction, COVID-19 case prediction, and so on. The performance of SVR with a nonlinear kernel transform depends largely on its parameter settings, while the complexity of training is O(N³) (N is the number of training samples). Therefore, in many studies, the SVR parameters are the key research question, which is also one of the main inventive contents of the present invention. Scholars have combined parameter optimization techniques with SVR to achieve better results. Wang et al. propose an SVR model combined with the differential evolution (DE) algorithm for annual load prediction, and the results show that its performance is superior to the BP neural network model and the regression model. Yang et al. propose a sequential grid approach for support vector regression (SGA-SVR) and demonstrate its higher performance compared with an SVR model using default parameters.
At present, besides the parameter optimization techniques mentioned above, there are other typical parameter optimization techniques, including the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), simulated annealing (SA), and the like. Applying these techniques to practical problems, Akane et al. use the PSO algorithm to select the parameters of an SVR model for reservoir permeability prediction. Hong et al. use the PSO algorithm to optimize the parameter combination of an SVR model, which effectively ensures that the proposed SVR-PSO model has acceptable prediction accuracy. Wen uses the ACO algorithm to optimize the initial weights and thresholds of an extreme learning machine network for wind turbines. Most of these optimization algorithms are meta-heuristics, which usually rely on random search techniques. They can be applied to a very wide range of problems but do not guarantee the reliability of each optimization run. A commonly used alternative is the grid search method (GS), a conventional parameter optimization method. However, since the grid points are uniformly distributed, all grid points must be refined to realize a fine search, which wastes a large number of grid settings in the parameter optimization process and greatly reduces computational efficiency. Therefore, aiming at the two problems that grid search has low computational efficiency and that meta-heuristic algorithms cannot ensure the reliability of optimization, the invention proposes an iterative aggregated grid search (IFGS) algorithm, which searches for the optimal sub-area by checking the performance of each sub-area, thereby avoiding a great deal of wasted grid settings. The main idea is as follows: in the first iteration, the algorithm searches for the optimal sub-area within a relatively large grid area. Then the optimal sub-area is searched again within that sub-area, thereby realizing the dynamic aggregation of the grid area. Since this search strategy does not perform a fine search over the entire grid, the time complexity of the algorithm can be significantly reduced while the reliability of the optimization is ensured.
Disclosure of Invention
The present invention aims to solve the above problems by providing a support vector regression model based on an iterative aggregated grid search algorithm.
The invention realizes the purpose through the following technical scheme:
the invention comprises the following steps:
S1: the function of support vector regression is defined as:
f(x)=ωψ(x)+b (1)
where ω is a weight vector, b is a constant, and the following expression is defined as the optimization function;
called the ε-insensitive loss function, where ε is the width of the pipeline and C is a penalty factor; two slack variables ξ and ξ* are therefore introduced, and the following expression can be obtained:
the optimization function is a quadratic programming problem; Lagrange multipliers are introduced, and the problem is converted into its dual space for solving:
therefore, the original optimization problem can be converted into an unconstrained form; the optimization target satisfies the KKT conditions, that is, the Lagrange dual is used to convert the optimization problem into an equivalent dual problem. The solving process is as follows: first, the optimization function L is minimized with respect to ω, b, ξ, and ξ*; then the maximum of the optimization function L with respect to the Lagrange multipliers α, α*, β, β* is calculated. The above process needs to satisfy the KKT conditions. Finally, the solution of support vector regression is obtained as
S2: using a Radial Basis Function (RBF) kernel: K(xi, xj) = exp(-γ||xi − xj||²) (7)
Three parameters (C, γ, ε) need to be determined before modeling;
S3: grid search method: in the grid search method, assuming that the numbers of all possible values of the parameters C, γ, and ε in the grid are K, L, and M respectively, the number of three-dimensional grid points is K × L × M. Therefore, the time complexity of the parameter optimization of the grid search method can be given by the following expression:
T1(n)=O(K*L*M) (8)
assuming that K = L = M, equation (8) can be converted to:
T1(n)=O(K3) (9)
to obtain a good parameter combination, K, L, and M are usually set very large, so the time complexity of the grid search method is very high;
S4: iterative aggregated grid search: in the first iteration, the algorithm searches for the optimal sub-area within a relatively large grid area. Then, as the algorithm iterates, the optimal sub-area is searched again within that sub-area, thereby realizing the dynamic aggregation of the grid area. Since this search strategy does not perform a fine search over the entire grid, the time complexity of the algorithm can be significantly reduced;
further, the iterative aggregated grid search algorithm of step S4 includes the following steps:
input:
training data set: d;
parameter search interval: a_h ≤ (C, γ, ε) ≤ b_h;
number of grid points per dimension: g; stopping threshold: δ
output:
global optimal parameter combination of the SVR model: (C*, γ*, ε*)
total number of iterations of the algorithm: T
Step 1: calculate the value spacing (step length) of the parameters: λ;
Step 2: generate all values of the parameters;
Step 3: construct a three-dimensional grid from all values of the three parameters;
Step 4: build an SVR model for each point in the grid;
Step 5: calculate the fitness;
Step 6: obtain the optimal fitness;
Step 7: obtain the parameter combination corresponding to the optimal fitness;
Step 8: calculate the error variation: e;
Step 9: if e < δ, return the global optimal parameter combination (C*, γ*, ε*) and the total number of iterations; else, update the search interval (a_{h+1}, b_{h+1}) and return to Step 1;
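The nine steps above can be sketched as a compact loop. This is an illustrative sketch, not the patent's implementation: a toy fitness function stands in for building and cross-validating an SVR model, and the interval update uses only the interior case (contracting to one step length around the best point).

```python
import itertools

def ifgs(fitness, bounds, g=10, delta=1e-6, max_iter=50):
    """Iterative aggregated grid search (sketch).

    fitness: maps a (C, gamma, epsilon) tuple to a score (smaller is better);
    bounds:  per-parameter (lower, upper) search interval.
    Returns the best parameter combination found and the iteration count.
    """
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    prev_best = None
    for t in range(1, max_iter + 1):
        # Step 1: step length per dimension; Step 2: all parameter values
        step = [(h - l) / (g - 1) for l, h in zip(lo, hi)]
        axes = [[l + i * s for i in range(g)] for l, s in zip(lo, step)]
        # Steps 3-7: traverse the g^3 grid, keep the best fitness and its point
        best_val, best_pt = min((fitness(p), p)
                                for p in itertools.product(*axes))
        # Steps 8-9: stop once the optimal fitness no longer improves
        if prev_best is not None and abs(best_val - prev_best) < delta:
            return best_pt, t
        prev_best = best_val
        # aggregate the search interval around the best point (interior case)
        lo = [max(l, c - s) for l, c, s in zip(lo, best_pt, step)]
        hi = [min(h, c + s) for h, c, s in zip(hi, best_pt, step)]
    return best_pt, max_iter
```

With g fixed, each iteration costs g³ fitness evaluations while the step length shrinks, matching the T·g³ complexity analysis given for the algorithm.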
further, in the iterative aggregated grid search algorithm, the number of grid points per dimension of the grid search region, that is, the value of the parameter g, needs to be determined; the total number of grid points in the entire grid search region is then g³. In the present invention, the parameter g is set to 10, so the total number of grid points is 10³ = 1000. Thus, the time complexity of parameter optimization for the iterative aggregated grid search can be found as follows:
T2(n)=O(T*g3) (10)
wherein T is the total number of iterations of the algorithm; generally, g ≪ K; thus, the present invention can obtain:
T2(n)=O(T*g3)<<T1(n)=O(K3) (11)
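To make inequality (11) concrete, plug in illustrative numbers: K = 100 grid points per dimension for plain grid search, against g = 10 and T = 20 iterations for IFGS (these values are examples, not figures from the patent).

```python
K = 100            # plain grid search: K values per parameter
g, T = 10, 20      # IFGS: g values per parameter, T iterations

plain_fits = K ** 3        # T1(n) = K^3 model fits
ifgs_fits = T * g ** 3     # T2(n) = T * g^3 model fits

assert plain_fits == 1_000_000
assert ifgs_fits == 20_000     # a 50x reduction in this example
```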
therefore, the iterative aggregated grid search can greatly reduce the time complexity while still obtaining the optimal solution; assume that in the h-th iteration, the search interval is a_h ≤ (C, γ, ε) ≤ b_h;
according to the given parameter g, the value spacing (step length) of each parameter can be calculated, e.g. λ_C = (b_C − a_C)/(g − 1) for parameter C;
all values of the parameters can then be generated according to the step length and stored in separate arrays; all values of the parameters are as follows:
thus, a three-dimensional grid is established with all values of the parameter γ as the x-axis, all values of the parameter C as the y-axis, and all values of the parameter ε as the z-axis, so that each grid point represents one parameter combination. Each parameter combination in the grid is used to build an SVR model, whose fitness is then calculated. When all grid points have been traversed, a g × g × g three-dimensional fitness matrix is obtained. For convenience of illustration, only the two-dimensional matrix of the parameters C and γ is used as an example here. Through one iteration, a fitness matrix is obtained, denoted by M;
the superscript h denotes the h-th iteration; thus, the optimal fitness in the fitness matrix can be obtained through the function min(), and is represented by minMAE. The parameter combination corresponding to the current optimal fitness is denoted by (C′, γ′, ε′) and is called the local optimal parameter combination. That is, in this iteration, the SVR model built using this parameter combination performs best, indicating that (C′, γ′, ε′) is the best parameter combination in this iteration. The minimum value in the matrix can be obtained by equation (16); min() is Python's built-in function, which takes a container and returns its smallest element.
minMAEh=min(Mh) (16)
Similarly, the optimal fitness of the next iteration can be obtained in the same way. Thus, the error variation can be given by equation (17); note that the algorithm begins to compute the error variation only after the second iteration;
e=|minMAEh+1-minMAEh| (17)
updating the search interval:
after the error variation is calculated, it is necessary to determine whether the stopping condition is satisfied. If the stopping condition is satisfied, the global optimal parameter combination, denoted by (C*, γ*, ε*), is set to the local optimal parameter combination, i.e.
(C*, γ*, ε*) = (C′, γ′, ε′) (18)
If the stopping condition is not satisfied, the search interval needs to be updated. There are three cases: if C′ falls exactly on the upper bound of the search interval, the upper bound b_C of the search interval of parameter C is left unchanged. If C′ falls exactly on the lower bound, the lower bound a_C of the search interval of parameter C is left unchanged. If C′ lies in the interior of the search interval, the new search interval gathers around C′ in units of the step length λ_C, and the new search interval is given by the formula;
similarly, new search intervals for the parameter γ and the parameter ε can also be obtained, as shown in the formula;
thus, as the algorithm iterates, the grid regions gradually cluster toward the globally optimal parameter combination. Since the g parameter is a constant, the step size will become smaller and smaller, thereby enabling a fine search.
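The three update cases can be written out per parameter. The sketch below relies on one assumption not fixed by the text: when C′ sits on a bound, the opposite bound moves so that the new interval keeps the same width 2λ as the interior case.

```python
def update_interval(best, low, high, step):
    """New (lower, upper) search interval for one parameter.

    best: local optimum C' from this iteration; step: step length lambda.
    """
    if best == high:                     # C' on the upper bound: b_C unchanged
        return high - 2 * step, high
    if best == low:                      # C' on the lower bound: a_C unchanged
        return low, low + 2 * step
    return best - step, best + step      # interior case: gather around C'
```

Because g is constant while the interval shrinks, each call produces a smaller step length for the next iteration, which is what enables the fine search.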
The invention has the beneficial effects that:
the invention relates to a support vector regression model based on an iterative aggregation grid search algorithm. Specifically, an SVR parameter selection method for iterative aggregated grid search is provided. The basic principle is as follows: in the first iteration, the algorithm searches for the optimal sub-region within a relatively large grid area. Then, the optimal sub-area is searched again in the optimal sub-area, thereby realizing the dynamic aggregation of the gridded area. To this end, a support vector regression (IFGS-SVR) model based on iterative aggregated grid search is proposed and applied to short-term power load prediction. Theoretically, the IFGS-SVR model can obviously reduce parameter optimization time and calculation cost, and experimental results prove that the IFGS-SVR model has the advantages of reducing parameter optimization time and calculation cost. Meanwhile, the experimental result also shows that the IFGS-SVR model has higher precision than the reference model.
Drawings
FIG. 1 is a flow diagram of an iterative aggregated grid search of the present invention;
FIG. 2 is a graph of daily load curves and daily temperature change;
FIG. 3 is a diagram of the variation of optimal fitness and step size;
FIG. 4 is a variation diagram of the grid search area of the iterative aggregated grid search algorithm during the iterative process, with the shaded portion being the optimal sub-area;
FIG. 5 is a graph of optimal fitness and optimization time for different g-parameters;
FIG. 6 is a graph of the prediction results and point prediction errors for the IFGS-SVR model;
Detailed Description
The invention provides an SVR parameter selection method based on iterative aggregated grid search. The basic principle is as follows: in the first iteration, the algorithm searches for the optimal sub-area within a relatively large grid area; then the optimal sub-area is searched again within that sub-area, thereby realizing the dynamic aggregation of the grid area. On this basis, a support vector regression model based on iterative aggregated grid search (IFGS-SVR) is proposed and applied to short-term power load prediction. Theoretically, the IFGS-SVR model can significantly reduce parameter optimization time and computational cost, and the experimental results confirm this advantage (this work was supported by the National Science Fund (Grant No. 71971105), the National Statistical Science Research Project (Grant No. 2020LZ03), and the Jiangxi Province Double Thousand Plan Project (Grant No. jxsq2019201064)). The experimental results also show that the IFGS-SVR model achieves higher accuracy than the benchmark models.
The invention will be further described with reference to the accompanying drawings in which:
Support vector regression: the function of support vector regression can be defined as:
f(x)=ωψ(x)+b (1)
where ω is a weight vector and b is a constant. The following expression is defined as the optimization function.
Called the ε-insensitive loss function; ε is the width of the pipeline and C is a penalty factor. Thus, two slack variables ξ and ξ* are introduced, and the following expression can be obtained.
The above optimization function is a quadratic programming problem. Lagrange multipliers are introduced, and the problem is converted into its dual space to be solved.
Thus, the original optimization problem can be transformed into an unconstrained form. The optimization target satisfies the KKT conditions, that is, the Lagrange dual is used to convert the optimization problem into an equivalent dual problem. The solving process is as follows: first, the optimization function L is minimized with respect to ω, b, ξ, and ξ*; then the maximum of the optimization function L with respect to the Lagrange multipliers α, α*, β, β* is calculated. The above process needs to satisfy the KKT conditions. Finally, the solution of support vector regression is obtained as
Kernel function:
Support vector regression theoretically only requires the dot-product operation K(xi, xj) = ψ(xi)·ψ(xj) in the high-dimensional feature space, without using the mapping ψ directly, which ingeniously avoids the problem that ψ cannot be expressed because ω is unknown. K(xi, xj) is referred to as a kernel function. It has been shown that any symmetric function satisfying the Mercer condition can be used as a kernel function. Table 1 lists some commonly used kernel functions. Among them, the RBF kernel is widely used in SVR due to its excellent local approximation characteristics. Thus, the present invention uses the RBF kernel function.
TABLE 1 some common Kernel functions
In summary, three parameters (C, γ, ε) need to be determined before modeling, and their choice is critical to whether the SVR can achieve good performance. The penalty factor C balances model complexity against training error. The parameter γ controls the width of the RBF kernel. ε represents the approximation accuracy of the training data points. In the next section, the proposed method for selecting these three parameters (C, γ, ε) is described in detail.
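Before moving on, the RBF kernel of equation (7) can be written out as a short plain-Python helper (an illustration, not the patent's code):

```python
import math

def rbf_kernel(x_i, x_j, gamma):
    """K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2) for equal-length vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    return math.exp(-gamma * sq_dist)
```

Identical inputs give a kernel value of 1; a larger γ narrows the kernel, which is exactly the width behavior the parameter γ controls.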
Parameter selection of SVR:
The grid search method is a brute-force method, also called the enumeration method, and is widely applied to parameter optimization. It searches for the best parameters by combining all possible solutions into a grid area, i.e., one point in the grid represents one solution, and then traversing all points in the grid. When the grid area is large enough and the step size is small enough, the method can generally obtain the global optimal solution, but it is time-consuming. In support vector regression, assuming that the numbers of all possible values of the parameters C, γ, and ε in the grid are K, L, and M respectively, the number of points of the three-dimensional grid is K × L × M. Thus, the time complexity of the parameter optimization of the grid search method can be given by the following expression:
T1(n)=O(K*L*M) (7)
assuming that K = L = M, equation (7) can be converted to:
T1(n)=O(K3) (8)
In general, K, L, and M are set very large in order to obtain a good solution, so the time complexity of the grid search method is very high. In other words, the grid search method requires a large amount of execution time and computational overhead.
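A minimal version of the plain grid search makes the K·L·M cost explicit. A toy fitness function stands in for SVR training here; the evaluation counter corresponds directly to T1(n) = O(K·L·M).

```python
import itertools

def grid_search(fitness, c_values, gamma_values, eps_values):
    """Exhaustive grid search; returns the best point and the evaluation count.

    The count equals K*L*M: one fitness evaluation per grid point.
    """
    best_val, best_pt, evals = float("inf"), None, 0
    for p in itertools.product(c_values, gamma_values, eps_values):
        evals += 1
        v = fitness(p)
        if v < best_val:
            best_val, best_pt = v, p
    return best_pt, evals
```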
Iterative aggregated grid search:
based on the above reasons, we propose an iterative aggregated grid search to solve the parameter optimization problem of support vector regression. In the first iteration, the algorithm searches for the optimal sub-region within a relatively large grid area. Then, the optimal sub-area is searched again in the optimal sub-area, thereby realizing the dynamic aggregation of the gridded area. Since this search strategy does not perform a fine search over the entire grid, the time complexity of the algorithm can be significantly reduced.
The iterative aggregated grid search works as follows. First, a search region R ∈ {(C, γ, ε) | a ≤ (C, γ, ε) ≤ b} is given, where a is the lower bound of the search interval and b is the upper bound; the number of grid points per dimension is given, denoted by g; and a stopping threshold is given, denoted by δ. The algorithm searches for the optimal sub-area by checking the performance of each sub-area, and then takes the optimal sub-area as the grid search area of the next iteration by updating the upper and lower bounds of the search interval. The performance of each sub-area is measured by a fitness function. Here, the invention uses the 10-fold cross-validation mean absolute error as the fitness function, i.e. MAE = (1/n) Σ|yi − ŷi|, where yi is the observed value and ŷi is the predicted value. The minimum fitness of the current grid area is defined as the optimal fitness, denoted by minMAE. The variation of the optimal fitness between two adjacent iterations is defined as the error variation, denoted by e. If the error variation is smaller than the stopping threshold, continuing to aggregate the grid area no longer significantly improves the performance of the SVR model, and the performance of the SVR model can be considered to have reached its optimal level. Thus, the algorithm stops iterating.
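The fitness function can be sketched as follows. The patent fits an SVR with the candidate (C, γ, ε) inside each fold; here the model-fitting step is abstracted behind fit/predict callables so the fold logic stays runnable with the standard library alone.

```python
def cv_mae(xs, ys, fit, predict, k=10):
    """k-fold cross-validation mean absolute error (the IFGS fitness).

    fit(train_xs, train_ys) -> model;  predict(model, x) -> prediction.
    """
    n = len(ys)
    fold_size = max(1, n // k)
    abs_errors = []
    for start in range(0, n, fold_size):
        test_idx = range(start, min(start + fold_size, n))
        # train on everything outside the held-out fold
        train_xs = [x for i, x in enumerate(xs) if i not in test_idx]
        train_ys = [y for i, y in enumerate(ys) if i not in test_idx]
        model = fit(train_xs, train_ys)
        abs_errors += [abs(ys[i] - predict(model, xs[i])) for i in test_idx]
    return sum(abs_errors) / len(abs_errors)
```

In practice, `fit` would train an SVR with the candidate parameter combination; a smaller returned MAE means a fitter grid point.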
In the invention, the number of grid points per dimension in the grid search area, i.e. the value of the g parameter, must be determined; the total number of grid points in the whole grid search area is then g³. The g parameter is set to 10, so the total number of grid points is 10³ = 1000. Thus, the time complexity of parameter optimization for the iterative aggregated grid search can be found as follows:
T2(n) = O(T·g³)   (9)
where T is the total number of iterations of the algorithm. In general, g ≪ K. Thus, the present invention can obtain:
T2(n) = O(T·g³) ≪ T1(n) = O(K³)   (10)
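The gap between the two complexities can be illustrated numerically; the values of K, T, and g below are assumed, representative choices:

```python
# illustrative counts only: a plain grid search over K points per axis
# versus T iterations of an aggregated grid with g points per axis
K = 100                # fine grid: 100 values per parameter (assumed)
T, g = 5, 10           # aggregated search: 5 iterations, 10 points per axis

plain_grid_models = K ** 3      # O(K^3) SVR models to train
aggregated_models = T * g ** 3  # O(T * g^3) SVR models to train

print(plain_grid_models)   # 1000000
print(aggregated_models)   # 5000
```

Even with these modest values, the aggregated search trains 200 times fewer SVR models.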
Therefore, the iterative aggregated grid search can greatly reduce the time complexity while still obtaining an optimal solution. Assume that in the h-th iteration the search interval is a^h ≤ (C, γ, ε) ≤ b^h.
According to the given g parameter, the value spacing (step length) of each parameter can be calculated per dimension as λ = (b^h − a^h)/(g − 1), giving λ_c, λ_γ, and λ_ε.
All values of each parameter are then generated from the step length and stored in arrays; for example, the i-th value of C is C_i = a_c^h + (i − 1)·λ_c for i = 1, …, g, and likewise for γ and ε.
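The step-length computation and value generation can be sketched as follows; the interval endpoints come from the case study later in the text, and the division by g − 1 is inferred from the step sizes reported there:

```python
def grid_values(a, b, g):
    """g evenly spaced values covering [a, b]; step = (b - a) / (g - 1)."""
    step = (b - a) / (g - 1)
    return [a + i * step for i in range(g)], step

# ranges from the case study: C in [0.001, 1000], gamma in [0.001, 1], g = 10
C_vals, lam_c = grid_values(0.001, 1000.0, 10)
g_vals, lam_g = grid_values(0.001, 1.0, 10)

print(round(lam_c, 3))   # 111.111  (matches the step reported for the first iteration)
print(round(lam_g, 3))   # 0.111
```

Pairing every value of C with every value of γ and ε then yields the g × g × g grid of parameter combinations.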
A three-dimensional grid is established with all values of the parameter γ as the x-axis, all values of the parameter C as the y-axis, and all values of the parameter ε as the z-axis, so that each grid point represents a parameter combination. Each parameter combination in the grid is used to build an SVR model, whose fitness is then calculated. When all grid points have been traversed, a g × g × g three-dimensional fitness matrix is obtained. For ease of illustration, only the two-dimensional matrix of the parameters C and γ is used as an example here. After one iteration, a fitness matrix is obtained, denoted M.
The superscript h denotes the h-th iteration. The optimal fitness in the fitness matrix can then be obtained with the min() function, as shown in equation (15). The parameter combination corresponding to the current optimal fitness, denoted (C′, γ′, ε′), is called the local optimal parameter combination; that is, in this iteration, the SVR model built with this combination performs best. The min() function is provided by Python's standard library.
minMAE^h = min(M^h)   (15)
Similarly, the optimal fitness of the (h+1)-th iteration, minMAE^(h+1), can be obtained. The error variation is then given by equation (16). Note that the algorithm starts computing the error variation from the second iteration onward.
e = |minMAE^(h+1) − minMAE^h|   (16)
Updating the search interval:
After the error variation is calculated, it must be determined whether the stop condition is satisfied. If it is, the global optimal parameter combination, denoted (C*, γ*, ε*), is set to the local optimal parameter combination, i.e.
(C*, γ*, ε*) = (C′, γ′, ε′)   (17)
If the stop condition is not satisfied, the search interval must be updated. There are three cases: if C′ falls exactly on the upper bound of the search interval, the upper bound b_c of the interval for parameter C is left unchanged; if C′ falls exactly on the lower bound, the lower bound a_c is left unchanged; if C′ lies in the interior of the interval, the new search interval is clustered around C′ with the step λ_c as the unit, i.e. [C′ − λ_c, C′ + λ_c].
Similarly, the new search intervals for the parameter γ and the parameter ε are obtained in the same way, e.g. [γ′ − λ_γ, γ′ + λ_γ] when γ′ is interior.
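The three update cases can be sketched as one clamped rule. This clamping interpretation is an assumption, chosen because it reproduces both the interior and the boundary examples given later in the text:

```python
def update_interval(best, a, b, lam):
    """Cluster the interval around the best value with one step on each
    side, clamping at a touched bound. Clamping at max/min reproduces
    all three cases described in the text: a bound that `best` falls on
    stays unchanged, while interior values get [best - lam, best + lam].
    """
    return max(a, best - lam), min(b, best + lam)

# interior case from the case study: C' = 333.334, lam_c = 111.111
print(tuple(round(v, 3) for v in update_interval(333.334, 0.001, 1000.0, 111.111)))  # (222.223, 444.445)
# lower-bound case: gamma' = 0.001, lam_gamma = 0.111 -> lower bound kept
print(tuple(round(v, 3) for v in update_interval(0.001, 0.001, 1.0, 0.111)))         # (0.001, 0.112)
```

Both printed intervals match the search regions reported for the second iteration in the case study.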
Thus, as the algorithm iterates, the grid regions gradually cluster toward the globally optimal parameter combination. Since the g parameter is constant, the step length becomes smaller and smaller, enabling an increasingly fine search. Fig. 1 shows the flow of the iterative aggregated grid search.
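Putting the steps together, the whole iterative aggregated grid search can be sketched as a short loop. The SVR fitness is replaced by a hypothetical toy function with a known minimum so the sketch is self-contained and runnable; the two-parameter form mirrors the (C, γ) example used in the text:

```python
def ifgs_search(fitness, bounds, g=10, delta=1e-9, max_iter=25):
    """Sketch of the iterative aggregated grid search over two
    parameters; `fitness` stands in for the 10-fold CV MAE of an SVR."""
    (a1, b1), (a2, b2) = bounds
    prev_best = None
    best = None
    for t in range(1, max_iter + 1):
        # step lengths for the current interval (g points per axis)
        l1, l2 = (b1 - a1) / (g - 1), (b2 - a2) / (g - 1)
        grid = [(a1 + i * l1, a2 + j * l2) for i in range(g) for j in range(g)]
        best = min(grid, key=lambda p: fitness(*p))   # local optimum (C', gamma')
        best_fit = fitness(*best)
        if prev_best is not None and abs(prev_best - best_fit) < delta:
            return best, t                            # error variation below threshold
        prev_best = best_fit
        # aggregate the interval around the best point (clamped at the bounds)
        a1, b1 = max(a1, best[0] - l1), min(b1, best[0] + l1)
        a2, b2 = max(a2, best[1] - l2), min(b2, best[1] + l2)
    return best, max_iter

# toy fitness with known minimum at (300, 0.5); an assumption for illustration
toy = lambda c, gm: ((c - 300.0) / 1000.0) ** 2 + (gm - 0.5) ** 2
(best_c, best_g), iters = ifgs_search(toy, [(0.001, 1000.0), (0.001, 1.0)])
```

On this toy function the loop homes in on the known minimum in a handful of iterations, while a single fine grid of comparable resolution would need far more fitness evaluations.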
Example:
case study:
this section demonstrates the performance evaluation of an iterative aggregated grid search algorithm based support vector regression model (IFGS-SVR) in a realistic case. The evaluation was performed in Python.
Description of the data set:
The invention takes the annual power load data of a county in Jiangxi Province as an example. Jiangxi is a major agricultural province in China, with a population of 46.661 million in 2020, so power load forecasting there has important practical significance. The power load data comprise a daily load sequence and daily maximum and minimum temperature sequences for the 365 days from 1 January 2013 to 31 December 2013. The load demand of the next day is predicted from the load of the preceding seven days, so the daily load sequence can be converted into a dataset in input-output form. 85% of the data were randomly drawn as the training set, used for training the model and selecting parameters; the remaining 15% served as the test set to evaluate the performance of the model. The partitioning of the dataset is shown in Table 2.
TABLE 2 partitioning of data sets
Fig. 2 shows the daily power load consumption and daily temperatures in the county. As shown in Fig. 2, the power load curve starts to rise in June, peaks by mid-August, and then begins to fall. One likely reason is that temperatures during this period are relatively high, so people run cooling devices such as air conditioners to lower indoor temperatures, increasing power consumption. The power load curve rises slowly from September to January of the following year, because temperatures in Jiangxi Province decline from September onward and people turn on heating devices to maintain a comfortable indoor temperature. Several sudden changes in the power load curve may be due to power outages, policy factors, and the like.
Data normalization and predictive evaluation criteria:
In general, to avoid the influence of input features whose value ranges differ greatly, the data are normalized. The z-score normalization x′ = (x − μ)/σ is used:
where μ and σ are the mean and standard deviation, respectively, of the raw data x.
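The z-score normalization can be sketched directly; the load values below are made up for illustration:

```python
from statistics import mean, pstdev

def z_score(data):
    """z-score normalization x' = (x - mu) / sigma, as used in the text."""
    mu, sigma = mean(data), pstdev(data)
    return [(x - mu) / sigma for x in data]

loads = [120.0, 135.0, 150.0, 160.0, 145.0, 130.0, 155.0]  # made-up daily loads
z = z_score(loads)
# z now has mean approximately 0 and standard deviation approximately 1
```

Note that μ and σ should be computed on the training set only and reused to normalize the test set, so that no test-set information leaks into training.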
The invention selects five evaluation criteria to assess the performance of the proposed model: MAE, RMSE, MAPE, R², and CPU Time (in seconds, s). Their calculation formulas are given in Table 3.
TABLE 3 evaluation criteria
SVR parameter selection based on iterative aggregated grid search:
This section demonstrates the iterative process of the iterative aggregated grid search algorithm. The literature indicates that C and γ are the key factors affecting SVR performance. Therefore, to improve modeling efficiency, the invention optimizes only C and γ with the iterative aggregated grid search algorithm, with the parameter ε fixed at 0.05. Since both C and γ must be larger than zero, their lower bounds are set to 0.001. The SVR parameter ranges are set to C ∈ [0.001, 1000] and γ ∈ [0.001, 1], the value of the g parameter is set to 10, and the stop threshold δ is 0.001.
Parameter selection results:
Table 4 gives detailed information on the algorithm during the iterations, and Fig. 3 shows the variation of the optimal fitness, λ_c, and λ_γ. As can be seen from Table 4, the algorithm can dynamically adjust the grid area. As the algorithm iterates, the grid regions rapidly cluster toward the globally optimal parameter combination, which experimentally confirms that the iterative aggregated grid search algorithm can greatly shorten the parameter optimization time. At the same time, the step lengths λ_c and λ_γ decrease rapidly, enabling an increasingly refined search.
TABLE 4 iterative procedure for iterative aggregated grid search algorithm
To analyze the advantages of the algorithm more intuitively, the parameter optimization process of the iterative aggregated grid search is visualized in Fig. 4. In the given search area, the optimal sub-area (red-shaded portion) is found by checking the performance of each sub-area. Then, in panel (b) (the second iteration), the algorithm searches further only within this optimal sub-region, rather than performing a fine search over the whole given search region. Thus, the iterative aggregated grid search significantly reduces execution time and computational overhead.
As shown in Fig. 4, the search strategy of the iterative aggregated grid search can be summarized as follows. First, each point in the grid is used for SVR modeling and its fitness value is calculated, so that each grid point corresponds to a fitness (different colors represent fitness magnitudes). Then the optimal fitness and its corresponding grid point are found, and that grid point together with its neighborhood is taken as the search area of the next iteration. For example, in panel (a) the grid search region is R ∈ {(C, γ) | 0.001 ≤ C ≤ 1000, 0.001 ≤ γ ≤ 1}; accordingly, the step lengths are λ_c = 111.111 and λ_γ = 0.111. The current optimal fitness is 5.4444 at the grid point (333.334, 0.001), so this grid point and its neighborhood (red-shaded portion) form the optimal sub-region, which becomes the search region of the next iteration. As a result, the new search region is R ∈ {(C, γ) | 222.223 ≤ C ≤ 444.445, 0.001 ≤ γ ≤ 0.112}, as shown in panel (b). According to equation (12), the step λ_c is correspondingly reduced to 24.691 and λ_γ to 0.012. And so on, until the 4th iteration, where the error variation e = 0.0001 is smaller than the stop threshold δ and the algorithm terminates.
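The step sizes quoted in this example can be checked numerically (g = 10, so there are 9 steps per axis):

```python
# numeric check of the worked example in Fig. 4
lam_c2 = (444.445 - 222.223) / 9   # second-iteration C interval
lam_g2 = (0.112 - 0.001) / 9       # second-iteration gamma interval

print(round(lam_c2, 3))   # 24.691
print(round(lam_g2, 3))   # 0.012
```

Both values match the reduced step lengths reported in the text, confirming that the step is the interval width divided by g − 1.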
Sensitivity test:
This section examines the sensitivity of the iterative aggregated grid search algorithm to the choice of search area and the setting of the g parameter.
Table 5 compares the parameter optimization sensitivity for different search areas. As can be seen from Table 5, if the search region is chosen reasonably, the algorithm can always find appropriate C and γ that make the SVR model perform best. If the search area is too small, optimization time is reduced but the search may fall into a local optimum; if the search area is too large, the optimization time becomes long. Since choosing the search area involves a degree of experience, in practice several different search areas may be tried in order to select the globally optimal parameter combination.
TABLE 5 parameter-optimized susceptibility testing based on different search regions
TABLE 6 Parametric optimization sensitivity test based on different g-parameters
Figure 5 and Table 6 show the parameter optimization sensitivity for different g parameters. From the results, the algorithm finds similar parameter combinations even when the g parameter takes different values, while the optimization time increases as g increases. Optimization accuracy and optimization time therefore need to be traded off to select a suitable g parameter. In this study, we set the g parameter to 10 as a balance between optimization accuracy and optimization time.
In general, the algorithm proposed by the invention is effective and feasible. In practical applications, it can always find the optimal parameter combination in a short time, provided a suitable search area is given.
Case study results:
Fig. 6 shows the prediction results of the IFGS-SVR model: Fig. 6(a) gives the power load predictions, Fig. 6(b) the pointwise mean absolute error, Fig. 6(c) the pointwise root mean square error, and Fig. 6(d) the pointwise mean absolute percentage error. The pointwise absolute percentage error of the IFGS-SVR model is below 10% for most samples; a MAPE below 10% is generally considered to indicate high prediction accuracy. In a few cases the prediction error is larger, possibly because the original sequence contains noise.
Comparison with the reference model:
To highlight the superiority of the IFGS-SVR model, six reference models (GS-SVR, PSO-SVR, SA-SVR, DE-SVR, ACO-SVR, and GA-SVR) were used for comparison. For these reference models, the main parameters of the GS, PSO, SA, DE, ACO, and GA algorithms are listed in Table 7, where the maximum number of iterations max_iter of the PSO, DE, ACO, and GA algorithms is 40 and the population size popsize is 20. The SVR parameter ranges are C ∈ [0.001, 1000] and γ ∈ [0.001, 1]. Table 8 gives the parameter optimization results of these algorithms.
Table 7 main parameters of algorithm
TABLE 8 results of parameter optimization
The IFGS-SVR model is first compared with the GS-SVR model. Table 9 compares the two models on MAE, RMSE, MAPE, R², and CPU Time. In CPU Time, the optimization time of the IFGS algorithm is 70.1 s, while that of the GS algorithm reaches 34156.6 s, which experimentally confirms that the time complexity of IFGS parameter optimization is far lower than that of the direct grid search method. Moreover, although the grid search method takes far more time, its accuracy is inferior to the IFGS algorithm, because its grid spacing is not fine enough.
Next, we compared the IFGS algorithm in detail with several currently typical metaheuristic algorithms, using MAE, RMSE, MAPE, R², and CPU Time as evaluation indices to analyze and compare the performance of the resulting models. Table 10 gives the comparison results of the six models. The mean errors of the proposed IFGS-SVR model on MAE, RMSE, and MAPE were 4.379, 6.033, and 9.683 respectively, the lowest levels among all models on all three error criteria. On R², the IFGS-SVR model achieved the maximum value of 0.874. R² measures the degree to which the independent variables explain the dependent variable; a high R² indicates the model fits the true values well. In terms of error and goodness of fit, the IFGS algorithm therefore outperforms the current typical algorithms. On CPU Time, the IFGS algorithm runs for 70.1 s, the PSO algorithm 88.8 s, the SA algorithm 89.7 s, the DE algorithm 118.8 s, the ACO algorithm 675.9 s, and the GA algorithm 101.7 s. The IFGS algorithm thus achieves higher accuracy in less time, illustrating its superiority.
Overall, our algorithm achieves higher accuracy in less time. Moreover, like metaheuristic algorithms, it can be extended to parameter optimization of other models by supplying a different fitness function. Our algorithm therefore has the advantages of low complexity, high accuracy, and easy extensibility.
TABLE 9 comparison of IFGS-SVR model with GS-SVR model
TABLE 10 comparison of IFGS-SVR model with a Metaheuristic Algorithm-based benchmark model
The main contributions of the present invention are summarized in the following aspects:
1) An iterative aggregated grid search algorithm is proposed that searches for the optimal sub-region by examining the performance of each sub-region, thereby avoiding the large waste caused by fine grid settings. Experimental results show the effectiveness of the method.
2) The hyperparameters of the SVR model are optimized with the iterative aggregated grid search algorithm, improving the performance of short-term power load forecasting.
3) The superiority of the IFGS-SVR model is demonstrated by the real power data of a certain county in Jiangxi province.
4) The resulting IFGS-SVR model was compared with other SVR models whose parameters were obtained by grid search (GS-SVR), particle swarm optimization (PSO-SVR), simulated annealing (SA-SVR), differential evolution (DE-SVR), ant colony optimization (ACO-SVR), and genetic algorithm (GA-SVR).
The foregoing shows and describes the general principles and features of the present invention, together with its advantages. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which are set forth only to illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.
Claims (3)
1. A support vector regression model based on an iterative aggregated grid search algorithm, characterized by comprising the following steps:
S1: the support vector regression function is defined as:
f(x)=ωψ(x)+b (1)
where ω is a weight vector and b is a constant; the following expression, the ε-insensitive loss L_ε(y, f(x)) = max(0, |y − f(x)| − ε), is used in defining the optimization function;
this is called the ε-insensitive loss function, where ε is the width of the tube and C is a penalty factor; introducing two slack variables ξ and ξ*, the following (standard SVR primal) expression is obtained: minimize (1/2)‖ω‖² + C Σ(ξ_i + ξ_i*), subject to y_i − ωψ(x_i) − b ≤ ε + ξ_i, ωψ(x_i) + b − y_i ≤ ε + ξ_i*, and ξ_i, ξ_i* ≥ 0;
the optimization function is a quadratic programming problem; according to the algorithm, Lagrange multipliers are introduced and the problem is converted into its dual space for solution:
therefore, the original optimization problem can be converted into an unconstrained form whose optimum satisfies the KKT conditions; that is, Lagrange duality converts the optimization problem into an equivalent dual problem, and the solving process is as follows: first, minimize the optimization function L with respect to ω, b, ξ, and ξ*; then maximize L with respect to the Lagrange multipliers α, α*, β, and β*; this process must satisfy the KKT conditions; finally, the support vector regression solution is obtained as f(x) = Σ(α_i − α_i*)K(x_i, x) + b;
S2: the RBF kernel function is used: K(x_i, x_j) = exp(−γ‖x_i − x_j‖²)   (7)
Three parameters (C, γ, ε) need to be determined before modeling;
S3: grid search method: in the grid search method, assuming the numbers of all possible values of the parameters C, γ, and ε in the grid are K, L, and M respectively, the number of points of the three-dimensional grid is K × L × M; therefore, the time complexity of the grid search method's parameter optimization can be given by the following expression:
T1(n)=O(K*L*M) (8)
assuming that K = L = M, equation (8) can be converted to:
T1(n)=O(K3) (9)
to obtain a good parameter combination, K, L, and M are usually set very large, so the time complexity of the grid search method is very high;
S4: iterative aggregated grid search: in the first iteration, the algorithm searches for the optimal sub-region within a relatively large grid area; then, as the algorithm iterates, the optimal sub-region is searched again within the previous optimal sub-region, realizing dynamic aggregation of the gridded area; since this search strategy does not perform a fine search over the entire grid, the time complexity of the algorithm is significantly reduced.
2. The support vector regression model based on an iterative aggregated grid search algorithm of claim 1, wherein the iterative aggregated grid search algorithm of step S4 comprises the following steps:
Input:
training data set: D;
parameter search interval: a_h ≤ (C, γ, ε) ≤ b_h;
number of grid points per dimension: g; stop threshold: δ
Output:
global optimal parameter combination of the SVR model: (C*, γ*, ε*)
total number of iterations of the algorithm: T
Step 1: calculate the value spacing (step length) of the parameters: λ;
Step 2: generate all values of the parameters;
Step 3: construct a three-dimensional grid from all values of the three parameters;
Step 4: build an SVR model at each point in the grid;
Step 5: calculate the fitness;
Step 6: obtain the optimal fitness;
Step 7: obtain the parameter combination corresponding to the optimal fitness;
Step 8: calculate the error variation: e;
Step 9: if e < δ, then return the global optimal parameter combination (C*, γ*, ε*) and the total number of iterations;
else, update the search interval (a_{h+1}, b_{h+1}) and return to Step 1.
3. The support vector regression model based on an iterative aggregated grid search algorithm of claim 2, wherein: in the iterative aggregated grid search algorithm, the number of grid points in the grid search region must be determined, i.e. the number of grid points per dimension, denoted g; the total number of grid points in the whole grid search region is thus g³; the g parameter is set to 10, so the total number of grid points is 10³ = 1000; therefore, the time complexity of the parameter optimization of the iterative aggregated grid search algorithm can be found as follows:
T2(n) = O(T·g³)   (10)
wherein T is the total number of iterations of the algorithm; in general, g ≪ K; thus we can obtain:
T2(n) = O(T·g³) ≪ T1(n) = O(K³)   (11)
therefore, the iterative aggregated grid search greatly reduces the time complexity while still obtaining the optimal solution; assume that in the h-th iteration the search interval is a^h ≤ (C, γ, ε) ≤ b^h;
from the given g parameter, the value spacing (step length) of each parameter can be calculated as λ = (b^h − a^h)/(g − 1), per dimension;
all values of each parameter are generated from the step length and stored in arrays; the i-th value of a parameter is a^h + (i − 1)·λ for i = 1, …, g;
thus, a three-dimensional grid is established with all values of the parameter γ as the x-axis, all values of the parameter C as the y-axis, and all values of the parameter ε as the z-axis, so that each grid point represents a parameter combination; each parameter combination in the grid is used to build an SVR model, whose fitness is then calculated; when all grid points have been traversed, a g × g × g three-dimensional fitness matrix is obtained; for ease of illustration, only the two-dimensional matrix of the parameters C and γ is used as an example here; after one iteration, a fitness matrix is obtained, denoted M;
the superscript h denotes the h-th iteration; the optimal fitness in the fitness matrix, denoted minMAE, can then be obtained with the min() function, as shown in equation (16); the parameter combination corresponding to the current optimal fitness, denoted (C′, γ′, ε′), is called the local optimal parameter combination; that is, in this iteration, the SVR model built with this combination performs best; the min() function is provided by Python's standard library;
minMAE^h = min(M^h)   (16)
similarly, the optimal fitness of the (h+1)-th iteration, minMAE^(h+1), can be obtained; the error variation is then given by equation (17); note that the algorithm starts computing the error variation from the second iteration onward;
e = |minMAE^(h+1) − minMAE^h|   (17)
updating the search interval:
after the error variation is calculated, it must be determined whether the stop condition is satisfied; if it is, the global optimal parameter combination, denoted (C*, γ*, ε*), is set to the local optimal parameter combination, i.e.
(C*, γ*, ε*) = (C′, γ′, ε′)   (18)
if the stop condition is not satisfied, the search interval must be updated; there are three cases: if C′ falls exactly on the upper bound of the search interval, the upper bound b_c of the interval for parameter C is left unchanged; if C′ falls exactly on the lower bound, the lower bound a_c is left unchanged; if C′ lies in the interior of the interval, the new search interval is clustered around C′ with the step λ_c as the unit, i.e. [C′ − λ_c, C′ + λ_c];
similarly, the new search intervals for the parameter γ and the parameter ε are obtained in the same way;
thus, as the algorithm iterates, the grid regions gradually cluster toward the globally optimal parameter combination. Since the g parameter is a constant, the step size will become smaller and smaller, thereby enabling a fine search.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011286631.6A CN112330044A (en) | 2020-11-17 | 2020-11-17 | Support vector regression model based on iterative aggregation grid search algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112330044A true CN112330044A (en) | 2021-02-05 |
Family
ID=74321481
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011286631.6A Withdrawn CN112330044A (en) | 2020-11-17 | 2020-11-17 | Support vector regression model based on iterative aggregation grid search algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112330044A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114511039A (en) * | 2022-02-28 | 2022-05-17 | 智汇(中山)信息技术有限公司 | Software development behavior monitoring system |
CN115935859A (en) * | 2023-03-01 | 2023-04-07 | 成都前沿动力科技有限公司 | SVR-MODEA-based profile structure optimization method, system, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20210205 |