CN116451824B - Photovoltaic power prediction method based on neural network and honey badger optimization - Google Patents
Photovoltaic power prediction method based on neural network and honey badger optimization
- Publication number
- CN116451824B (application CN202211645456.4A)
- Authority
- CN
- China
- Prior art keywords
- parameter
- neural network
- data
- weights
- optimization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
- H02J3/004—Generation forecast, e.g. methods or systems for forecasting future energy generation
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J2300/00—Systems for supplying or distributing electric power characterised by decentralized, dispersed, or local generation
- H02J2300/20—The dispersed energy generation being of renewable origin
- H02J2300/22—The renewable source being solar energy
- H02J2300/24—The renewable source being solar energy of photovoltaic origin
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention relates to a photovoltaic power prediction method based on a neural network and honey badger optimization, and provides a photovoltaic power generation prediction model based on the honey badger algorithm (HBA) and an enhanced back-propagation (BP) neural network, aimed at the defect that random initial weights and biases seriously affect the performance of a BP neural network. The hourly solar irradiance and the ambient temperature of the photovoltaic cells are used as input data for the prediction model, and the historical hourly power generation (power) is used as the expected output. The HBA is introduced into the BP neural network to optimize the initial weights and biases. The simulation results on various indexes show that the prediction accuracy of the HBA-BP neural network is improved to a certain extent relative to the BP prediction model.
Description
Technical Field
The invention belongs to the field of power prediction algorithms for improving prediction accuracy and efficiency, and relates to a photovoltaic power prediction method based on a neural network and honey badger optimization.
Background
Meta-heuristic algorithms are widely used as one of the main techniques for obtaining optimal solutions to complex problems in many engineering and scientific research fields. They are very useful for complex problems in which deterministic algorithms fall into local optima, because they are able to explore several local solutions, for example in the optimal design of structural engineering problems, renewable energy systems, deep neural network (DNN) model optimization, and other practical applications. Meta-heuristic algorithms are not perfect, however; one weakness is that the quality of the optimal solution depends on the parameter values and on the stopping condition of the algorithm, which is usually determined by the number of iterations. Many meta-heuristic algorithms have been developed in recent years, such as Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), the Chameleon Swarm Algorithm (CSA), the African Vultures Optimization Algorithm (AVOA), and the Honey Badger Algorithm (HBA). The honey badger algorithm aims to obtain the optimal solution with higher precision. Back-propagation (BP) neural networks have been applied as a prediction algorithm to the power generation prediction of photovoltaic (PV) systems, but prediction accuracy in practical applications has remained a problem. Therefore, we combine the honey badger algorithm with the back-propagation neural network to improve the control performance of the PV system. The results show that the prediction accuracy and efficiency are superior to those of the traditional back-propagation neural network model.
The honey badger optimization algorithm differs from other meta-heuristic algorithms. Inspired by the intelligent foraging behaviour of honey badgers, it develops mathematically an effective search strategy for solving optimization problems. By means of a controlled randomization technique, the HBA can maintain sufficient population diversity (here, the population consists of the weights and biases, and the solution is the optimal weights and biases) even at the end of the search process, whereas the BP neural network has lower prediction accuracy in the power generation prediction of photovoltaic systems because of the random setting of the initial weights and biases. Therefore, there is still room for improvement in BP neural networks for the power generation prediction of photovoltaic systems.
Disclosure of Invention
Technical problem to be solved
In order to avoid the defects of the prior art, the invention provides a photovoltaic power prediction method based on a neural network and honey badger optimization.
Based on the above analysis, the BP neural network algorithm can be modified. By combining it with the advantages of the HBA, namely accounting for the influence of the initial weights and biases, the performance of the algorithm is improved, so that the BP neural network obtains better dynamic performance in PV system applications. The weights and biases are adjusted by the HBA so as to improve the accuracy of the BP neural network algorithm. The specific contributions of this work are as follows:
(1) Based on the BP neural network algorithm applied to photovoltaic power generation prediction, an accuracy-enhanced optimization algorithm (HBA-BP) is provided to improve the prediction of the power generation of a PV system.
(2) The prediction results of HBA-BP for photovoltaic power generation are compared and analyzed against those of BP.
(3) The proposed HBA-BP is evaluated using performance metrics for the optimal design problem, including the mean absolute error (MAE) and the root mean square error (RMSE), while fully taking into account the effect of temperature variations of the PV system.
Technical proposal
A photovoltaic power prediction method based on a neural network and honey badger optimization, characterized by comprising the following steps:
Step 1: respectively carrying out normalization processing on three types of original data, wherein the three types of original data comprise illumination intensity y 1, ambient temperature y 2 and power y 3, and obtaining parameter data y n normalized by the three types of original data;
Wherein y n (n=1, 2, 3) represents normalized data corresponding to the raw data y m (m=1, 2, 3) (-1<y n<1);ym,min and y m,max are the lower and upper boundaries of three classes of corresponding raw data, respectively;
Randomly dividing each type of normalized data into two groups, namely training data and test data;
Step 2: initializing parameters of the BP neural network, wherein the parameters are defined as vectors formed by weights and offsets:
xi=lbi+rand×(ubi-lbi);rand∈[0,1]
Where ub i is the upper boundary of the parameter and lb i is the lower boundary of the parameter; rand is a random value within [0,1], i representing the ith parameter;
The three types of training data are input into the BP neural network, and the BP neural network outputs the power values corresponding to the parameters;
Step 3: evaluating the power values under all parameters:
the parameter vector with the minimum mean square error, i.e. the corresponding weights and biases, is the optimal parameter and is stored in T_pos;
Step 4: calculating the intensity I, i.e. the parameter concentration:
where the constant in the formula takes the value 2, I_i denotes the degree of parameter concentration, and d_i denotes the distance between the parameter and the other parameters;
updating the decreasing factor α:
where t denotes the current iteration number, C is a constant, and t_max is the maximum number of iterations;
Step 5: updating the parameter x_new and returning to Step 3;
the parameter x_new is calculated as follows:
when R < 0.5:
x_new = x_prey + β·I·F·x_prey + r_1·α·d_i·F·|cos(2πr_2) × [1 - cos(2πr_3)]|
when R ≥ 0.5:
x_new = x_prey + β·I·F·x_prey + r_4·α·d_i·F
where x_new denotes the new parameter; the ability to search for the optimal parameter is defined as β ≥ 1; R, r_1, r_2, r_3 and r_4 are independent random numbers between 0 and 1; and x_prey denotes the current best parameter;
the flag F is used to change the update direction of the parameter value:
Step 6: training the weights and biases within T_pos:
where T_pos denotes the optimal set of weights and biases, W_t+1 denotes the weights at the (t+1)-th training iteration, the defined learning rate lr is also called the step size, Loss is a measure of the difference between the expected and predicted values during training, and W_t denotes the weights at the t-th training iteration;
Step 7: substituting the weights and biases obtained by the training into the following formula and outputting the predicted value Y_p:
where w_ij and b_j are the weights and biases from the input layer to the hidden-layer nodes; w_jk and b_k denote the weights and biases from the hidden-layer nodes to the output-layer nodes; Y_p denotes the predicted value, i.e. the predicted power; the formula applies the activation function of the hidden-layer nodes and the activation function of the output-layer node; m is the number of output-layer nodes, n is the number of input-layer nodes, and i, j and k index the nodes of the input, hidden and output layers, respectively.
Of the two groups of data, the training data account for 80% and the test data for 20%.
The constant C takes a value of 2.
The default value of β is 6.
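The formula images referenced in Steps 1 to 7 are not reproduced in this text. For readability, the expressions below give the forms these steps most plausibly correspond to: min-max normalization to (-1, 1), the mean-square-error fitness, the decreasing factor, intensity and direction flag of the published Honey Badger Algorithm, and an ordinary gradient-descent weight update. They are presumed reconstructions for orientation only, not the patent's verbatim equations.

$$y_n = 2\,\frac{y_m - y_{m,\min}}{y_{m,\max} - y_{m,\min}} - 1,\qquad \mathrm{Fitness}_i = \frac{1}{M}\sum_{d=1}^{M}\bigl(P_p - P_a\bigr)^2$$

$$\alpha = C\,\exp\!\left(-\frac{t}{t_{\max}}\right),\qquad I_i = r\,\frac{S}{4\pi d_i^{2}},\quad S = (x_i - x_{i+1})^2,\quad d_i = x_{prey} - x_i$$

$$F = \begin{cases} 1, & r' \le 0.5\\ -1, & r' > 0.5 \end{cases},\qquad W_{t+1} = W_t - lr\,\frac{\partial \mathrm{Loss}}{\partial W_t}$$

where r and r' are random numbers in [0, 1].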
Advantageous effects
The invention provides a photovoltaic power prediction method based on a neural network and honey badger optimization, and proposes a photovoltaic power generation prediction model based on the honey badger algorithm (HBA) and an enhanced back-propagation (BP) neural network, aimed at the defect that random initial weights and biases seriously affect the performance of a BP neural network. The hourly solar irradiance and the ambient temperature of the photovoltaic cells are used as input data for the prediction model, and the historical hourly power generation (power) is used as the expected output. The HBA is introduced into the BP neural network to optimize the initial weights and biases. The simulation results on various indexes show that the prediction accuracy of the HBA-BP neural network is improved to a certain extent relative to the BP prediction model.
In the prior art, although the power generation of a PV system can be predicted by a BP neural network, the prediction accuracy is affected by the BP structure; specifically, the initial weights and biases are set randomly, resulting in low prediction accuracy. The optimized photovoltaic power generation prediction model (HBA-BP) provided by the invention, which integrates the BP neural network with the HBA, uses the HBA to adjust the weights and biases. Simulation experiments on the photovoltaic data then show, by comparing the prediction results of the HBA-BP and BP models, that the HBA can improve the performance of the BP neural network and obtain more accurate results.
Drawings
Fig. 1: the method flow is as shown in figure 1: (wherein t represents the current number of iterations, and t max is the maximum number of iterations.)
Fig. 2: power prediction comparison test chart
Fig. 3: daily electrical quantity prediction contrast
Fig. 4: prediction error of BP and HBA-BP
Detailed Description
The invention will now be further described with reference to embodiments and figures:
The flow of the method of the embodiment is shown in Fig. 1 (where t denotes the current iteration number and t_max the maximum number of iterations).
(1) Data is prepared and normalized and divided into training and test sets.
Where y_n (n = 1, 2, 3) denotes the normalized parameter data corresponding to the original parameter y_m (m = 1, 2, 3), with -1 < y_n < 1; y_1, y_2 and y_3 are the original parameters (y_1 is the illumination intensity, y_2 is the ambient temperature and y_3 is the power); y_m,min and y_m,max are the lower and upper boundaries of the original parameter, respectively. The normalized parameter data are then randomly divided into two groups, i.e. 80% training data and 20% test data.
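A minimal sketch of step (1) in Python is given below. It assumes the unreproduced normalization formula is the usual min-max scaling to [-1, 1] and that the 80/20 split is done by random permutation; all names are illustrative.

```python
import numpy as np

def normalize_minmax(y_raw):
    """Scale one class of raw data (irradiance, temperature or power) to [-1, 1].

    Assumed form: y_n = 2 * (y_m - y_m_min) / (y_m_max - y_m_min) - 1; the
    patent's own normalization formula is not reproduced in the text.
    """
    y_min, y_max = float(np.min(y_raw)), float(np.max(y_raw))
    return 2.0 * (np.asarray(y_raw, dtype=float) - y_min) / (y_max - y_min) - 1.0

def split_train_test(X, y, train_ratio=0.8, seed=0):
    """Randomly split the normalized samples into 80% training and 20% test data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(train_ratio * len(X))
    return X[idx[:n_train]], y[idx[:n_train]], X[idx[n_train:]], y[idx[n_train:]]

# Illustrative usage: y1 (illumination intensity) and y2 (ambient temperature)
# form the inputs, y3 (power) is the target.
# X = np.column_stack([normalize_minmax(y1), normalize_minmax(y2)])
# y = normalize_minmax(y3)
# X_train, y_train, X_test, y_test = split_train_test(X, y)
```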
(2) The parameters are initialized (hereinafter, a parameter refers to a vector formed by weights and biases, and i denotes the i-th parameter).
x_i = lb_i + rand × (ub_i - lb_i), rand ∈ [0, 1] (2)
where ub_i is the upper boundary of the parameter and lb_i is the lower boundary of the parameter; rand is a random value within [0, 1].
(3) The power values under all parameters (weights and biases) are evaluated and the best parameter values (weights and biases) are saved to T_pos, i.e. the weights and biases corresponding to the minimum mean square error constitute the best parameter.
Fitness_i is the mean square error (MSE) of the i-th power, P_p and P_a denote the predicted and actual values of the output power of the PV system, respectively, d indexes the power values, and M is the total number of power values.
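Step (3)'s fitness can be sketched as the mean square error between predicted and actual power, matching the description of Fitness_i; the function below is illustrative and assumes the predicted powers for a candidate parameter vector have already been computed by the BP forward pass.

```python
import numpy as np

def fitness_mse(p_pred, p_actual):
    """Fitness_i: (1/M) * sum over the M power values of (P_p - P_a)^2 (presumed form)."""
    p_pred = np.asarray(p_pred, dtype=float)
    p_actual = np.asarray(p_actual, dtype=float)
    return float(np.mean((p_pred - p_actual) ** 2))

# The candidate parameter vector with the smallest fitness is stored as T_pos.
```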
(4) The decreasing factor α is updated and the intensity I is calculated.
Where C is a constant with the value 2, I_i denotes the parameter concentration (here, a parameter means a vector formed by weights and biases), d_i denotes the distance between a parameter and the other parameters, and α is a time-varying parameter.
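Since the formula images for I and α are not reproduced, the sketch below uses the corresponding expressions from the published Honey Badger Algorithm (smell intensity I_i = r·S/(4π·d_i²) with S a source-strength term, and α = C·exp(-t/t_max)); treat these as presumed forms rather than the patent's exact equations, and note that distances are taken as Euclidean norms here for simplicity.

```python
import numpy as np

def intensity(pop, x_prey, rng, eps=1e-12):
    """Parameter concentration I_i for each candidate vector (standard HBA form, presumed).

    S_i = ||x_i - x_{i+1}||^2  (source strength between neighbouring candidates)
    d_i = ||x_prey - x_i||     (distance to the current best parameter)
    I_i = r * S_i / (4 * pi * d_i^2),  r ~ U(0, 1)
    """
    S = np.sum((pop - np.roll(pop, -1, axis=0)) ** 2, axis=1)
    d = np.linalg.norm(x_prey - pop, axis=1)
    return rng.random(len(pop)) * S / (4.0 * np.pi * d ** 2 + eps)

def decreasing_factor(t, t_max, C=2.0):
    """Time-varying decreasing factor alpha = C * exp(-t / t_max) (standard HBA form, presumed)."""
    return C * np.exp(-t / t_max)
```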
(5) The parameters (weights and biases) are updated by the following equations and returned to step (3) for evaluation; if the mean square error at x_new is smaller than that of the original T_pos, x_new is assigned to T_pos.
When R < 0.5:
x_new = x_prey + β·I·F·x_prey + r_1·α·d_i·F·|cos(2πr_2) × [1 - cos(2πr_3)]| (5a)
When R ≥ 0.5:
x_new = x_prey + β·I·F·x_prey + r_4·α·d_i·F (5b)
Where x_new denotes the new parameters (weights and biases), the ability to search for the optimal parameters is defined as β ≥ 1 (default = 6), R, r_1, r_2, r_3 and r_4 are independent random numbers between 0 and 1, and x_prey denotes the current best parameters. As a flag, F is used to change the direction of the parameter-value update (the training process of BP includes forward propagation and back propagation).
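The update of step (5) can be sketched directly from equations (5a) and (5b). In the sketch, d_i is taken as x_prey - x_i (the usual HBA definition) and the flag F as ±1 with equal probability, since the F formula image is not reproduced; all names are illustrative.

```python
import numpy as np

def update_position(x_i, x_prey, I_i, alpha, rng, beta=6.0):
    """One position update of a parameter vector (weights and biases), per (5a)/(5b)."""
    d_i = x_prey - x_i                              # distance to the current best parameter
    F = 1.0 if rng.random() <= 0.5 else -1.0        # direction flag (assumed +/-1)
    R, r1, r2, r3, r4 = rng.random(5)
    if R < 0.5:                                     # branch (5a)
        x_new = (x_prey + beta * I_i * F * x_prey
                 + r1 * alpha * d_i * F
                 * abs(np.cos(2 * np.pi * r2) * (1 - np.cos(2 * np.pi * r3))))
    else:                                           # branch (5b)
        x_new = x_prey + beta * I_i * F * x_prey + r4 * alpha * d_i * F
    return x_new
```

If the fitness at x_new is lower than that of the stored best, x_new replaces T_pos, as described above.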
(6) The weights and biases within T_pos are trained.
Where T_pos denotes the optimal set of weights and biases, W_t+1 denotes the weights at the (t+1)-th training iteration, the defined learning rate lr is also called the step size, Loss is a measure of the difference between the expected and predicted values during training, and W_t denotes the weights at the t-th training iteration.
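The weight-update formula of step (6) is not reproduced; the sketch below assumes it is ordinary gradient descent with learning rate lr, i.e. W_(t+1) = W_t - lr * dLoss/dW_t, applied to the weights and biases unpacked from T_pos. The gradient function is a placeholder for standard back-propagation.

```python
def gradient_descent_step(W_t, grad_W, lr=0.1):
    """Presumed update of step (6): W_{t+1} = W_t - lr * dLoss/dW_t."""
    return W_t - lr * grad_W

def train_bp(params, grad_fn, lr=0.1, epochs=100):
    """Repeat the update for the configured number of training iterations (Table 1: 100).

    params:  list of weight/bias arrays initialised from T_pos
    grad_fn: callable returning the back-propagated gradients for those arrays
    """
    for _ in range(epochs):
        grads = grad_fn(params)
        params = [W - lr * dW for W, dW in zip(params, grads)]
    return params
```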
(7) The weights and biases obtained by the training are substituted into the following formula, simulation and prediction are carried out, and the output predicted value Y_p is compared with y_3.
Where w_ij and b_j are the weights and biases from the input layer to the hidden-layer nodes; w_jk and b_k denote the weights and biases from the hidden-layer nodes to the output-layer nodes; Y_p denotes the predicted value, i.e. the predicted power; the formula applies the activation function of the hidden-layer nodes and the activation function of the output-layer node; m is the number of output-layer nodes, n is the number of input-layer nodes, and i, j and k index the nodes of the input, hidden and output layers, respectively.
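A sketch of the forward pass of step (7) follows, under the assumptions of Table 1 (one hidden layer with four nodes) and tanh / identity activations for the hidden and output layers, which the text does not specify; w_ij, b_j, w_jk and b_k follow the notation above.

```python
import numpy as np

def forward(x, w_ij, b_j, w_jk, b_k):
    """Predicted power Y_p = f2( sum_j w_jk * f1( sum_i w_ij * x_i + b_j ) + b_k ).

    x:    input vector (normalized irradiance and ambient temperature)
    w_ij: (n_inputs, n_hidden) weights, b_j: (n_hidden,) hidden biases
    w_jk: (n_hidden, n_outputs) weights, b_k: (n_outputs,) output biases
    f1 is taken as tanh and f2 as the identity (assumed activation functions).
    """
    h = np.tanh(x @ w_ij + b_j)     # hidden-layer activations
    return h @ w_jk + b_k           # predicted (normalized) power Y_p

# Illustrative shapes for a 2-4-1 network: w_ij (2, 4), b_j (4,), w_jk (4, 1), b_k (1,).
```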
Embodiment example
(1) Parameter setting:
In order to clearly demonstrate the scientific soundness and reliability of the proposed network optimization, the BP neural network and the HBA-BP neural network were each trained with 400 sets of data. The remaining 100 sets of data were used as the test set to verify the performance of both algorithms.
TABLE 1 Parameter settings
Parameter | Value | Parameter | Value
---|---|---|---
Input layer | 1 | Data set size | 30
Output layer node | 1 | Maximum number of iterations | 100
Hidden layer | 1 | Dimension of data set | 17
Hidden layer node | 4 | Data setting | 500
Output layer | 1 | Learning rate | 0.1
Output layer node | 1 | Training times | 100
(2) With the hourly average solar irradiance and ambient temperature as inputs and the hourly power generation as the expected output, the prediction results and the overall error are shown in Fig. 2, where the power prediction results of the tested HBA-BP model are presented together with the actual power generation. The prediction performance of the HBA-BP model is shown to be superior to that of the BP model.
(3) To show the performance of BP and HBA-BP more clearly, the 24-hour predictions are shown in Fig. 3;
the daily power predictions of the HBA-BP and BP models are presented together with the actual power generation in Fig. 3. The prediction of HBA-BP is closer to the actual values than that of BP, which indicates that the prediction performance of the HBA-BP model is better than that of the BP model, i.e. its predictions are more accurate.
(4) The prediction errors of the BP model and the HBA-BP model are compared in Fig. 4. The prediction error of BP is clearly higher than that of HBA-BP, so HBA-BP shows better nonlinear fitting capability than the BP neural network prediction model.
(5) To demonstrate the superiority of the HBA-BP prediction model more accurately and efficiently, the mean absolute error (MAE) and the root mean square error (RMSE) were applied to evaluate the performance of the HBA-BP prediction model.
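For reference, the two metrics can be computed as below; the arrays stand for the test-set predictions and measurements and are illustrative placeholders.

```python
import numpy as np

def mae(p_pred, p_actual):
    """Mean absolute error: (1/M) * sum |P_p - P_a|."""
    return float(np.mean(np.abs(np.asarray(p_pred, float) - np.asarray(p_actual, float))))

def rmse(p_pred, p_actual):
    """Root mean square error: sqrt((1/M) * sum (P_p - P_a)^2)."""
    return float(np.sqrt(np.mean((np.asarray(p_pred, float) - np.asarray(p_actual, float)) ** 2)))
```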
Quantitative evaluations of the BP neural network prediction model and the HBA-BP prediction model are shown in Table 2.
TABLE 2 Performance comparison of the HBA-BP prediction model and the BP prediction model
Performance index | BP model | HBA-BP model
---|---|---
Mean Square Error (MSE) | 305.15×10^-6 | 5.585×10^-6
Mean Absolute Error (MAE) | 1.846 | 0.264
Root Mean Square Error (RMSE) | 0.00175 | 0.000236
The results show that after the HBA is introduced into the BP prediction model, all three indexes are greatly reduced. The RMSE value decreases from 0.00175 to 0.000236, a reduction of roughly 7.4 times, which reflects the superior prediction performance and accuracy of the HBA-BP prediction model and promises an even better effect in practical applications.
To address the influence of random weights and biases on the performance of the BP neural network, the proposed photovoltaic power generation prediction model based on the honey badger algorithm and an enhanced back-propagation neural network optimizes the weights and biases to obtain the best effect, thereby improving the prediction performance, i.e. the prediction accuracy, of the neural network. The simulation results for all indexes after optimization show that the prediction accuracy of the HBA-BP neural network is greatly improved relative to the BP prediction model, exhibiting an excellent prediction effect and better satisfying the real-time and accuracy requirements of PV system control.
Claims (4)
1. A photovoltaic power prediction method based on a neural network and honey badger optimization, characterized by comprising the following steps:
Step 1: respectively carrying out normalization processing on three types of original data, wherein the three types of original data comprise illumination intensity y 1, ambient temperature y 2 and power y 3, and obtaining parameter data y n normalized by the three types of original data;
Wherein y n (n=1, 2, 3) represents normalized data corresponding to the raw data y m (m=1, 2, 3) (-1 < y n<1);ym,min and y m,max are the lower and upper boundaries of three types of corresponding raw data, respectively;
Randomly dividing each type of normalized data into two groups, namely training data and test data;
Step 2: initializing parameters of the BP neural network, wherein the parameters are defined as vectors formed by weights and offsets:
xi=lbi+rand×(ubi-lbi);rand∈[0,1]
Where ub i is the upper boundary of the parameter and lb i is the lower boundary of the parameter; rand is a random value within [0,1], i representing the ith parameter;
The three types of training data are input into the BP neural network, and the BP neural network outputs the power values corresponding to the parameters;
Step 3: evaluating the power values under all parameters:
the parameter vector with the minimum mean square error, i.e. the corresponding weights and biases, is the optimal parameter and is stored in T_pos;
Step 4: calculating the intensity I, i.e. the parameter concentration:
where the constant in the formula takes the value 2, I_i denotes the degree of parameter concentration, and d_i denotes the distance between the parameter and the other parameters;
updating the decreasing factor α:
where t denotes the current iteration number, C is a constant, and t_max is the maximum number of iterations;
Step 5: updating the parameter x_new and returning to Step 3;
the parameter x_new is calculated as follows:
when R < 0.5:
x_new = x_prey + β·I·F·x_prey + r_1·α·d_i·F·|cos(2πr_2) × [1 - cos(2πr_3)]|
when R ≥ 0.5:
x_new = x_prey + β·I·F·x_prey + r_4·α·d_i·F
where x_new denotes the new parameter; the ability to search for the optimal parameter is defined as β ≥ 1; R, r_1, r_2, r_3 and r_4 are independent random numbers between 0 and 1; and x_prey denotes the current best parameter;
the flag F is used to change the update direction of the parameter value:
Step 6: training the weights and biases within T_pos:
where T_pos denotes the optimal set of weights and biases, W_t+1 denotes the weights at the (t+1)-th training iteration, the defined learning rate lr is also called the step size; Loss is a measure of the difference between the expected and predicted values during training, and W_t denotes the weights at the t-th training iteration;
Step 7: substituting the weights and biases obtained by the training into the following formula and outputting the predicted value Y_p:
where w_ij and b_j are the weights and biases from the input layer to the hidden-layer nodes; w_jk and b_k denote the weights and biases from the hidden-layer nodes to the output-layer nodes; Y_p denotes the predicted value, i.e. the predicted power; the formula applies the activation function of the hidden-layer nodes and the activation function of the output-layer node; m is the number of output-layer nodes, n is the number of input-layer nodes, and i, j and k index the nodes of the input, hidden and output layers, respectively.
2. The photovoltaic power prediction method based on a neural network and honey badger optimization of claim 1, wherein: of the two groups of data, the training data account for 80% and the test data for 20%.
3. The photovoltaic power prediction method based on a neural network and honey badger optimization of claim 1, wherein: the constant C takes a value of 2.
4. The photovoltaic power prediction method based on a neural network and honey badger optimization of claim 1, wherein:
the default value of β is 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211645456.4A CN116451824B (en) | 2022-12-16 | 2022-12-16 | Photovoltaic power prediction method based on neural network and honey badger optimization
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211645456.4A CN116451824B (en) | 2022-12-16 | 2022-12-16 | Photovoltaic power prediction method based on neural network and honey badger optimization
Publications (2)
Publication Number | Publication Date |
---|---|
CN116451824A CN116451824A (en) | 2023-07-18 |
CN116451824B (en) | 2024-08-20
Family
ID=87134425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211645456.4A Active CN116451824B (en) | 2022-12-16 | Photovoltaic power prediction method based on neural network and honey badger optimization
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116451824B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112149883A (en) * | 2020-09-07 | 2020-12-29 | 南京邮电大学 | Photovoltaic power prediction method based on FWA-BP neural network |
CN113469426A (en) * | 2021-06-23 | 2021-10-01 | 国网山东省电力公司东营供电公司 | Photovoltaic output power prediction method and system based on improved BP neural network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102329590B1 (en) * | 2018-03-19 | 2021-11-19 | 에스알아이 인터내셔널 | Dynamic adaptation of deep neural networks |
2022-12-16: Application CN202211645456.4A filed in China; granted as patent CN116451824B (status: Active).
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |