CN115660219A - Short-term power load prediction method based on HSNS-BP - Google Patents

Short-term power load prediction method based on HSNS-BP

Info

Publication number
CN115660219A
Authority
CN
China
Prior art keywords
power load
formula
neural network
hsns
user
Prior art date
Legal status
Pending
Application number
CN202211431744.XA
Other languages
Chinese (zh)
Inventor
张振程
薛国红
贾玉进
续欣莹
张喆
薛占平
刘亚龙
王建华
郭文辉
贾璇璇
李雷雷
王政政
Current Assignee
HUAJIN COKING COAL CO Ltd
Original Assignee
HUAJIN COKING COAL CO Ltd
Priority date
Filing date
Publication date
Application filed by HUAJIN COKING COAL CO Ltd filed Critical HUAJIN COKING COAL CO Ltd
Priority to CN202211431744.XA priority Critical patent/CN115660219A/en
Publication of CN115660219A publication Critical patent/CN115660219A/en
Pending legal-status Critical Current

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the field of power systems and relates to a short-term power load prediction method based on HSNS-BP, which can predict the short-term power load. The method comprises the following steps: forming a sample data set from the acquired power load data and its influencing factors; constructing a BP network structure after preprocessing; optimizing the BP neural network parameters with a multi-strategy hybrid social network search algorithm obtained by introducing an adaptive parameter, Cauchy mutation, and Gaussian mutation with a spiral factor; constructing an HSNS-BP prediction model from the optimal network parameters; and predicting the short-term power load.

Description

Short-term power load prediction method based on HSNS-BP
Technical Field
The invention belongs to the field of power systems, and particularly relates to a short-term power load prediction method based on HSNS-BP.
Background
With the development of industry and agriculture and the continuous rise in living standards, society's demand for electric power keeps growing. Developing the power industry consumes enormous investment and energy and strongly influences the national economy; planning the power system reasonably therefore yields large economic as well as social benefits. Conversely, poor planning of a power system can bring irreparable losses to national construction. It is therefore important to analyze and study power planning so as to maximize planning quality, and the first step toward this goal is power load prediction. The accuracy of power load prediction directly influences how reasonable the investment, the network layout, and the operation are. According to the prediction horizon, power load prediction can be classified into long-term, medium-term, and short-term prediction. Short-term power load prediction refers to predicting the power load hours or days into the future.
Traditional short-term power load prediction methods include regression analysis, Kalman filtering, the time-series method, and the like. However, complex nonlinear relationships exist in real data, which these methods handle poorly, so their prediction accuracy is low. With the continuous development of computer technology, machine learning and deep learning have been widely applied to load prediction. An artificial neural network does not need the input-output mapping to be specified in advance and can learn suitable weight parameters from the data automatically. The BP neural network is a multi-layer feedforward network trained by error back-propagation; its training algorithm is called the BP algorithm and uses gradient search to minimize the mean square error between the actual and expected outputs of the network. The initial connection weights affect the training speed and convergence of the network. In a basic BP neural network the weights are set randomly and adjusted continuously in the direction of decreasing error during training, which is time-consuming and gives no guarantee of prediction accuracy.
Disclosure of Invention
In order to overcome the defects in the related art, the invention provides a short-term power load prediction method based on HSNS-BP. In the invention, the improved social network searching algorithm HSNS optimizes the network structure of the BP neural network, thereby better predicting the short-term power load.
To achieve this purpose, the invention provides a short-term power load prediction method based on HSNS-BP, which comprises the following steps: acquiring power load data and its influencing-factor data at a plurality of historical moments in a given place; forming a new sample data set from the extracted power load data and influencing-factor data, and performing normalization after removing abnormal data; training a neural network model optimized by the multi-strategy hybrid social network search algorithm; and predicting the power load values for 5 consecutive days with the trained HSNS-BP model.
Preferably, the improved social network searching algorithm comprises:
1) When the initial network is established, the number of users and the maximum number of iterations are first determined, and the user viewpoints are initialized:
X_0 = LB + rand(1, dim) × (UB − LB)  (1)
In the formula, X_0 is the initial viewpoint vector of each user, i.e. the user viewpoint; dim is the dimension of the viewpoint vector; rand(1, dim) is a 1 × dim random vector with entries in the interval [0, 1]; UB and LB are the upper-bound and lower-bound vectors of the variables, respectively.
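As an illustration only, the initialization of formula 1 can be sketched as follows (a non-authoritative sketch assuming NumPy; the names n_users, dim, lb and ub are illustrative and not taken from the patent):

```python
# Minimal sketch of formula (1): X_0 = LB + rand(1, dim) * (UB - LB)
import numpy as np

def init_viewpoints(n_users: int, dim: int, lb: np.ndarray, ub: np.ndarray) -> np.ndarray:
    """Create the initial viewpoint of every user inside the bounds [lb, ub]."""
    return lb + np.random.rand(n_users, dim) * (ub - lb)

# Example: 50 users, 10-dimensional viewpoints bounded in [0, 1]
views = init_viewpoints(50, 10, np.zeros(10), np.ones(10))
```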
2) In the user emotion selection stage, each user randomly selects one emotion with which to update its viewpoint. The four emotions are imitation, conversation, dispute and innovation.
The update formula of the user viewpoint under imitation is:
X_i,new = X_j + rand(−1, 1) × R  (2)
R = rand(0, 1) × r  (3)
r = X_j − X_i  (4)
In formula 2, X_i,new represents the new viewpoint obtained after the i-th user's viewpoint is updated. In formula 3, R reflects the magnitude of the j-th user's influence in imitation and is taken as a multiple of r, where r represents the radius of influence of the j-th user and is obtained from the difference between the two viewpoints. In formula 4, X_i is the i-th user's viewpoint vector and X_j is the viewpoint vector of a randomly selected j-th user (i ≠ j); rand(−1, 1) and rand(0, 1) are random vectors in the intervals [−1, 1] and [0, 1], respectively.
The update formula of the user viewpoint during conversation is:
X_i,new = X_k + R  (5)
R = rand(0, 1) × D  (6)
D = sign(f_i − f_j) × (X_j − X_i)  (7)
In formula 5, X_i,new is the new viewpoint obtained after the i-th user's viewpoint is updated, and X_k is the viewpoint vector of the randomly selected user who raised the topic of conversation (i ≠ j ≠ k). In formula 6, R is the chat effect, formed by the difference of opinions arising in the conversation, and represents the change in the user's view of the issue. In formula 7, D is the difference between the users' viewpoints; sign() is the sign function, and sign(f_i − f_j) compares f_i and f_j to determine the direction in which X_k moves; f_i and f_j are the fitness values of the viewpoints of user i and user j, and X_i and X_j are the same as in formula 4.
The update formula of the user viewpoint during a dispute is:
X_i,new = X_i + rand(0, 1) × (M − AF × X_i)  (8)
M = (Σ_{t=1}^{N_r} X_t) / N_r  (9)
AF = 1 + round(rand)  (10)
In formula 8, X_i,new represents the new viewpoint obtained after the i-th user's viewpoint is updated. In formula 9, M is the mean of the viewpoints of the N_r users in the discussion group; N_r is the number of viewpoints, i.e. the group size, a random integer between 1 and N_new, where N_new is the set number of network users (the network size). In formula 10, AF is the admission factor, which represents how firmly a user insists on his own opinion when arguing with others; it is a random integer equal to 1 or 2, and round() rounds its input to the nearest integer.
the updating formula of the user viewpoint during innovation is as follows:
Figure BDA0003945293000000031
Figure BDA0003945293000000032
t=rand(0,1) (13)
in equation 11, d denotes a user viewpoint interval [1, dim ]]A d-dimension variable selected at random is arranged in the space,
Figure BDA0003945293000000033
is an updated view of the user view dimension i d,
Figure BDA0003945293000000034
is a current view on the d-th dimension variable proposed by another user (randomly chosen jth user, i ≠ j); in formula 12,ub d And lb d Are the maximum and minimum values of the d-th dimension variable,
Figure BDA0003945293000000035
representing a new perspective of dimension d; in formula 13, t is the same as formula 3.
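The four emotion updates above can be sketched as follows. This is a hedged, illustrative sketch assuming NumPy and the formulas as reconstructed above; the helper arrays views and fit, the bounds lb and ub, and the function names are assumptions for illustration, not the patent's reference implementation:

```python
# Sketches of the four SNS emotion updates for a single user i (formulas 2-13).
import numpy as np

def imitation(views, i, j):
    r = views[j] - views[i]                                           # formula (4)
    R = np.random.rand(views.shape[1]) * r                            # formula (3)
    return views[j] + np.random.uniform(-1, 1, views.shape[1]) * R    # formula (2)

def conversation(views, fit, i, j, k):
    D = np.sign(fit[i] - fit[j]) * (views[j] - views[i])              # formula (7)
    R = np.random.rand(views.shape[1]) * D                            # formula (6)
    return views[k] + R                                               # formula (5)

def disputation(views, i, n_new):
    n_r = np.random.randint(1, n_new + 1)                             # group size in [1, N_new]
    group = np.random.choice(len(views), n_r, replace=False)
    M = views[group].mean(axis=0)                                     # formula (9): mean opinion
    AF = 1 + int(np.rint(np.random.rand()))                           # formula (10): 1 or 2
    return views[i] + np.random.rand(views.shape[1]) * (M - AF * views[i])  # formula (8)

def innovation(views, i, j, lb, ub):
    d = np.random.randint(views.shape[1])                             # random dimension
    t = np.random.rand()                                              # formula (13)
    x_new_d = lb[d] + np.random.rand() * (ub[d] - lb[d])              # formula (12)
    new = views[i].copy()
    new[d] = t * views[j, d] + (1 - t) * x_new_d                      # formula (11)
    return new
```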
3) In the ending phase of user emotion selection, different users generate new opinions under the influence of different emotions. The adaptive update is:
[Formula (14) appears as an image in the original publication; it forms the d-th dimension of user i's new viewpoint, x_i,new^d, from the d-th dimension of the current optimal solution, x_best^d, together with the adaptive parameter α, the difference r and the random number t.]
r = X_i − X_j  (15)
t = rand(0, 1)  (16)
In formula 14, x_i,new^d is the updated value of the d-th dimension of user i's viewpoint and x_best^d is the d-th dimension of the optimal solution. In formula 15, X_i and X_j are the same as in formula 4. In formula 16, t is the same as in formula 3.
The adaptive parameter α is:
[Formula (17) appears as an image in the original publication; α increases linearly with the iteration count, strengthening local search in the later iterations.]
In formula 17, T_max is the maximum number of iterations and t is the current iteration number.
4) The Cauchy mutation comprises:
[Formula (18) appears as an image in the original publication; it perturbs the d-th dimension of the optimal solution, x_best^d, with a Cauchy(0, 1) random variable to obtain x_i,new^d.]
In formula 18, x_i,new^d and x_best^d are the same as in formula 14, and cauchy(0, 1) is the Cauchy random variable generation function:
[Formula (19), the standard Cauchy(0, 1) distribution, appears as an image in the original publication.]
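Purely for illustration, Cauchy(0, 1) random variables can be drawn with the standard inverse-CDF construction tan(π(u − 0.5)); this is an assumed, textbook construction, since the patent's own generation function (formula 19) is reproduced only as an image:

```python
# Illustrative sketch only: drawing Cauchy(0, 1) random variables for the Cauchy mutation step.
import numpy as np

def cauchy_01(size: int) -> np.ndarray:
    u = np.random.rand(size)
    return np.tan(np.pi * (u - 0.5))   # numerically equivalent to np.random.standard_cauchy(size)
```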
5) The Gaussian mutation with a spiral factor comprises:
[Formula (20) appears as an image in the original publication; it combines a Gaussian(0, 1) perturbation of the optimal-solution component x_best^d with the spiral factor z defined in formula 21 to obtain x_i,new^d.]
In formula 20, x_i,new^d and x_best^d are the same as in formula 14, and the spiral factor z is:
z = e^(b×p) × cos(2πp)  (21)
In formula 21, b is the spiral constant factor, taken as 1, and p is a random number in the interval [−1, 1].
6) The replacement-strategy update formula of the user viewpoint is:
X_i = X_i,new, if f(X_i,new) < f(X_i); otherwise X_i is kept  (22)
In formula 22, f is the set fitness function. Because of the different emotions and decision processes, the opinion of every user changes and a new viewpoint may be adopted; whether a new viewpoint is actually shared, however, depends on its quality, which is judged by the fitness function.
Steps 2) to 6) are repeated until the maximum number of iterations is reached, and the optimal solution is output.
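A minimal sketch of the outer HSNS loop with the greedy replacement of formula 22, assuming NumPy; f, views, lb, ub and propose_new_view are illustrative names, and propose_new_view stands in for the emotion and mutation updates described above:

```python
# Sketch of the HSNS iteration loop with greedy, fitness-based replacement (formula 22).
import numpy as np

def hsns_loop(views, f, lb, ub, t_max, propose_new_view):
    fit = np.array([f(x) for x in views])
    for t in range(t_max):
        for i in range(len(views)):
            candidate = np.clip(propose_new_view(views, fit, i, t, t_max), lb, ub)
            fc = f(candidate)
            if fc < fit[i]:                 # formula (22): keep the better viewpoint
                views[i], fit[i] = candidate, fc
    best = int(np.argmin(fit))
    return views[best], fit[best]           # best viewpoint and its fitness
```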
Preferably, the acquired power load data are normalized with a linear function method: all sample data in the new sample data set are normalized into the interval [−1, 1], removing the dimensions (units). The normalization formula is:
x_s = 2(x − x_min) / (x_max − x_min) − 1  (23)
In formula 23, x_s is the normalized value, x is the original value, x_min is the minimum of the training sample data, and x_max is the maximum of the training sample data.
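A short sketch of the linear normalization of formula 23 as reconstructed above (NumPy assumed; taking column-wise minima and maxima is an assumption of this illustration):

```python
# Sketch of formula (23): map every column of the sample matrix into [-1, 1].
import numpy as np

def normalize(x: np.ndarray):
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    x_s = 2 * (x - x_min) / (x_max - x_min) - 1
    return x_s, x_min, x_max   # keep min/max so predictions can be mapped back later
```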
Preferably, the method for constructing the HSNS-BP short-term power load prediction model and optimizing the weights and thresholds of the BP neural network with the HSNS algorithm comprises the following steps:
Step 1: initialize the neural network: determine the number of input-layer nodes, hidden-layer nodes and output-layer nodes from the number of factors influencing the power load in the input samples and from the output power load value; initialize the connection weights among the input layer, the hidden layer and the output layer, and initialize the hidden-layer and output-layer thresholds;
Step 2: set the dimension dim of the user viewpoint vector: each user viewpoint represents one set of network weights and thresholds, and after the dimension information is decoded the corresponding BP neural network model is established:
dim = m × n + n × p + n + p  (24)
In formula 24, m is the number of neurons in the input layer of the BP neural network, n is the number of neurons in the hidden layer, p is the number of neurons in the output layer, and dim is the total number of network weights and thresholds. The user viewpoint can be expressed as:
X_i = [x_1, x_2, x_3, …, x_d, …, x_dim]  (25)
In formula 25, x_d is the d-th element of user viewpoint X_i. When d ∈ [1, m×n], x_d is a connection weight between the input layer and the hidden layer; when d ∈ [1 + m×n, m×n + n], x_d is a hidden-layer threshold; when d ∈ [1 + m×n + n, m×n + n + n×p], x_d is a connection weight between the hidden layer and the output layer; and when d ∈ [1 + m×n + n + n×p, m×n + n×p + n + p], x_d is an output-layer threshold.
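The decoding of a user viewpoint into BP weights and thresholds according to formulas 24 and 25 can be sketched as follows (an illustrative sketch assuming NumPy; the function name decode and the reshaping order are assumptions):

```python
# Sketch: slice a dim-dimensional viewpoint into the BP network's weights and thresholds.
import numpy as np

def decode(view: np.ndarray, m: int, n: int, p: int):
    assert view.size == m * n + n * p + n + p              # dim, formula (24)
    w1 = view[:m * n].reshape(m, n)                        # input -> hidden weights
    b1 = view[m * n:m * n + n]                             # hidden-layer thresholds
    w2 = view[m * n + n:m * n + n + n * p].reshape(n, p)   # hidden -> output weights
    b2 = view[m * n + n + n * p:]                          # output-layer thresholds
    return w1, b1, w2, b2
```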
Step 3: fitness function: the root mean square error is taken as the fitness function f, calculated as:
f = sqrt( (1/N) × Σ_{i=1}^{N} (Y_i − y_i)² )  (26)
In formula 26, Y_i is the actual power load value of the i-th training sample, y_i is the power load value calculated for the i-th training sample, and N is the total number of training samples;
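A sketch of the fitness evaluation of formula 26, using the tansig (tanh) hidden activation and purelin (linear) output stated in the text and the decode() helper sketched above; the batch forward-pass form is an assumption of this illustration:

```python
# Sketch of formula (26): decode a viewpoint, run the BP forward pass, return the RMSE.
import numpy as np

def fitness(view, X, Y, m, n, p):
    w1, b1, w2, b2 = decode(view, m, n, p)
    hidden = np.tanh(X @ w1 + b1)                 # tansig is numerically equivalent to tanh
    y_pred = hidden @ w2 + b2                     # purelin: linear output layer
    return np.sqrt(np.mean((Y - y_pred) ** 2))    # root mean square error, formula (26)
```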
Step 4: obtain the optimal solution vector: call the multi-strategy hybrid social network search algorithm with the root mean square error of formula 26 as the fitness function, and record the user viewpoint vector with the best fitness;
Step 5: establish the HSNS-BP short-term power load prediction model: decode the dimension information of the optimal solution to obtain the weight and threshold vectors of the neural network, and establish the HSNS-BP short-term power load prediction model.
Preferably, the basic parameters set when constructing the HSNS-BP short-term power load prediction model are: the hidden-layer neuron activation function is the tansig function and the output-layer neuron activation function is the purelin function; all weights and thresholds lie in the range [0, 1]; reasonable values of the maximum number of training epochs, the required training accuracy and the learning rate of the HSNS-BP short-term power load prediction model are set according to the size of the training data.
Preferably, the method for inputting the data set of the days to be predicted into the HSNS-BP short-term power load prediction model to obtain the predicted power load values comprises: inputting the sample data set used for training into the constructed HSNS-BP short-term power load prediction model for training; after the optimal training effect is reached, using the five influencing factors of the 5-day power load data to be predicted, namely dry-bulb temperature, dew-point temperature, wet-bulb temperature, humidity and electricity price, as the neural network input, and taking the predicted 5-day power load data as the neural network output, thereby realizing short-term power load prediction.
Compared with traditional short-term power load prediction methods, the invention has the following beneficial effects:
(1) The method improves the accuracy of short-term power load prediction. The weights and thresholds of the BP neural network are optimized with an improved social network search algorithm; the multi-strategy hybrid social network search algorithm introduces an adaptive parameter α that increases linearly with the number of iterations, so that more of the region around the optimal solution is searched and the local search ability becomes stronger in the later iterations, which improves the performance of the BP network structure.
(2) The improved multi-strategy hybrid social network algorithm also uses Cauchy mutation and Gaussian mutation with a spiral factor in combination. The Cauchy mutation operator has good escape ability in both low-dimensional and high-dimensional cases, which helps the algorithm jump out of local optima and strengthens global search. The Gaussian and Cauchy distributions can both be regarded as t-distributions with different degrees of freedom; the Gaussian distribution has narrower tails than the Cauchy distribution and therefore better local search ability. To reduce the risk that the Gaussian mutation itself falls into a local optimum, a spiral factor is introduced so that the mutation searches the space along a spiral path, enlarging the user's exploration range and making it easier for the algorithm to escape local optima. By allocating the two mutations reasonably, a better balance between global and local search capability is obtained.
(3) The improved social network search algorithm has global convergence and search ability in high-dimensional spaces; optimizing the BP neural network with it therefore improves the generalization ability of the network and the convergence accuracy of the neural network. Analysis and evaluation of actual prediction results show that the method achieves better prediction accuracy.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the related art more clearly, the drawings used in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a general flow diagram of a short term power load forecasting method based on HSNS-BP of the present invention;
FIG. 2 is a flow chart of an HSNS-BP optimized BP neural network provided by the present invention;
FIG. 3 is a graph of the convergence of the improved social network search algorithm of the present invention with 7 other intelligent optimization algorithms on a benchmark test function;
FIG. 4 is a comparison of the prediction results and sample values of the HSNS-BP prediction model provided by the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following examples are set forth in detail in conjunction with the accompanying drawings:
in an embodiment of the invention, the invention provides a short-term power load prediction method based on HSNS-BP. As shown in fig. 1, the method for predicting short-term power load based on HSNS-BP includes:
s1, acquiring power load data and influence factor data thereof at a plurality of historical moments in a certain place;
s2, forming a new sample data set by the extracted power load data and the influence factor data of the power load data at the plurality of historical moments, and performing normalization operation after eliminating abnormal data;
s3, training a neural network model optimized by a multi-strategy hybrid social network search algorithm;
and S4, predicting the power load values for 5 consecutive days using the trained HSNS-BP model.
In this embodiment, the power load data is input into the HSNS-BP network structure, and the HSNS-BP network structure performs deep learning according to the power load data.
An improved social network search algorithm is adopted to compute the optimal parameters of the HSNS-BP neural network structure, including: the connection weights between the input layer and the hidden layer, the connection weights between the hidden layer and the output layer, the hidden-layer thresholds and the output-layer thresholds.
The optimized network parameters are then assigned to the BP neural network structure, and the short-term power load is predicted according to the internal logic acquired through deep learning.
In this embodiment, as shown in fig. 2, the step of optimizing the BP neural network by the HSNS-BP includes:
s31, building a BP neural network structure;
s32, optimizing the structural parameters of the BP neural network according to the improved social network searching algorithm;
and S33, inputting the obtained optimal weights and thresholds into the BP neural network and training the optimal HSNS-BP prediction model.
The influencing-factor data of the power load comprise the dry-bulb temperature, dew-point temperature, wet-bulb temperature, humidity and electricity price of the region corresponding to the power equipment. All data are collected every half hour, giving 48 groups of data per day. The sample data set is formed in the order dry-bulb temperature, dew-point temperature, wet-bulb temperature, humidity, electricity price and power load; the five influencing factors form the five inputs of the BP neural network input layer, and the single power load value is the output of the output layer.
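For illustration, assembling the sample data set described above might look like the following sketch (the array name records, its shape and the column order are assumptions, not taken from the patent):

```python
# Sketch: split half-hourly records (48 per day) into network inputs and the load output.
import numpy as np

def build_samples(records: np.ndarray):
    """records: shape (days * 48, 6); columns are the five influence factors followed by the load."""
    X = records[:, :5]   # five inputs of the BP input layer
    y = records[:, 5:]   # one output: the power load
    return X, y
```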
In this embodiment, the improved social network search algorithm includes the following:
When the initial network is established, the number of users and the maximum number of iterations are first determined, and the user viewpoints are initialized:
X_0 = LB + rand(1, dim) × (UB − LB)  (1)
In the formula, X_0 is the initial viewpoint vector of each user, i.e. the user viewpoint; dim is the dimension of the viewpoint vector; rand(1, dim) is a 1 × dim random vector with entries in the interval [0, 1]; UB and LB are the upper-bound and lower-bound vectors of the variables, respectively.
In the user emotion selection stage, each user randomly selects one emotion with which to update its viewpoint. The four emotions are imitation, conversation, dispute and innovation.
The update formula of the user viewpoint under imitation is:
X_i,new = X_j + rand(−1, 1) × R  (2)
R = rand(0, 1) × r  (3)
r = X_j − X_i  (4)
In formula 2, X_i,new represents the new viewpoint obtained after the i-th user's viewpoint is updated. In formula 3, R reflects the magnitude of the j-th user's influence in imitation and is taken as a multiple of r, where r represents the radius of influence of the j-th user and is obtained from the difference between the two viewpoints. In formula 4, X_i is the i-th user's viewpoint vector and X_j is the viewpoint vector of a randomly selected j-th user (i ≠ j); rand(−1, 1) and rand(0, 1) are random vectors in the intervals [−1, 1] and [0, 1], respectively.
The update formula of the user viewpoint during conversation is:
X_i,new = X_k + R  (5)
R = rand(0, 1) × D  (6)
D = sign(f_i − f_j) × (X_j − X_i)  (7)
In formula 5, X_i,new is the new viewpoint obtained after the i-th user's viewpoint is updated, and X_k is the viewpoint vector of the randomly selected user who raised the topic of conversation (i ≠ j ≠ k). In formula 6, R is the chat effect, formed by the difference of opinions arising in the conversation, and represents the change in the user's view of the issue. In formula 7, D is the difference between the users' viewpoints; sign() is the sign function, and sign(f_i − f_j) compares f_i and f_j to determine the direction in which X_k moves; f_i and f_j are the fitness values of the viewpoints of user i and user j, and X_i and X_j are the same as in formula 4.
The update formula of the user viewpoint during a dispute is:
X_i,new = X_i + rand(0, 1) × (M − AF × X_i)  (8)
M = (Σ_{t=1}^{N_r} X_t) / N_r  (9)
AF = 1 + round(rand)  (10)
In formula 8, X_i,new represents the new viewpoint obtained after the i-th user's viewpoint is updated. In formula 9, M is the mean of the viewpoints of the N_r users in the discussion group; N_r is the number of viewpoints, i.e. the group size, a random integer between 1 and N_new, where N_new is the set number of network users (the network size). In formula 10, AF is the admission factor, which represents how firmly a user insists on his own opinion when arguing with others; it is a random integer equal to 1 or 2, and round() rounds its input to the nearest integer.
the updating formula of the user viewpoint during innovation is as follows:
Figure BDA0003945293000000082
Figure BDA0003945293000000083
t=rand(0,1)(13)
in equation 11, d denotes a user viewpoint interval [1, dim ]]A d-dimension variable selected at random is arranged in the space,
Figure BDA00039452930000000815
is an updated view of the user view i dimension d,
Figure BDA00039452930000000816
is a current view on the d-th dimensional variable proposed by another user (randomly selected jth user, i ≠ j); in the formula 12, ub d And lb d Are the maximum and minimum values of the d-th dimensional variable,
Figure BDA00039452930000000817
representing a new perspective of dimension d; in formula 13, t is the same as formula 3.
In the ending phase of user emotion selection, different users generate new opinions under the influence of different emotions. The adaptive update is:
[Formula (14) appears as an image in the original publication; it forms the d-th dimension of user i's new viewpoint, x_i,new^d, from the d-th dimension of the current optimal solution, x_best^d, together with the adaptive parameter α, the difference r and the random number t.]
r = X_i − X_j  (15)
t = rand(0, 1)  (16)
In formula 14, x_i,new^d is the updated value of the d-th dimension of user i's viewpoint and x_best^d is the d-th dimension of the optimal solution. In formula 15, X_i and X_j are the same as in formula 4. In formula 16, t is the same as in formula 3.
The adaptive parameter α is:
[Formula (17) appears as an image in the original publication; α increases linearly with the iteration count, strengthening local search in the later iterations.]
In formula 17, T_max is the maximum number of iterations and t is the current iteration number.
The Cauchy mutation comprises:
[Formula (18) appears as an image in the original publication; it perturbs the d-th dimension of the optimal solution, x_best^d, with a Cauchy(0, 1) random variable to obtain x_i,new^d.]
In formula 18, x_i,new^d and x_best^d are the same as in formula 14, and cauchy(0, 1) is the Cauchy random variable generation function:
[Formula (19), the standard Cauchy(0, 1) distribution, appears as an image in the original publication.]
the gaussian variation with helical factor comprises:
Figure BDA0003945293000000091
in the formula 20, the compound represented by the formula,
Figure BDA0003945293000000092
and
Figure BDA0003945293000000093
by equation 14, the helix factor z is:
z=e b×p ·cos(2πp) (21)
in the formula 21, b is a helical constant factor, and 1, p is a random number in the range of [ -1, 1].
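Since formula 20 is reproduced only as an image, the exact Gaussian-mutation expression is not restated here; the sketch below assumes one plausible form, a Gaussian perturbation of the optimal solution scaled by the spiral factor z of formula 21, purely for illustration:

```python
# Illustrative sketch only: an assumed Gaussian mutation scaled by the spiral factor z.
import numpy as np

def spiral_gaussian_mutation(x_best: np.ndarray, b: float = 1.0) -> np.ndarray:
    p = np.random.uniform(-1, 1)                   # random number in [-1, 1]
    z = np.exp(b * p) * np.cos(2 * np.pi * p)      # spiral factor, formula (21)
    # Assumed mutation form (not the patent's exact formula 20):
    return x_best + z * np.random.randn(x_best.size) * x_best
```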
The replacement-strategy update formula of the user viewpoint is:
X_i = X_i,new, if f(X_i,new) < f(X_i); otherwise X_i is kept  (22)
In formula 22, f is the set fitness function. Because of the different emotions and decision processes, the opinion of every user changes and a new viewpoint may be adopted; whether a new viewpoint is actually shared, however, depends on its quality, which is judged by the fitness function.
The iteration is repeated until the maximum number of iterations is reached, and the optimal solution is output.
As shown in fig. 3, the convergence curves of the multi-strategy hybrid social network search algorithm and 7 other intelligent optimization algorithms on the benchmark test functions are compared: the classical Particle Swarm Optimization (PSO) and the emerging optimization algorithms Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), Sparrow Search Algorithm (SSA), Bat Algorithm (BA), Sine Cosine Algorithm (SCA) and the Social Network Search algorithm (SNS).
In the simulation experiment, the common parameters of the algorithms are set uniformly: population size N = 50 and number of iterations T = 1000. The six benchmark test functions are listed in a table in the original publication and are not reproduced here.
preferably, the method for performing normalization processing on the acquired power load data comprises: all sample data in the new sample data set are normalized, all data are normalized to be between [ -1,1], and dimensions are removed; the formula of the normalization process is:
Figure BDA0003945293000000101
in formula 23, x s Is a normalized value, x min Is a minimum value, x max Is the maximum value.
Preferably, the method for constructing the HSNS-BP short-term power load prediction model and optimizing the weights and thresholds of the BP neural network with the HSNS algorithm comprises the following steps:
Step 1: initialize the neural network: determine the number of input-layer nodes, hidden-layer nodes and output-layer nodes from the number of factors influencing the power load in the input samples and from the output power load value; initialize the connection weights among the input layer, the hidden layer and the output layer, and initialize the hidden-layer and output-layer thresholds;
Step 2: set the dimension dim of the user viewpoint vector: each user viewpoint represents one set of network weights and thresholds, and after the dimension information is decoded the corresponding BP neural network model is established:
dim = m × n + n × p + n + p  (24)
In formula 24, m is the number of neurons in the input layer of the BP neural network, n is the number of neurons in the hidden layer, p is the number of neurons in the output layer, and dim is the total number of network weights and thresholds. The user viewpoint can be expressed as:
X_i = [x_1, x_2, x_3, …, x_d, …, x_dim]  (25)
In formula 25, x_d is the d-th element of user viewpoint X_i. When d ∈ [1, m×n], x_d is a connection weight between the input layer and the hidden layer; when d ∈ [1 + m×n, m×n + n], x_d is a hidden-layer threshold; when d ∈ [1 + m×n + n, m×n + n + n×p], x_d is a connection weight between the hidden layer and the output layer; and when d ∈ [1 + m×n + n + n×p, m×n + n×p + n + p], x_d is an output-layer threshold.
Step 3: fitness function: the root mean square error is taken as the fitness function f, calculated as:
f = sqrt( (1/N) × Σ_{i=1}^{N} (Y_i − y_i)² )  (26)
In formula 26, Y_i is the actual power load value of the i-th training sample, y_i is the power load value calculated for the i-th training sample, and N is the total number of training samples;
Step 4: obtain the optimal solution vector: call the multi-strategy hybrid social network search algorithm with the root mean square error of formula 26 as the fitness function, and record the user viewpoint vector with the best fitness;
Step 5: establish the HSNS-BP short-term power load prediction model: decode the dimension information of the optimal solution to obtain the weight and threshold vectors of the neural network, and establish the HSNS-BP short-term power load prediction model.
Preferably, the basic parameters set when constructing the HSNS-BP short-term power load prediction model are: the hidden-layer neuron activation function is the tansig function and the output-layer neuron activation function is the purelin function; all weights and thresholds lie in the range [0, 1]; reasonable values of the maximum number of training epochs, the required training accuracy and the learning rate of the HSNS-BP short-term power load prediction model are set according to the size of the training data.
Preferably, step S4 comprises:
S41, inputting the data set of the days to be predicted into the HSNS-BP short-term power load prediction model for prediction; the predicted power load values are obtained as follows:
S42, inputting the sample data set used for training into the constructed HSNS-BP short-term power load prediction model for training;
S43, after the optimal training effect is reached, using the five influencing factors of the 5-day power load data to be predicted, namely dry-bulb temperature, dew-point temperature, wet-bulb temperature, humidity and electricity price, as the neural network input, and taking the predicted 5-day power load data as the neural network output, thereby realizing short-term power load prediction.
As shown in fig. 4, the prediction results of the HSNS-BP short-term power load prediction model are compared with the sample data collected on the days to be predicted and with the results of a prediction model that optimizes the BP neural network by gradient descent; the prediction effect of the HSNS-BP short-term power load prediction model is clearly better.
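A hedged end-to-end sketch of this embodiment, tying together the illustrative helpers from the earlier sketches (build_samples, normalize, init_viewpoints, the four emotion functions, hsns_loop, fitness and decode); the random emotion selector, the hidden-layer size and the dummy data are assumptions for demonstration and are not the patent's settings:

```python
# End-to-end sketch: prepare data, optimize BP weights/thresholds with HSNS, decode the model.
import numpy as np

records = np.random.rand(40 * 48, 6)        # placeholder for 40 days of half-hourly data
X, y = build_samples(records)               # five influence factors + load
X_s, _, _ = normalize(X)
y_s, y_min, y_max = normalize(y)            # keep y_min/y_max to map predictions back

m, n, p = 5, 10, 1                          # 5 inputs, assumed 10 hidden neurons, 1 output
dim = m * n + n * p + n + p                 # formula (24)
lb, ub = np.zeros(dim), np.ones(dim)        # weights/thresholds in [0, 1], as in the text

def propose(views, fit, i, t, t_max):
    """Pick one of the four emotions at random and produce a candidate viewpoint."""
    others = [u for u in range(len(views)) if u != i]
    j = int(np.random.choice(others))
    mood = np.random.randint(4)
    if mood == 0:
        return imitation(views, i, j)
    if mood == 1:
        k = int(np.random.choice([u for u in others if u != j]))
        return conversation(views, fit, i, j, k)
    if mood == 2:
        return disputation(views, i, n_new=len(views))
    return innovation(views, i, j, lb, ub)

f = lambda v: fitness(v, X_s, y_s, m, n, p)                       # RMSE, formula (26)
best_view, best_err = hsns_loop(init_viewpoints(50, dim, lb, ub),
                                f, lb, ub, t_max=200, propose_new_view=propose)
w1, b1, w2, b2 = decode(best_view, m, n, p)                       # HSNS-BP model parameters
```

The decoded weights and thresholds then define the HSNS-BP model used to predict the 5-day load, with the outputs mapped back through the stored normalization bounds.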
The particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A short-term power load forecasting method based on HSNS-BP is characterized by comprising the following steps:
step 1: acquiring dry bulb temperature, dew point temperature, wet bulb temperature, humidity, electricity price and power load data of a plurality of historical moments in a certain place;
step 2: combining the extracted dry-bulb temperature, dew point temperature, wet-bulb temperature, humidity, electricity price and power load data of a plurality of historical moments into a new sample data set, and normalizing the data by using a linear function method;
and step 3: obtaining a multi-strategy hybrid social network search algorithm by introducing an adaptive parameter, Cauchy mutation and Gaussian mutation with a spiral factor; constructing a short-term power load prediction model that optimizes the BP neural network using the linearly normalized data samples from step 2; and training the BP neural network with the 48 daily groups of sample data from the 40 days preceding the day to be predicted, each group consisting of dry-bulb temperature, dew-point temperature, wet-bulb temperature, electricity price and power load data at a plurality of historical moments; during training of the HSNS-BP short-term power load prediction model, the weights and thresholds of the BP neural network are optimized with the HSNS algorithm;
and step 4: inputting the 5-day data to be predicted into the HSNS-BP short-term power load prediction model for prediction, and obtaining the predicted 5-day power load values.
2. The HSNS-BP based short-term power load prediction method according to claim 1, wherein the multi-strategy hybrid social network search algorithm comprises the following specific steps:
1) When the initial network is established, the number of users and the maximum number of iterations are first determined, and the user viewpoints are initialized:
X_0 = LB + rand(1, dim) × (UB − LB)  (1)
In the formula, X_0 is the initial viewpoint vector of each user, i.e. the user viewpoint; dim is the dimension of the viewpoint vector; rand(1, dim) is a 1 × dim random vector with entries in the interval [0, 1]; UB and LB are the upper-bound and lower-bound vectors of the variables, respectively;
2) In the user emotion selection stage, each user randomly selects one emotion with which to update its viewpoint vector; the four emotions are imitation, conversation, dispute and innovation;
3) In the ending phase of user emotion selection, different users generate new opinions under the influence of different emotions; the adaptive update is:
[Formula (2) appears as an image in the original publication; it forms the d-th dimension of user i's new viewpoint, x_i,new^d, from the d-th dimension of the current optimal solution, x_best^d, together with the adaptive parameter α, the difference r and the random number t.]
r = X_i − X_j  (3)
t = rand(0, 1)  (4)
In formula 2, x_i,new^d is the updated value of the d-th dimension of user i's viewpoint and x_best^d is the d-th dimension of the optimal solution; in formula 3, X_i is the i-th user's viewpoint vector and X_j is the viewpoint vector of a randomly selected j-th user (i ≠ j); in formula 4, t is a random variable in the interval [0, 1];
wherein the adaptive parameter α is:
[Formula (5) appears as an image in the original publication; α increases linearly with the iteration count.]
In formula 5, T_max is the maximum number of iterations and t is the current iteration number;
4) The Cauchy mutation comprises:
[Formula (6) appears as an image in the original publication; it perturbs the d-th dimension of the optimal solution, x_best^d, with a Cauchy(0, 1) random variable to obtain x_i,new^d.]
In formula 6, x_i,new^d and x_best^d are the same as in formula 2, and cauchy(0, 1) is the Cauchy random variable generation function:
[Formula (7), the standard Cauchy(0, 1) distribution, appears as an image in the original publication.]
5) The Gaussian mutation with a spiral factor comprises:
[Formula (8) appears as an image in the original publication; it combines a Gaussian(0, 1) perturbation of the optimal-solution component x_best^d with the spiral factor z defined in formula 9 to obtain x_i,new^d.]
In formula 8, x_i,new^d and x_best^d are the same as in formula 2, and the spiral factor z is:
z = e^(b×p) × cos(2πp)  (9)
In formula 9, b is the spiral constant factor, taken as 1, and p is a random number in the interval [−1, 1];
6) The replacement-strategy update formula of the user viewpoint is:
X_i = X_i,new, if f(X_i,new) < f(X_i); otherwise X_i is kept  (10)
In formula 10, f is the set fitness function; because of the different emotions and decision processes, the opinion of every user changes and a new viewpoint may be adopted; whether a new viewpoint is actually shared, however, depends on its quality, which is judged by the fitness function;
7) Steps 2) to 6) are repeated until the maximum number of iterations is reached, and the optimal solution is output.
3. The HSNS-BP based short-term power load prediction method according to claim 1, wherein the linear normalization of the data in step 2 is performed as follows: all sample data in the new sample data set are normalized into the range [−1, 1], removing the dimensions (units); the normalization formula is:
x_s = 2(x − x_min) / (x_max − x_min) − 1  (11)
In formula 11, x_s is the normalized value, x is the original value, x_min is the minimum of the training sample data, and x_max is the maximum of the training sample data.
4. The method according to claim 1, wherein the HSNS-BP short-term power load prediction model is constructed in step 3, and the method of applying the HSNS algorithm to optimize the weights and thresholds of the BP neural network comprises:
Step 1: initialize the neural network: determine the number of input-layer nodes, hidden-layer nodes and output-layer nodes from the number of factors influencing the power load in the input samples and from the output power load value; initialize the connection weights among the input layer, the hidden layer and the output layer, and initialize the hidden-layer and output-layer thresholds;
Step 2: set the dimension dim of the user viewpoint vector: each user viewpoint represents one set of network weights and thresholds, and after the dimension information is decoded the corresponding BP neural network model is established:
dim = m × n + n × p + n + p  (12)
In formula 12, m is the number of neurons in the input layer of the BP neural network, n is the number of neurons in the hidden layer, p is the number of neurons in the output layer, and dim is the total number of network weights and thresholds; the user viewpoint can be expressed as:
X_i = [x_1, x_2, x_3, …, x_d, …, x_dim]  (13)
In formula 13, x_d is the d-th element of user viewpoint X_i; when d ∈ [1, m×n], x_d is a connection weight between the input layer and the hidden layer; when d ∈ [1 + m×n, m×n + n], x_d is a hidden-layer threshold; when d ∈ [1 + m×n + n, m×n + n + n×p], x_d is a connection weight between the hidden layer and the output layer; and when d ∈ [1 + m×n + n + n×p, m×n + n×p + n + p], x_d is an output-layer threshold;
Step 3: fitness function: the root mean square error is taken as the fitness function f, calculated as:
f = sqrt( (1/N) × Σ_{i=1}^{N} (Y_i − y_i)² )  (14)
In formula 14, Y_i is the actual power load value of the i-th training sample, y_i is the power load value calculated for the i-th training sample, and N is the total number of training samples;
Step 4: obtain the optimal solution vector: call the multi-strategy hybrid social network search algorithm with the root mean square error of formula 14 as the fitness function, and record the user viewpoint vector with the best fitness;
Step 5: establish the HSNS-BP short-term power load prediction model: decode the dimension information of the optimal solution to obtain the weight and threshold vectors of the neural network, and establish the HSNS-BP neural network prediction model.
5. The method for optimizing weight and threshold of BP neural network by applying HSNS algorithm as claimed in claim 4, wherein the hidden layer neuron activation function in step 1 is tansig function, the output layer neuron activation function is purelin function; all weights and thresholds range from 0, 1.
6. The HSNS-BP based short-term power load forecasting method according to claim 1, wherein the implementation method of step 4 is: inputting a sample data set participating in training into the constructed HSNS-BP neural network for training; after the optimal training effect is achieved, 5 influencing factors of the 5-day power load data to be predicted, namely dry-bulb temperature, dew-point temperature, wet-bulb temperature, humidity and electricity price are used as neural network input, the 5-day power load data obtained through prediction are used as output of the neural network, and short-term power load prediction is achieved.
CN202211431744.XA 2022-11-16 2022-11-16 Short-term power load prediction method based on HSNS-BP Pending CN115660219A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211431744.XA CN115660219A (en) 2022-11-16 2022-11-16 Short-term power load prediction method based on HSNS-BP

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211431744.XA CN115660219A (en) 2022-11-16 2022-11-16 Short-term power load prediction method based on HSNS-BP

Publications (1)

Publication Number Publication Date
CN115660219A true CN115660219A (en) 2023-01-31

Family

ID=85021076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211431744.XA Pending CN115660219A (en) 2022-11-16 2022-11-16 Short-term power load prediction method based on HSNS-BP

Country Status (1)

Country Link
CN (1) CN115660219A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117669390A (en) * 2024-02-01 2024-03-08 中国石油大学(华东) Metal full-stage fatigue crack growth prediction method and system based on neural network


Legal Events

Date Code Title Description
PB01 Publication