CN116796639A - Short-term power load prediction method, device and equipment - Google Patents

Short-term power load prediction method, device and equipment

Info

Publication number
CN116796639A
Authority
CN
China
Prior art keywords
differential evolution
coefficients
neural network
coefficient
individuals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310724331.9A
Other languages
Chinese (zh)
Inventor
关艳
高曦莹
陆心怡
孙佳音
杨文烨
曲英男
刘叶
王一苗
闫亦铭
周航
赵健博
蒋婷
蔡亦浓
郭丹
戴菁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Marketing Service Center Of State Grid Liaoning Electric Power Co ltd
Original Assignee
Marketing Service Center Of State Grid Liaoning Electric Power Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Marketing Service Center Of State Grid Liaoning Electric Power Co ltd filed Critical Marketing Service Center Of State Grid Liaoning Electric Power Co ltd
Priority to CN202310724331.9A priority Critical patent/CN116796639A/en
Publication of CN116796639A publication Critical patent/CN116796639A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention provides a short-term power load prediction method, a short-term power load prediction device and short-term power load prediction equipment. The method comprises the steps of: obtaining power demand time series data and decomposing it into a high-pass coefficient and a low-pass coefficient; convolving the high-pass coefficient and the low-pass coefficient with wavelet functions to obtain wavelet coefficients of different frequency bands, and arranging the wavelet coefficients in sequence to obtain an input feature vector; optimizing the wavelet coefficients with the input feature vector through a differential evolution algorithm to obtain a radial basis function neural network model based on the differential evolution optimization algorithm; carrying out parameter adjustment on the radial basis function neural network model based on the differential evolution optimization algorithm; and predicting the load at a future time point by using the parameter-adjusted radial basis function neural network model based on the differential evolution optimization algorithm. In this way, the high-frequency volatility and nonlinearity of load data can be handled, the prediction accuracy is improved, and the method is suitable for load demand prediction at the plant level and above, with good universality and applicability.

Description

Short-term power load prediction method, device and equipment
Technical Field
The present invention relates generally to the field of load prediction, and more particularly, to short-term power load prediction methods, apparatus, and devices.
Background
With the reform of the electric power market, strengthening competitive advantage is of practical significance for industrial enterprises, especially intelligent manufacturing enterprises in machining, metallurgy and similar sectors. The diversification of the electric power market enables industrial enterprises to flexibly purchase electricity at competitive prices. Short-term load prediction is a basic precondition for reducing the cost of energy consumption.
Demand prediction has been a research hotspot for many scholars in recent years. Conventional methods mostly adopt probabilistic models, such as autoregressive moving average (ARMA) methods, multiple linear regression and grey models. However, most conventional model methods are limited in their computation: they cannot address the nonlinearity and missing-data characteristics of power demand data. Modern prediction techniques therefore mostly employ hybrid machine learning methods.
Hybrid machine learning methods achieve the prediction function with good prediction accuracy. Current research includes fuzzy-neural network methods, support vector machine-optimization methods, extreme learning machine-optimization methods, BP neural networks and other combined methods. However, problems such as long computation time, low accuracy and poor robustness remain prominent in short-term industrial load prediction.
Disclosure of Invention
According to an embodiment of the present invention, a short-term power load prediction scheme is provided. The scheme addresses the high-frequency volatility and nonlinearity encountered in processing load data, improves prediction accuracy, is suitable for load demand prediction at the plant level and above, and has good universality and applicability.
In a first aspect of the invention, a short-term power load prediction method is provided. The method comprises the following steps:
acquiring power demand time series data, and decomposing the power demand time series data into a high-pass coefficient and a low-pass coefficient;
convolving the high-pass coefficient and the low-pass coefficient with wavelet functions respectively to obtain wavelet coefficients of different frequency bands, and arranging the wavelet coefficients in sequence to obtain an input feature vector;
optimizing the wavelet coefficient by the input feature vector through a differential evolution algorithm, and constructing a radial basis function neural network model based on the differential evolution optimization algorithm;
calculating the Gaussian function center, the Gaussian function width, and the weights of the hidden units and output units of the radial basis function neural network model based on the differential evolution optimization algorithm, and carrying out parameter adjustment on the radial basis function neural network model based on the differential evolution optimization algorithm;
and predicting the load of a future time point by using the radial basis function neural network model based on the differential evolution optimization algorithm after parameter adjustment.
Further, the decomposing the power demand time series data into a high-pass coefficient and a low-pass coefficient includes:
filtering the original time sequence of the power demand time sequence data for N times to obtain an approximate coefficient and a detail coefficient after each time of filtering, and obtaining N approximate coefficients and N detail coefficients in total; the N approximation coefficients are used as high-pass coefficients; the N detail coefficients are used as low pass coefficients.
Further, the optimizing the wavelet coefficient by the input feature vector through a differential evolution algorithm to obtain a radial basis function neural network model based on the differential evolution optimization algorithm, which comprises the following steps:
initializing a population;
randomly selecting a plurality of different individuals from the population, scaling the vector difference of the selected individuals, and carrying out vector synthesis with the individuals to be mutated to obtain mutation vectors;
performing cross operation on the reference vector and the variation vector;
selecting optimal individuals in the population by utilizing a differential evolution algorithm to obtain offspring individuals;
carrying out adaptability evaluation on the offspring individuals, and screening according to adaptability to obtain elite individuals;
and taking the elite individuals reaching the iteration condition as a radial basis function neural network model based on a differential evolution optimization algorithm.
Further, the calculating the gaussian function center, the gaussian function width and the weights of the hidden units and the output units of the radial basis function neural network model based on the differential evolution optimization algorithm comprises:
clustering the wavelet coefficients of different frequency bands through a K-means clustering algorithm, and taking a clustering center as the center of a Gaussian function;
using the mean square error as a loss function, and calculating to obtain weights of the hidden unit and the output unit through a back propagation algorithm;
the optimal width is selected as the width of the gaussian function by cross-validation.
Further, the predicting the load of the future time point by using the radial basis function neural network model based on the differential evolution optimization algorithm after parameter adjustment comprises the following steps:
and inputting the wavelet coefficients of different frequency bands into a radial basis function neural network model based on a differential evolution optimization algorithm after parameter adjustment, and outputting a load value of a continuous time sequence in the future.
In a second aspect of the invention, a short-term power load prediction apparatus is provided. The device comprises:
the decomposition module is used for acquiring the power demand time series data and decomposing the power demand time series data into a high-pass coefficient and a low-pass coefficient;
the convolution module is used for respectively convolving the high-pass coefficient and the low-pass coefficient with wavelet functions to obtain wavelet coefficients of different frequency bands, and sequentially arranging the wavelet coefficients to obtain input feature vectors;
the optimizing module is used for optimizing the wavelet coefficient through the differential evolution algorithm to obtain a radial basis function neural network model based on the differential evolution optimizing algorithm;
the computing module is used for computing the Gaussian function center, the Gaussian function width and the weights of the hidden units and the output units of the radial basis function neural network model based on the differential evolution optimization algorithm;
and the prediction module is used for predicting the load at a future time point by using the radial basis function neural network model based on the differential evolution optimization algorithm.
Further, the decomposing the power demand time series data into a high-pass coefficient and a low-pass coefficient includes:
filtering the original time sequence of the power demand time sequence data for N times to obtain an approximate coefficient and a detail coefficient after each time of filtering, and obtaining N approximate coefficients and N detail coefficients in total; the N approximation coefficients are used as high-pass coefficients; the N detail coefficients are used as low pass coefficients.
Further, the optimizing the wavelet coefficient by the input feature vector through a differential evolution algorithm to obtain a radial basis function neural network model based on the differential evolution optimization algorithm, which comprises the following steps:
initializing a population;
randomly selecting a plurality of different individuals from the population, scaling the vector difference of the selected individuals, and carrying out vector synthesis with the individuals to be mutated to obtain mutation vectors;
performing cross operation on the reference vector and the variation vector;
selecting optimal individuals in the population by utilizing a differential evolution algorithm to obtain offspring individuals;
carrying out adaptability evaluation on the offspring individuals, and screening according to adaptability to obtain elite individuals;
and taking the elite individuals reaching the iteration condition as a radial basis function neural network model based on a differential evolution optimization algorithm.
Further, the calculating the gaussian function center, the gaussian function width and the weights of the hidden units and the output units of the radial basis function neural network model based on the differential evolution optimization algorithm comprises:
clustering the wavelet coefficients of different frequency bands through a K-means clustering algorithm, and taking a clustering center as the center of a Gaussian function;
using the mean square error as a loss function, and calculating to obtain weights of the hidden unit and the output unit through a back propagation algorithm;
the optimal width is selected as the width of the gaussian function by cross-validation.
Further, the predicting the load of the future time point by using the radial basis function neural network model based on the differential evolution optimization algorithm after parameter adjustment comprises the following steps:
and inputting the wavelet coefficients of different frequency bands into a radial basis function neural network model based on a differential evolution optimization algorithm after parameter adjustment, and outputting a load value of a continuous time sequence in the future.
In a third aspect of the invention, an electronic device is provided. The electronic device comprises at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect of the invention.
In a fourth aspect of the invention, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect of the invention.
It should be understood that the description in this summary is not intended to limit the critical or essential features of the embodiments of the invention, nor is it intended to limit the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
The above and other features, advantages and aspects of embodiments of the present invention will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, wherein like or similar reference numerals denote like or similar elements, in which:
FIG. 1 illustrates a flow chart of a short-term power load prediction method according to an embodiment of the invention;
FIG. 2 shows a schematic diagram of an optimization process for optimizing wavelet coefficients by a differential evolution algorithm according to an embodiment of the present invention;
FIG. 3 illustrates a block diagram of a short-term power load prediction apparatus according to an embodiment of the invention;
FIG. 4 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the invention;
wherein 400 is an electronic device, 401 is a computing unit, 402 is a ROM, 403 is a RAM, 404 is a bus, 405 is an I/O interface, 406 is an input unit, 407 is an output unit, 408 is a storage unit, 409 is a communication unit.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Fig. 1 shows a flow chart of a short-term power load prediction method of an embodiment of the present invention. The DE-RBFNN model represents a radial basis function neural network model based on a differential evolution optimization algorithm.
The method comprises the following steps:
s101, acquiring power demand time series data, and decomposing the power demand time series data into a high-pass coefficient and a low-pass coefficient.
In this embodiment, the user history data obtained from the power system is user load data, which may be sampled at 15-minute intervals in kW, or at 1-hour intervals in MW.
In this embodiment, the original time sequence of the power demand time sequence data is filtered for N times, and an approximation coefficient and a detail coefficient are obtained after each filtering, so that N approximation coefficients and N detail coefficients are obtained in total; the N approximation coefficients are used as high-pass coefficients; the N detail coefficients are used as low pass coefficients.
Specifically, the discrete wavelet transform (DWT) is used as a preprocessing technique to decompose the power demand time series data into the frequency domain and the time domain. This reduces the noise present in the data and makes the power data smoother. The discrete wavelet transform decomposes the signal into a set of approximate high-pass coefficients and low-pass coefficients, and the power data is decomposed into three or more levels of information. For example, with a four-stage decomposition, N = 4, there are 4 approximation coefficients (A1, A2, A3, A4) and 4 detail coefficients (D1, D2, D3, D4), i.e. four of each.
In the above embodiment, the original time sequence is first filtered once by a filter to obtain the first-level approximation coefficient A1 and the first-level detail coefficient D1.
Then, the first-level approximation coefficient A1 and the first-level detail coefficient D1 are filtered a second time by the filter to obtain the second-level approximation coefficient A2 and the second-level detail coefficient D2.
Next, the second-level approximation coefficient A2 and the second-level detail coefficient D2 are filtered a third time by the filter to obtain the third-level approximation coefficient A3 and the third-level detail coefficient D3.
Finally, the third-level approximation coefficient A3 and the third-level detail coefficient D3 are filtered a fourth time by the filter to obtain the fourth-level approximation coefficient A4 and the fourth-level detail coefficient D4.
As an embodiment of the present invention, from the above 4 approximation coefficients (A1, A2, A3, A4) and 4 detail coefficients (D1, D2, D3, D4), the last-level approximation coefficient, namely the fourth-level approximation coefficient A4, together with all of the detail coefficients, is selected. The time series x(t) is then reconstructed as x(t) = A4(t) + D1(t) + D2(t) + D3(t) + D4(t).
in the above embodiment, the power demand time series data is decomposed into the frequency domain and the time domain using DWT as a preprocessing technique. Noise present in the data is reduced and the power data is made smoother.
S102, the high-pass coefficient and the low-pass coefficient are respectively convolved with wavelet functions to obtain wavelet coefficients of different frequency bands, and the wavelet coefficients are sequentially arranged to obtain input feature vectors.
Wavelet decomposition is a technique that decomposes a signal into frequency bands of different scales. The DWT obtains wavelet coefficients for different frequency bands by convolving the signal with a wavelet function. The wavelet coefficient features with different scales are arranged according to a certain sequence to form an input feature vector.
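As an illustration of the feature-vector step, the sketch below simply concatenates the per-band coefficients in a fixed order (coarsest band first). The ordering itself is an assumption; the patent only requires that the coefficients be arranged in a certain sequence.

```python
import numpy as np

def build_feature_vector(bands):
    """Concatenate wavelet coefficients band by band into one flat input vector."""
    return np.concatenate([np.asarray(b, dtype=float).ravel() for b in bands])

# bands would typically be (A4, D4, D3, D2, D1) from the DWT step above:
# feature_vec = build_feature_vector((A4, D4, D3, D2, D1))
```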
And S103, optimizing the wavelet coefficient by the input feature vector through a differential evolution algorithm to obtain the DE-RBFNN model.
DE (Differential Evolution Algorithm) is a differential evolution algorithm. RBFNN (Radial basis function neural network) is a radial basis function neural network.
In this embodiment, as shown in fig. 2, the optimization process for optimizing the wavelet coefficients by the differential evolution algorithm includes:
s201, initializing population p k
Specifically, the population consists of a group of individuals x_i, where each individual x_i represents a set of parameters of the RBF neural network. The initial values are selected randomly according to x_{i,j}^0 = x_j^min + U(0,1)·(x_j^max − x_j^min), where x_{i,j}^0 denotes the j-th value of the i-th individual of the 0th generation, x_j^min and x_j^max are the lower and upper bounds of the j-th parameter, and U(0,1) denotes a random number uniformly distributed in the interval (0,1).
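A minimal sketch of this initialization step, assuming each individual is a flat parameter vector bounded componentwise by `lower` and `upper` (the bounds and population size are illustrative assumptions; the patent only gives the U(0,1) sampling rule):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_population(pop_size, lower, upper):
    """Generation 0: x[i, j] = lower[j] + U(0, 1) * (upper[j] - lower[j])."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    u = rng.uniform(0.0, 1.0, size=(pop_size, lower.size))
    return lower + u * (upper - lower)

# e.g. 20 individuals, each a hypothetical 12-dimensional RBF parameter vector in [-1, 1]
pop = init_population(20, lower=[-1.0] * 12, upper=[1.0] * 12)
```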
S202, randomly selecting a plurality of different individuals from the population, scaling the vector difference of the selected individuals, and carrying out vector synthesis with the individuals to be mutated to obtain mutation vectors.
Specifically, different individuals are randomly selected from the population, their vector difference is scaled, and vector synthesis is carried out with the individual to be mutated, completing the mutation v_j = x_{j1} + F·(x_{j2} − x_{j3}), where x_{j1}, x_{j2}, x_{j3} are 3 different individuals randomly selected from the current population, and F is a scaling factor that determines the step length and speed of the search, with a value range of [0, 1].
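A sketch of this mutation rule in the DE/rand/1 form implied by the formula above: three mutually distinct individuals are drawn, the difference of two of them is scaled by F, and the result is added to the third (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate(pop, i, F=0.5):
    """Mutant vector v_i = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i."""
    candidates = [k for k in range(len(pop)) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```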
S203, performing cross operation on the reference vector and the variation vector.
Specifically, the reference vector and the mutation vector are subjected to the crossover operation; the binomial crossover operator is u_i = v_i if rand_i ≤ CR or i = i_n, and u_i = x_i otherwise, where rand_i is a random number uniformly distributed in the interval [0, 1]; i_n is a random integer uniformly distributed in the interval [1, n]; and CR is the crossover probability, which determines the weight of genetic information before and after mutation, with a value range of [0, 1].
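A sketch of binomial crossover under this rule: each component of the trial vector is taken from the mutant with probability CR, and one randomly chosen index (the i_n above) is always taken from the mutant so the trial differs from its parent:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(x_i, v_i, CR=0.9):
    """Binomial crossover: u_j = v_j if rand_j <= CR or j == j_rand, else x_j."""
    n = x_i.size
    j_rand = rng.integers(n)            # forced index, analogous to i_n in the text
    mask = rng.uniform(size=n) <= CR
    mask[j_rand] = True
    return np.where(mask, v_i, x_i)
```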
S204, selecting the optimal individuals in the population by utilizing a differential evolution algorithm to obtain offspring individuals.
The differential evolution (DE) algorithm adopts a greedy selection mechanism, which ensures that the population always evolves towards the global optimum. The DE algorithm has the advantages of few control parameters, fast convergence and high reliability in solving nonlinear problems, and is widely applied to various problems. However, the DE algorithm also suffers from strong selection pressure on its control parameters, a strong dependence of algorithm performance on those parameters, and a tendency for population individuals to fall into premature convergence, local optima and search stagnation, so it has certain limitations in practical applications.
And S205, carrying out adaptability evaluation on the offspring individuals, and obtaining elite individuals according to adaptability screening.
Specifically, the fitness function value is calculated for each individual. The fitness function may be a measure of the training error, such as the root mean square error (RMSE) or the cross-entropy loss, and fitness is calculated by evaluating the predictive performance of an individual over the training set. For each newly generated individual u_i, its fitness function value is calculated; the newly generated individual u_i is compared with the corresponding parent individual x_i, and the individual with the better fitness is retained as the elite individual.
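A sketch of this greedy selection step, using RMSE on a training set as the fitness measure mentioned above; `predict` is a placeholder for the RBF network forward pass parameterized by an individual and is assumed rather than defined here:

```python
import numpy as np

def rmse_fitness(params, X, y, predict):
    """Root mean square error of the network parameterized by `params` on (X, y)."""
    return float(np.sqrt(np.mean((predict(params, X) - y) ** 2)))

def select(x_i, u_i, X, y, predict):
    """Greedy selection: keep the trial u_i only if its fitness is no worse than x_i's."""
    return u_i if rmse_fitness(u_i, X, y, predict) <= rmse_fitness(x_i, X, y, predict) else x_i
```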
S206, taking the elite individual when the iteration condition is reached as a DE-RBFNN model.
Specifically, it is checked whether the termination condition is satisfied. The termination condition may be that a maximum number of iterations is reached or that the fitness function value reaches a predefined threshold. And finally, selecting the individual with the best adaptability as a final DE-RBFNN model.
S104, calculating the Gaussian function center, the Gaussian function width and the weights of the hidden units and the output units of the DE-RBFNN model, and carrying out parameter adjustment on the DE-RBFNN model.
In this embodiment, the wavelet coefficients in the training set are clustered by a K-means clustering algorithm and the cluster centers are taken as the centers of the Gaussian functions; the mean square error (MSE) is used as the loss function, the weights are updated by the back-propagation algorithm, and the optimal width is selected by cross-validation or trial and error. Different width values may be tried, the model trained and validated for each value, and the width value that performs best on the validation set is then selected. This can be verified more directly by computing the MAD from the relative bias values of all the comparison models, with the proposed model returning the best values on both the training set and the test set.
Gaussian function center: the gaussian function center represents the position of each hidden unit (radial basis function) in the input space. Each hidden unit has a corresponding gaussian function center. These center points determine how responsive the network is to different features and samples in the input space, thereby affecting the representation ability of the network and the fitting ability of the model.
Width of gaussian function: the width of the gaussian function determines the coverage of the radial basis function in the input space. It controls the degree of activation of the radial basis function and the rate of amplitude decay. Smaller widths result in sharper activation function curves, sensitive to local changes in input space; a larger width results in a flatter activation function curve that is more sensitive to the overall characteristics of the input space. The choice of width of the gaussian typically requires tuning during training so that the network can fit the data better and with proper generalization capability.
Weights of the hidden units and the output units: the weights of the hidden units and the output units are used to pass and transform the input signal to the next layer. The weights of the hidden units determine the weighted response of the radial basis functions to the input data, and the weights of the output units determine the calculation of the output values. These weights are adjusted during training by an optimization algorithm (e.g., differential evolution) to minimize the prediction error or loss function of the network. By adjusting the weights, the network can learn the characteristics and patterns of the data for accurate prediction and classification.
In this embodiment, the gaussian center and width determine the radial basis function position and activation range in RBFNN, while the weights of the hidden units and output units determine the network connection and signal transfer. The selection and adjustment of these parameters has a significant impact on the modeling capabilities and performance of RBFNN.
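A sketch of the parameter-setting procedure in S104, assuming scikit-learn's KMeans for the Gaussian centers, a small candidate grid searched by validation error for the width (mirroring the cross-validation / trial-and-error idea), and an ordinary least-squares fit of the output-layer weights as a stand-in for the back-propagation update described above:

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(X, centers, width):
    """Gaussian activations phi[n, k] = exp(-||x_n - c_k||^2 / (2 * width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X_tr, y_tr, X_val, y_val, n_centers=10, widths=(0.1, 0.5, 1.0, 2.0)):
    # Gaussian centers taken from the K-means cluster centers.
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X_tr).cluster_centers_
    best = None
    for w in widths:                                            # trial-and-error width search
        Phi = rbf_design(X_tr, centers, w)
        weights, *_ = np.linalg.lstsq(Phi, y_tr, rcond=None)    # MSE-optimal output weights
        val_mse = float(np.mean((rbf_design(X_val, centers, w) @ weights - y_val) ** 2))
        if best is None or val_mse < best[0]:
            best = (val_mse, w, weights)
    _, width, weights = best
    return centers, width, weights
```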
S105, predicting the load at a future time point by using the parameter-adjusted DE-RBFNN model. After parameters such as the Gaussian function centers, the Gaussian function widths and the weights of the hidden units and output units are adjusted, the wavelet coefficients of different frequency bands are input into the parameter-adjusted DE-RBFNN model, the load values of a future continuous time sequence are output, and the load at future time points is thereby predicted.
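A sketch of this prediction step: given the tuned centers, width and output weights, the RBF network maps a feature vector built from the wavelet coefficients to a load value; repeating this over successive steps yields the future continuous load sequence. The sliding-window framing of the inputs is an assumption, since the patent does not fix how samples are windowed.

```python
import numpy as np

def rbf_predict(x, centers, width, weights):
    """Forward pass of the tuned RBF network for one input feature vector x."""
    phi = np.exp(-((centers - x) ** 2).sum(axis=1) / (2.0 * width ** 2))
    return float(phi @ weights)

# Usage idea: slide a window over the per-band wavelet coefficients, build one feature
# vector per step, and call rbf_predict to obtain the next-step load value.
```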
According to the embodiment of the invention, a short-term load prediction method is provided, which effectively processes high-frequency volatility and nonlinear problems in load data and improves prediction accuracy. The method is suitable for load demand prediction above the plant level, and has better universality and applicability.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are alternative embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
The above description of the method embodiments further describes the solution of the present invention by means of device embodiments.
As shown in fig. 3, the apparatus 300 includes:
a decomposition module 310, configured to obtain power demand time series data, and decompose the power demand time series data into a high-pass coefficient and a low-pass coefficient;
the convolution module 320 is configured to convolve the high pass coefficient and the low pass coefficient with wavelet functions respectively to obtain wavelet coefficients of different frequency bands, and sequentially arrange the wavelet coefficients to obtain an input feature vector;
the optimizing module 330 is configured to optimize the wavelet coefficient by using the input feature vector through a differential evolution algorithm to obtain a DE-RBFNN model;
the calculating module 340 is configured to calculate a gaussian function center, a gaussian function width, and weights of the hidden unit and the output unit of the DE-RBFNN model, and perform parameter adjustment on the DE-RBFNN model;
a prediction module 350 for predicting the load at a future point in time using the parameter-adjusted DE-RBFNN model.
In this embodiment, the decomposition module 310 performs N times of filtering on the original time sequence of the power demand time sequence data, and obtains an approximation coefficient and a detail coefficient after each filtering, so as to obtain N approximation coefficients and N detail coefficients in total; the N approximation coefficients are used as high-pass coefficients; the N detail coefficients are used as low pass coefficients.
In this embodiment, the optimizing module 330 is specifically configured to:
initializing a population;
randomly selecting a plurality of different individuals from the population, scaling the vector difference of the selected individuals, and carrying out vector synthesis with the individuals to be mutated to obtain mutation vectors;
performing cross operation on the reference vector and the variation vector;
selecting optimal individuals in the population by utilizing a differential evolution algorithm to obtain offspring individuals;
carrying out adaptability evaluation on the offspring individuals, and screening according to adaptability to obtain elite individuals;
the elite individuals when the iteration condition is reached are used as DE-RBFNN models.
In this embodiment, the calculating module 340 is specifically configured to:
clustering the wavelet coefficients of different frequency bands through a K-means clustering algorithm, and taking a clustering center as the center of a Gaussian function; using the mean square error as a loss function, and calculating to obtain weights of the hidden unit and the output unit through a back propagation algorithm; the optimal width is selected as the width of the gaussian function by cross-validation.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the described modules may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the technical scheme of the invention, the acquisition, storage and application of the user personal information involved all comply with the provisions of the relevant laws and regulations and do not violate public order and good morals.
According to an embodiment of the present invention, the present invention also provides an electronic device and a readable storage medium.
Fig. 4 shows a schematic block diagram of an electronic device 400 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
The device 400 comprises a computing unit 401 that may perform various suitable actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 402 or loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In RAM 403, various programs and data required for the operation of device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the respective methods and processes described above, for example, the methods S101 to S105. For example, in some embodiments, methods S101-S105 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of methods S101 to S105 described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the methods S101-S105 by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present invention may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A short-term power load prediction method, comprising:
acquiring power demand time series data, and decomposing the power demand time series data into a high-pass coefficient and a low-pass coefficient;
convolving the high-pass coefficient and the low-pass coefficient with wavelet functions respectively to obtain wavelet coefficients of different frequency bands, and arranging the wavelet coefficients in sequence to obtain an input feature vector;
optimizing the wavelet coefficient by the input feature vector through a differential evolution algorithm, and constructing a radial basis function neural network model based on the differential evolution optimization algorithm;
calculating the Gaussian function center, the Gaussian function width, and the weights of the hidden units and output units of the radial basis function neural network model based on the differential evolution optimization algorithm, and carrying out parameter adjustment on the radial basis function neural network model based on the differential evolution optimization algorithm;
and predicting the load of a future time point by using the radial basis function neural network model based on the differential evolution optimization algorithm after parameter adjustment.
2. The method of claim 1, wherein said decomposing the power demand time series data into a high pass coefficient and a low pass coefficient comprises:
filtering the original time sequence of the power demand time sequence data for N times to obtain an approximate coefficient and a detail coefficient after each time of filtering, and obtaining N approximate coefficients and N detail coefficients in total; the N approximation coefficients are used as high-pass coefficients; the N detail coefficients are used as low pass coefficients.
3. The method according to claim 1, wherein optimizing the wavelet coefficients by the input feature vector through a differential evolution algorithm to obtain a radial basis function neural network model based on the differential evolution optimization algorithm comprises:
initializing a population;
randomly selecting a plurality of different individuals from the population, scaling the vector difference of the selected individuals, and carrying out vector synthesis with the individuals to be mutated to obtain mutation vectors;
performing cross operation on the reference vector and the variation vector;
selecting optimal individuals in the population by utilizing a differential evolution algorithm to obtain offspring individuals;
carrying out adaptability evaluation on the offspring individuals, and screening according to adaptability to obtain elite individuals;
and taking the elite individuals reaching the iteration condition as a radial basis function neural network model based on a differential evolution optimization algorithm.
4. The method of claim 1, wherein the calculating the gaussian function center, gaussian function width, and weights of hidden units and output units of the radial basis function neural network model based on the differential evolution optimization algorithm comprises:
clustering the wavelet coefficients of different frequency bands through a K-means clustering algorithm, and taking a clustering center as the center of a Gaussian function;
using the mean square error as a loss function, and calculating to obtain weights of the hidden unit and the output unit through a back propagation algorithm;
the optimal width is selected as the width of the gaussian function by cross-validation.
5. The method of claim 1, wherein predicting the load at the future point in time using the parametric-tuned radial basis function neural network model based on a differential evolutionary optimization algorithm comprises:
and inputting the wavelet coefficients of different frequency bands into a radial basis function neural network model based on a differential evolution optimization algorithm after parameter adjustment, and outputting a load value of a continuous time sequence in the future.
6. A short-term power load prediction apparatus, comprising:
the decomposition module is used for acquiring the power demand time series data and decomposing the power demand time series data into a high-pass coefficient and a low-pass coefficient;
the convolution module is used for respectively convolving the high-pass coefficient and the low-pass coefficient with wavelet functions to obtain wavelet coefficients of different frequency bands, and sequentially arranging the wavelet coefficients to obtain input feature vectors;
the optimizing module is used for optimizing the wavelet coefficient through the differential evolution algorithm to obtain a radial basis function neural network model based on the differential evolution optimizing algorithm;
the computing module is used for computing the Gaussian function center, the Gaussian function width and the weights of the hidden units and the output units of the radial basis function neural network model based on the differential evolution optimization algorithm;
and the prediction module is used for predicting the load at a future time point by using the radial basis function neural network model based on the differential evolution optimization algorithm.
7. The apparatus of claim 6, wherein said decomposing said power demand time series data into a high pass coefficient and a low pass coefficient comprises:
filtering the original time sequence of the power demand time sequence data for N times to obtain an approximate coefficient and a detail coefficient after each time of filtering, and obtaining N approximate coefficients and N detail coefficients in total; the N approximation coefficients are used as high-pass coefficients; the N detail coefficients are used as low pass coefficients.
8. The apparatus of claim 6, wherein the optimizing the wavelet coefficients by the input feature vector through a differential evolution algorithm to obtain a radial basis function neural network model based on the differential evolution optimization algorithm comprises:
initializing a population;
randomly selecting a plurality of different individuals from the population, scaling the vector difference of the selected individuals, and carrying out vector synthesis with the individuals to be mutated to obtain mutation vectors;
performing cross operation on the reference vector and the variation vector;
selecting optimal individuals in the population by utilizing a differential evolution algorithm to obtain offspring individuals;
carrying out adaptability evaluation on the offspring individuals, and screening according to adaptability to obtain elite individuals;
and taking the elite individuals reaching the iteration condition as a radial basis function neural network model based on a differential evolution optimization algorithm.
9. An electronic device comprising at least one processor; and
a memory communicatively coupled to the at least one processor; characterized in that
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-5.
CN202310724331.9A 2023-06-19 2023-06-19 Short-term power load prediction method, device and equipment Pending CN116796639A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310724331.9A CN116796639A (en) 2023-06-19 2023-06-19 Short-term power load prediction method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310724331.9A CN116796639A (en) 2023-06-19 2023-06-19 Short-term power load prediction method, device and equipment

Publications (1)

Publication Number Publication Date
CN116796639A true CN116796639A (en) 2023-09-22

Family

ID=88043363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310724331.9A Pending CN116796639A (en) 2023-06-19 2023-06-19 Short-term power load prediction method, device and equipment

Country Status (1)

Country Link
CN (1) CN116796639A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118194137A (en) * 2024-05-16 2024-06-14 国网江西省电力有限公司南昌供电分公司 Block chain-based carbon emission monitoring method
CN118194137B (en) * 2024-05-16 2024-09-13 国网江西省电力有限公司南昌供电分公司 Block chain-based carbon emission monitoring method
CN118246639A (en) * 2024-05-27 2024-06-25 广东大爱天下能源集团有限公司 Electric power intelligent management method and system based on artificial intelligence
CN118246639B (en) * 2024-05-27 2024-07-30 广东大爱天下能源集团有限公司 Electric power intelligent management method and system based on artificial intelligence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination