CN111697621B - Short-term wind power prediction method based on EWT-PDBN combination

Short-term wind power prediction method based on EWT-PDBN combination

Info

Publication number
CN111697621B
CN111697621B
Authority
CN
China
Prior art keywords
data
wind power
prediction
historical
value
Prior art date
Legal status
Active
Application number
CN202010581545.1A
Other languages
Chinese (zh)
Other versions
CN111697621A (en)
Inventor
王硕禾
张嘉姗
郭威
常宇健
蔡承才
刘晗
Current Assignee
Shijiazhuang Tiedao University
Original Assignee
Shijiazhuang Tiedao University
Priority date
Filing date
Publication date
Application filed by Shijiazhuang Tiedao University
Priority to CN202010581545.1A
Publication of CN111697621A
Application granted
Publication of CN111697621B
Status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00 Circuit arrangements for ac mains or ac distribution networks
    • H02J3/38 Arrangements for parallely feeding a single network by two or more generators, converters or transformers
    • H02J3/381 Dispersed generators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J2300/00 Systems for supplying or distributing electric power characterised by decentralized, dispersed, or local generation
    • H02J2300/20 The dispersed energy generation being of renewable origin
    • H02J2300/28 The renewable source being wind energy
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A30/00 Adapting or protecting infrastructure or their operation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00 Energy generation through renewable energy sources
    • Y02E10/70 Wind energy
    • Y02E10/76 Power conversion electric or electronic aspects


Abstract

The invention provides a short-term wind power prediction method based on an EWT-PDBN combination, which comprises the following steps: A. collecting numerical weather forecast data and historical wind power data of a wind power plant; B. preprocessing and normalizing all collected data; C. decomposing the normalized historical average wind power data with the empirical wavelet transform signal decomposition technique; D. performing correlation screening on the decomposed intrinsic mode function subsequences, then taking each screened subsequence together with the other normalized data as input to a particle swarm optimized deep belief network model for prediction, obtaining one group of prediction data per subsequence; E. superposing the groups of prediction data to reconstruct a single series, then applying inverse normalization, the result serving as the final wind power prediction. Through the combined EWT-PDBN prediction, the method obtains a wind power prediction result with high precision and small error.

Description

Short-term wind power prediction method based on EWT-PDBN combination
Technical Field
The invention relates to the technical field of wind power prediction, in particular to a short-term wind power prediction method based on an EWT-PDBN combination.
Background
Compared with traditional energy sources, wind energy is an intermittent new energy source with broad development prospects, but continuous changes in the climate environment make wind speed and wind power output volatile and random. In recent years, the installed wind power capacity in China has grown steadily; the instability of wind power weakens the controllability of electric energy in power systems containing wind power, and large-scale wind power integration brings many problems to the safe and stable operation of the power system, which restricts the further development of wind power. Although some methods exist to address these problems, most of them sacrifice the economy of system operation, for example by adding spare system capacity. Accurate prediction of wind speed or wind power can therefore largely resolve the problems caused by the inherent characteristics of wind power, reduce the impact of wind power non-stationarity on the grid, and save power system operating costs.
Conventional wind farm power prediction can be divided, by modeling method, into physical, statistical and intelligent methods; by prediction time scale, into long-term, medium-term, short-term and ultra-short-term prediction; and by prediction mode, into direct and indirect prediction. With the continuous development of artificial intelligence, neural networks, as a machine learning technique, have shown good performance in time-series prediction. However, a single neural network is a shallow prediction model: although highly adaptable, it cannot fully extract the deep features and internal laws of wind power data, so its prediction precision is low and errors arise easily.
Disclosure of Invention
The invention aims to provide a short-term wind power prediction method based on an EWT-PDBN combination, so as to solve the problems of low prediction precision and large prediction errors in the prior art.
The invention is realized by the following steps: a short-term wind power prediction method based on an EWT-PDBN combination comprises the following steps:
A. collecting numerical weather forecast data and historical wind power data of a wind field, wherein the numerical weather forecast data comprises wind speed, temperature, air pressure, mean-sea-level atmospheric pressure and relative humidity, and the historical wind power data comprises historical maximum wind power data, historical minimum wind power data and historical average wind power data;
B. preprocessing all collected data, and then normalizing all the preprocessed data;
C. decomposing the normalized historical average wind power data by using an empirical wavelet transform signal decomposition technology, and carrying out stabilization treatment to obtain a plurality of groups of subsequences with different characteristic frequencies;
D. performing correlation screening on the different decomposed subsequences to screen out n groups of subsequences, then respectively using the screened n groups of subsequences and numerical weather forecast data, historical maximum wind power data and historical minimum wind power data which are subjected to normalization processing as input data together, and predicting in a particle swarm optimization deep belief network model to obtain n groups of prediction data;
E. and superposing the n groups of prediction data to reconstruct a group of data, then carrying out inverse normalization processing on the group of data, taking the obtained result as a final wind power prediction result, and then carrying out prediction result analysis according to the error evaluation index.
In the step A, preprocessing the collected numerical weather forecast data and the collected historical wind power data, namely performing abnormal value processing, wherein the specific processing method comprises the following steps:
(I) setting the time resolution of the numerical weather forecast data and the historical wind power data to 15 min;
(II) supplementing missing data with the data at the previous moment;
(III) replacing historical power data smaller than 0 with 0;
(IV) replacing abnormal data by interpolation with the average value of the adjacent moments;
(V) replacing historical power data values that exceed the threshold with the rated power value of the wind turbine.
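By way of illustration only, a minimal pandas-style sketch of rules (I)-(V) follows; the column names, the resampling call, the abnormal-value flag and the rated-power argument are assumptions for illustration and are not specified in the patent.

```python
import pandas as pd

def preprocess(df: pd.DataFrame, rated_power: float) -> pd.DataFrame:
    """Apply rules (I)-(V) to a frame holding NWP and historical power columns."""
    df = df.resample("15min").mean()                       # (I) 15-min time resolution
    df = df.ffill()                                        # (II) fill missing values with the previous moment
    df["power_avg"] = df["power_avg"].clip(lower=0.0)      # (III) negative power -> 0
    # (IV) replace flagged abnormal points with the mean of the adjacent moments
    if "abnormal" in df.columns:
        bad = df["abnormal"].astype(bool)
        interp = (df["power_avg"].shift(1) + df["power_avg"].shift(-1)) / 2
        df.loc[bad, "power_avg"] = interp[bad]
    df["power_avg"] = df["power_avg"].clip(upper=rated_power)  # (V) cap at the rated turbine power
    return df
```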
In step B, all data values are normalized and scaled to the interval [0, 1]; the normalization formula is:

$$x_g=\frac{x-x_{min}}{x_{max}-x_{min}}\qquad(1)$$

where x_g is the normalized datum, and x, x_min and x_max are the data value, the minimum of the data and the maximum of the data, respectively.

In step E, the series obtained by superposition and reconstruction is inverse-normalized so that it regains its physical meaning; the inverse normalization formula is:

$$x_f=x'\left(x_{max}-x_{min}\right)+x_{min}\qquad(2)$$

where x' is the output value of the EWT-PDBN prediction model and x_f is the wind power prediction value obtained by inverse normalization.
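A small sketch of formulas (1) and (2), assuming plain per-series min-max scaling:

```python
import numpy as np

def normalize(x):
    """Min-max normalization, formula (1). Returns the scaled series and the
    (min, max) pair needed later for inverse normalization."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), (x_min, x_max)

def denormalize(x_prime, x_min, x_max):
    """Inverse normalization of the model output, formula (2)."""
    return x_prime * (x_max - x_min) + x_min
```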
In step C, decomposing the normalized historical average wind power data with non-stationarity by using an EWT algorithm to obtain n subsequences of different modal components, where each subsequence is defined as a group of amplitude and frequency modulated signals, and the implementation of the EWT algorithm includes the following steps:
(C1) Partition the Fourier support [0, π] adaptively into N contiguous segments, with ω_n denoting the boundary between adjacent segments; each segment is Λ_n = [ω_{n-1}, ω_n], and around each ω_n a transition region of width 2τ_n is defined.

(C2) On each segment Λ_n, following the Littlewood-Paley and Meyer wavelet construction, the empirical wavelet function and the empirical scale function are defined in the frequency domain by formula (3) and formula (4):

$$\hat{\psi}_n(\omega)=\begin{cases}1, & \omega_n+\tau_n\le|\omega|\le\omega_{n+1}-\tau_{n+1}\\ \cos\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-\omega_{n+1}+\tau_{n+1}}{2\tau_{n+1}}\right)\right], & \omega_{n+1}-\tau_{n+1}\le|\omega|\le\omega_{n+1}+\tau_{n+1}\\ \sin\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-\omega_n+\tau_n}{2\tau_n}\right)\right], & \omega_n-\tau_n\le|\omega|\le\omega_n+\tau_n\\ 0, & \text{otherwise}\end{cases}\qquad(3)$$

$$\hat{\phi}_n(\omega)=\begin{cases}1, & |\omega|\le\omega_n-\tau_n\\ \cos\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-\omega_n+\tau_n}{2\tau_n}\right)\right], & \omega_n-\tau_n\le|\omega|\le\omega_n+\tau_n\\ 0, & \text{otherwise}\end{cases}\qquad(4)$$

where τ_n = γω_n and the parameter γ satisfies

$$\gamma<\min_n\frac{\omega_{n+1}-\omega_n}{\omega_{n+1}+\omega_n}.$$

The function β(x) satisfies formula (5) and formula (6):

$$\beta(x)=\begin{cases}0, & x\le 0\\ 1, & x\ge 1\end{cases}\qquad(5)$$

$$\beta(x)+\beta(1-x)=1,\quad\forall x\in[0,1]\qquad(6)$$

There are many functions satisfying these properties; the most commonly used is β(x) = x⁴(35 − 84x + 70x² − 20x³). For any n > 0, with τ_n = γω_n the empirical wavelet function and the scale function reduce to formula (7) and formula (8):

$$\hat{\psi}_n(\omega)=\begin{cases}1, & (1+\gamma)\omega_n\le|\omega|\le(1-\gamma)\omega_{n+1}\\ \cos\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-(1-\gamma)\omega_{n+1}}{2\gamma\omega_{n+1}}\right)\right], & (1-\gamma)\omega_{n+1}\le|\omega|\le(1+\gamma)\omega_{n+1}\\ \sin\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-(1-\gamma)\omega_n}{2\gamma\omega_n}\right)\right], & (1-\gamma)\omega_n\le|\omega|\le(1+\gamma)\omega_n\\ 0, & \text{otherwise}\end{cases}\qquad(7)$$

$$\hat{\phi}_n(\omega)=\begin{cases}1, & |\omega|\le(1-\gamma)\omega_n\\ \cos\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-(1-\gamma)\omega_n}{2\gamma\omega_n}\right)\right], & (1-\gamma)\omega_n\le|\omega|\le(1+\gamma)\omega_n\\ 0, & \text{otherwise}\end{cases}\qquad(8)$$

(C3) Determine the detail coefficients and the approximation coefficients of the empirical wavelet transform from the empirical wavelet function and the scale function. Denoting the Fourier transform and its inverse by F[·] and F⁻¹[·], the detail coefficients W_f^ε(n, t) are obtained as the inner product of the signal with the empirical wavelet function:

$$W_f^{\varepsilon}(n,t)=\langle f,\psi_n\rangle=\int f(\tau)\,\overline{\psi_n(\tau-t)}\,d\tau=F^{-1}\left[\hat{f}(\omega)\,\overline{\hat{\psi}_n(\omega)}\right]\qquad(9)$$

where ⟨·,·⟩ denotes the inner product, ψ_n(t) is the empirical wavelet function, $\hat{\psi}_n(\omega)$ its Fourier transform and $\overline{\psi_n(t)}$ its complex conjugate. The approximation coefficients W_f^ε(0, t) are obtained as the inner product of the signal with the scale function:

$$W_f^{\varepsilon}(0,t)=\langle f,\phi_1\rangle=\int f(\tau)\,\overline{\phi_1(\tau-t)}\,d\tau=F^{-1}\left[\hat{f}(\omega)\,\overline{\hat{\phi}_1(\omega)}\right]\qquad(10)$$

where φ_1(t) is the empirical scale function, $\hat{\phi}_1(\omega)$ its Fourier transform and $\overline{\phi_1(t)}$ its complex conjugate.

(C4) Reconstruct the original signal from the detail coefficients and the approximation coefficients of the empirical wavelet transform:

$$f(t)=W_f^{\varepsilon}(0,t)*\phi_1(t)+\sum_{n=1}^{N}W_f^{\varepsilon}(n,t)*\psi_n(t)\qquad(11)$$

where "∗" denotes convolution. The original signal is accordingly decomposed into empirical mode components:

$$f_0(t)=W_f^{\varepsilon}(0,t)*\phi_1(t),\qquad f_k(t)=W_f^{\varepsilon}(k,t)*\psi_k(t),\quad k=1,\dots,N\qquad(12)$$
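As an illustration of formulas (9)-(12), a minimal numpy sketch is given below; it assumes the empirical filter bank (the ψ̂_n and φ̂_1 of formulas (7)-(8)) has already been built and sampled on the same FFT grid as the signal, and that the filters form a tight frame so that the modes sum back to the signal.

```python
import numpy as np

def ewt_modes(signal, psi_hats, phi1_hat):
    """Empirical mode components following formulas (9)-(12).
    signal: 1-D real array (the normalized historical average power series).
    psi_hats: list of frequency responses of the empirical wavelets.
    phi1_hat: frequency response of the first empirical scale function."""
    f_hat = np.fft.fft(signal)
    # approximation: W(0,t) = F^-1[ f_hat * conj(phi1_hat) ] (formula 10),
    # then f_0 = W(0,.) * phi_1, i.e. one more multiplication by phi1_hat (formula 12)
    modes = [np.real(np.fft.ifft(f_hat * np.conj(phi1_hat) * phi1_hat))]
    # details: W(n,t) = F^-1[ f_hat * conj(psi_hat_n) ] (formula 9), then f_n = W(n,.) * psi_n
    for psi_hat in psi_hats:
        modes.append(np.real(np.fft.ifft(f_hat * np.conj(psi_hat) * psi_hat)))
    return modes  # their sum reconstructs the signal when the filters form a tight frame (formula 11)
```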
the implementation of the particle group optimized deep belief network (PDBN) model in step D comprises the following steps:
a. searching a node number parameter of an optimal hidden layer in a limited Boltzmann machine by utilizing a PSO optimization algorithm, then initializing DBN network parameters and particle population numbers, then calculating a particle fitness function value, and updating a particle optimal value and a population optimal value;
b. checking whether the system reaches the iteration times, if so, outputting the parameter value, and establishing an RBM network corresponding to the parameter value, otherwise, returning to the step 1 to perform iteration again;
c. screening subsequences with large correlation degree with the historical average wind power data sequence, inputting the subsequences into unsupervised RBMs with optimized PSOs for pre-training, and then adjusting according to a BPNN fine-tuning phase method to form PDBN network prediction models corresponding to the subsequences;
d. decomposing the test data of the historical average wind power according to an EWT algorithm, screening n groups of subsequences, respectively using the n groups of subsequences and the numerical weather forecast data, the historical maximum wind power data and the historical minimum wind power data which are subjected to normalization processing as input data together, and predicting in a particle swarm optimization deep belief network (PDBN) model to obtain n groups of predicted data.
In step a, the velocity and position of each particle are updated according to:

$$v_i(k+1)=\omega v_i(k)+c_1r_1\left[P_{best_i}(k)-x_i(k)\right]+c_2r_2\left[G_{best_i}(k)-x_i(k)\right]\qquad(13)$$

$$x_i(k+1)=x_i(k)+\phi\,v_i(k+1)\qquad(14)$$

where, in each iteration of the optimization, the positions of the particles are X = [x_1, x_2, …, x_i, …, x_n] and the velocities are V = [v_1, v_2, …, v_i, …, v_n]; by iteratively comparing fitness values with the two extreme values, each particle continuously updates its velocity and position so as to find its own individual optimal solution P_best = [P_{best_1}, P_{best_2}, …, P_{best_i}, …, P_{best_n}] and the best solution currently found by the whole population G_best = [G_{best_1}, G_{best_2}, …, G_{best_i}, …, G_{best_n}]; k is the iteration number; x_i(k) is the position of particle i at iteration k; v_i(k) is the velocity of particle i at iteration k; P_{best_i}(k) is the historical optimal position of particle i; G_{best_i}(k) is the historical optimal position of the population; c_1 and c_2 are cognition factors; r_1 and r_2 are uniformly distributed random numbers; ω is the inertia weight; φ is a contraction factor used to keep the velocity within a certain range.
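To make steps a-b and formulas (13)-(14) concrete, the sketch below runs a PSO search over candidate hidden-layer node counts; the `fitness` callable (for example the validation RMSE of a DBN built with the candidate node counts), the swarm size, the inertia weight and the acceleration constants are assumed example values, and the contraction factor φ is simply taken as 1.

```python
import numpy as np

def pso_search(fitness, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5,
               rng=np.random.default_rng(0)):
    """PSO search for hidden-layer node counts. `fitness` maps a real-valued
    position vector (one entry per hidden layer) to a scalar cost;
    `bounds` is a (low, high) pair of arrays bounding the node counts."""
    low, high = map(np.asarray, bounds)
    x = rng.uniform(low, high, size=(n_particles, low.size))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([fitness(p) for p in x])
    g = pbest_cost.argmin()
    gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # formula (13)
        x = np.clip(x + v, low, high)                                # formula (14), phi = 1
        cost = np.array([fitness(p) for p in x])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cost[better]
        g = pbest_cost.argmin()
        if pbest_cost[g] < gbest_cost:
            gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]
    return np.round(gbest).astype(int), gbest_cost
```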
In step b, the RBM network is established as follows:
b1. A DBN is a multi-hidden-layer deep learning network formed by stacking several restricted Boltzmann machines. An RBM consists of a visible layer and a hidden layer and is a special energy-based neural network model, i.e. the ideal state of the network is the state of minimum energy; the RBM learns the probability distribution from the visible layer to the hidden layer, and its joint energy function can be expressed as formula (15):

$$E(v,h\mid\theta)=-\sum_{i=1}^{n}a_iv_i-\sum_{j=1}^{m}b_jh_j-\sum_{i=1}^{n}\sum_{j=1}^{m}v_i\,\omega_{ij}\,h_j\qquad(15)$$

where v = (v_1, v_2, …, v_i, …, v_n)^T and h = (h_1, h_2, …, h_j, …, h_m)^T are the state vectors of the visible layer and the hidden layer; v_i and h_j are the states of the i-th visible-layer neuron and the j-th hidden-layer neuron; a = (a_1, a_2, …, a_i, …, a_n)^T and b = (b_1, b_2, …, b_j, …, b_m)^T are the bias vectors of the visible layer and the hidden layer; a_i and b_j are the biases of the i-th visible-layer neuron and the j-th hidden-layer neuron; θ = {ω_ij, a_i, b_j} are the training parameters of the RBM; ω ∈ R^{m×n} is the weight matrix connecting v and h, with elements ω_ij; n is the number of visible-layer neurons; m is the number of hidden-layer neurons;
b2. Given the energy function, the joint probability distribution can be expressed as

$$P_\theta(v,h)=\frac{1}{Z(\theta)}\exp\left[-E(v,h\mid\theta)\right]\qquad(16)$$

where Z(θ) is the normalization factor

$$Z(\theta)=\sum_{v}\sum_{h}\exp\left[-E(v,h\mid\theta)\right];$$
b3. To simplify the calculation of the RBM, all nodes are assumed to be binary, i.e. v_i ∈ {0, 1} and h_j ∈ {0, 1}, and the nodes within a layer are taken to be mutually independent; the neuron activation probabilities given h and given v are then given by formula (17) and formula (18), respectively:

$$P_\theta(v_i=1\mid h)=\sigma\left(a_i+\sum_{j}\omega_{ij}h_j\right)\qquad(17)$$

$$P_\theta(h_j=1\mid v)=\sigma\left(b_j+\sum_{i}\omega_{ij}v_i\right)\qquad(18)$$

where P_θ(v_i = 1 | h) is the probability that v_i equals 1 given h, P_θ(h_j = 1 | v) is the probability that h_j equals 1 given v, and σ(x) = 1/(1 + e^{−x}) denotes the sigmoid function;
b4. Given the training sample set S, training the RBM consists in determining the update of the parameters θ = {a, b, ω}, i.e. the goal of training the RBM is to maximize the log-likelihood function

$$L(\theta)=\sum_{v\in S}\log P(v).$$

The gradient of the log-likelihood log P(v) needed for gradient ascent is calculated as follows:

$$\frac{\partial\log P(v)}{\partial\omega}=E_P[hv]-E_{P'}[hv]\qquad(19)$$

$$\frac{\partial\log P(v)}{\partial a}=E_P[v]-E_{P'}[v]\qquad(20)$$

$$\frac{\partial\log P(v)}{\partial b}=E_P[h]-E_{P'}[h]\qquad(21)$$

where E_P and E_{P'} denote the expectations under the probability distributions of the original data and of the reconstructed data, respectively, and E_P[hv] = P(h | v) v^T;
b5. Because the expectation E_{P'}[·] requires the normalization factor Z(θ), which takes a long time to compute, the RBM adopts the k-step contrastive divergence (CD-k) learning algorithm to train the RBM parameters quickly and effectively; its main idea is to initialize the visible layer with the training data and then perform Gibbs sampling.
The objective of the CD learning algorithm is to obtain approximations of the partial derivatives Δω, Δa and Δb, with the update formulas

$$\omega_{k+1}=\omega_k+\eta\left[P(h\mid v^{(0)})v^{(0)T}-P(h\mid v^{(k)})v^{(k)T}\right]\qquad(22)$$

$$a_{k+1}=a_k+\eta\left[v^{(0)}-v^{(k)}\right]\qquad(23)$$

$$b_{k+1}=b_k+\eta\left[P(h\mid v^{(0)})-P(h\mid v^{(k)})\right]\qquad(24)$$

where ω_k, a_k and b_k are the weight matrix, the visible-layer bias and the hidden-layer bias at the k-th sampling step, v^{(0)} and v^{(k)} are the initial and the k-th Gibbs-sampled visible states, and η is the learning rate.
In step c, the DBN network is composed of multiple layers of stacked RBMs and 1 layer of BPNN, the DBN training process includes a layered pre-training process and a fine tuning process, the RBMs are responsible for the pre-training of the network, the BPNN is responsible for the fine tuning part of the network, and the DBN network establishment specifically includes the steps of:
c1, a layered pre-training process, wherein sample data is input into a visible layer of a first RBM network, after training, the output of a hidden layer is used as the input of a visible layer of a second RBM network, and pre-training is carried out layer by layer in this way until all four layers of RBM networks are trained;
c2. fine-tuning process, the purpose of which is to make the output value approach the input value; since the parameters are obtained by learning from the training samples, the network can avoid falling into a local optimum and reach the global optimum, giving a better prediction effect.
In step E, the n groups of prediction data are superposed, reconstructed and inverse-normalized to serve as the final wind power prediction result, and the prediction result is then analyzed according to error evaluation indexes. In order to evaluate the accuracy of each wind power prediction model intuitively and to analyze and compare the models in detail, the average relative error (MRE), the root mean square error (RMSE), the accuracy R² and the Pearson correlation coefficient e_PR are selected as the error analysis indexes of the wind power prediction models.
The average relative error is:

$$\mathrm{MRE}=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{y_i-\hat{y}_i}{y_i}\right|\times 100\%$$

The root mean square error is:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2}$$

The accuracy is:

$$R^2=1-\frac{\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)^2}$$

The Pearson correlation coefficient is:

$$e_{PR}=\frac{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)\left(\hat{y}_i-\bar{\hat{y}}\right)}{\sqrt{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)^2}\sqrt{\sum_{i=1}^{N}\left(\hat{y}_i-\bar{\hat{y}}\right)^2}}$$

where y_i and $\bar{y}$ are the true value of a test sample and the mean of the true values of the test samples; $\hat{y}_i$ and $\bar{\hat{y}}$ are the predicted value of a test sample and the mean of the predicted values of the test samples; N is the number of samples in the test set. Of the four error evaluation indexes, MRE and RMSE are error indexes, so smaller values indicate higher prediction precision, whereas R² and e_PR describe prediction accuracy and correlation, so larger values indicate higher prediction accuracy.
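A direct numpy sketch of the four indexes, assuming their standard textbook definitions:

```python
import numpy as np

def evaluation_indexes(y_true, y_pred):
    """Return MRE (%), RMSE, R^2 and the Pearson correlation coefficient e_PR."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mre = np.mean(np.abs((y_true - y_pred) / y_true)) * 100        # average relative error, %
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))                # root mean square error
    r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    num = np.sum((y_true - y_true.mean()) * (y_pred - y_pred.mean()))
    den = np.sqrt(np.sum((y_true - y_true.mean()) ** 2) * np.sum((y_pred - y_pred.mean()) ** 2))
    return mre, rmse, r2, num / den
```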
In the method, numerical weather forecast data and historical wind power data of a wind field are collected and all data are preprocessed and normalized; the normalized historical average wind power data are decomposed by empirical wavelet transform into several groups of subsequences, which are then screened; each screened subsequence, together with the other normalized data, is used as input data and predicted in the particle swarm optimized deep belief network (PDBN) model to obtain several groups of prediction data; the prediction data are superposed and reconstructed into one series, which is inverse-normalized, and the result is taken as the final wind power prediction result.
The invention adopts the empirical wavelet transform (EWT) signal processing method; the EWT can adaptively decompose intrinsic mode functions (IMFs) of different scales, which resolves the mode aliasing present in decomposition results of the prior art, avoids the difficulty of selecting wavelet transform basis functions, and handles non-stationary signals well.
The particle swarm optimization (PSO) algorithm is an intelligent optimization algorithm proposed by simulating the foraging flight behaviour of a flock of birds. In PSO, massless particles are designed to simulate the birds of the flock; every particle is assigned a random velocity and position, and the positions represent solutions of the problem to be optimized. Each particle searches the space for its own optimal solution (the best position it has passed), shares it with the other particles of the whole swarm, and the best of these individual optima is taken as the current global optimum of the swarm, according to which the velocities and positions of the particles are adjusted. As a random-search, parallel optimization algorithm, PSO has a simple principle, good robustness, fast convergence and good global search capability, and can find the global optimum of a problem with high probability. The signal decomposition technique, in turn, is used for signal denoising, decomposition and reconstruction and effectively extracts the important characteristics of the original signal; processing the wind power data in this way reduces the uncertainty of the original power and decomposes it into subsequences with different characteristics or frequencies, which benefits the prediction. The invention adopts the deep belief network (DBN), a network that combines several types of machine learning in a hierarchical structure, and uses multi-layer nonlinear information processing to extract and classify the features used for wind power prediction, which is clearly superior in prediction performance to the time-series, support vector machine (SVM) and traditional shallow neural network prediction methods of the prior art.
Drawings
FIG. 1 is a flow chart of the EWT-PDBN prediction model of the present invention.
Fig. 2 is a schematic diagram of the fourier axis division of the present invention.
FIG. 3 is a diagram illustrating the PSO algorithm optimizing DBN network parameters according to the present invention.
Fig. 4 is a schematic view of the RBM structure of the present invention.
Fig. 5 is a schematic diagram of the DBN structure of the present invention.
FIG. 6 is a schematic diagram of an EWT decomposition of the EWT-PDBN prediction model of the present invention.
FIG. 7 is a graph of the EWT-PDBN model wind power prediction results versus actual power results of the present invention.
FIG. 8 is a schematic diagram of the wind power prediction results for the six models.
Detailed Description
As shown in FIG. 1, the short-term wind power prediction method based on the EWT-PDBN combination comprises the following steps:
A. collecting numerical weather forecast data and historical wind power data of a wind field, wherein the numerical weather forecast data comprises five groups of data: wind speed, temperature, air pressure, mean-sea-level atmospheric pressure and relative humidity, and the historical wind power data comprises three groups of data: historical maximum wind power data, historical minimum wind power data and historical average wind power data;
B. preprocessing all collected data, and then normalizing all the preprocessed data;
C. decomposing the normalized historical average wind power data by using an Empirical Wavelet Transform (EWT) signal decomposition technology, and carrying out stabilization treatment to obtain a plurality of groups of subsequences with different characteristic frequencies;
D. performing correlation screening on the different decomposed subsequences to screen n groups of subsequences, then respectively using the screened n groups of subsequences and numerical weather forecast data, historical maximum wind power data and historical minimum wind power data which are subjected to normalization processing as input data together, and predicting in a particle swarm optimization deep belief network (PDBN) model to obtain n groups of predicted data;
E. and superposing the n groups of prediction data to reconstruct a group of data, then carrying out inverse normalization processing on the group of data, taking the obtained result as a final wind power prediction result, and then carrying out prediction result analysis according to the error evaluation index.
In the step A, preprocessing the collected numerical weather forecast data and the collected historical wind power data, namely performing abnormal value processing, wherein the specific processing method comprises the following steps:
(I) setting the time resolution of the numerical weather forecast data and the historical wind power data to 15 min;
(II) supplementing missing data with the data at the previous moment;
(III) replacing historical power data smaller than 0 with 0;
(IV) replacing abnormal data by interpolation with the average value of the adjacent moments;
(V) replacing historical power data values that exceed the threshold with the rated power value of the wind turbine.
In step B, all data values are normalized and scaled to the interval [0, 1]; the normalization formula is:

$$x_g=\frac{x-x_{min}}{x_{max}-x_{min}}\qquad(1)$$

where x_g is the normalized datum, and x, x_min and x_max are the data value, the minimum of the data and the maximum of the data, respectively.

In step E, the series obtained by superposition and reconstruction is inverse-normalized so that it regains its physical meaning; the inverse normalization formula is:

$$x_f=x'\left(x_{max}-x_{min}\right)+x_{min}\qquad(2)$$

where x' is the output value of the EWT-PDBN prediction model and x_f is the wind power prediction value obtained by inverse normalization.
In step C, decomposing the normalized historical average wind power data with non-stationarity by using an EWT algorithm to obtain n subsequences of different modal components, where each subsequence is defined as a group of amplitude and frequency modulated signals, and as shown in fig. 2, the implementation of the EWT algorithm includes the following steps:
(C1) Partition the Fourier support [0, π] adaptively into N contiguous segments, with ω_n denoting the boundary between adjacent segments; each segment is Λ_n = [ω_{n-1}, ω_n], and around each ω_n a transition region of width 2τ_n is defined.

(C2) On each segment Λ_n, following the Littlewood-Paley and Meyer wavelet construction, the empirical wavelet function and the empirical scale function are defined in the frequency domain by formula (3) and formula (4):

$$\hat{\psi}_n(\omega)=\begin{cases}1, & \omega_n+\tau_n\le|\omega|\le\omega_{n+1}-\tau_{n+1}\\ \cos\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-\omega_{n+1}+\tau_{n+1}}{2\tau_{n+1}}\right)\right], & \omega_{n+1}-\tau_{n+1}\le|\omega|\le\omega_{n+1}+\tau_{n+1}\\ \sin\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-\omega_n+\tau_n}{2\tau_n}\right)\right], & \omega_n-\tau_n\le|\omega|\le\omega_n+\tau_n\\ 0, & \text{otherwise}\end{cases}\qquad(3)$$

$$\hat{\phi}_n(\omega)=\begin{cases}1, & |\omega|\le\omega_n-\tau_n\\ \cos\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-\omega_n+\tau_n}{2\tau_n}\right)\right], & \omega_n-\tau_n\le|\omega|\le\omega_n+\tau_n\\ 0, & \text{otherwise}\end{cases}\qquad(4)$$

where τ_n = γω_n and the parameter γ satisfies

$$\gamma<\min_n\frac{\omega_{n+1}-\omega_n}{\omega_{n+1}+\omega_n}.$$

The function β(x) satisfies formula (5) and formula (6):

$$\beta(x)=\begin{cases}0, & x\le 0\\ 1, & x\ge 1\end{cases}\qquad(5)$$

$$\beta(x)+\beta(1-x)=1,\quad\forall x\in[0,1]\qquad(6)$$

There are many functions satisfying these properties; the most commonly used is β(x) = x⁴(35 − 84x + 70x² − 20x³). For any n > 0, with τ_n = γω_n the empirical wavelet function and the scale function reduce to formula (7) and formula (8):

$$\hat{\psi}_n(\omega)=\begin{cases}1, & (1+\gamma)\omega_n\le|\omega|\le(1-\gamma)\omega_{n+1}\\ \cos\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-(1-\gamma)\omega_{n+1}}{2\gamma\omega_{n+1}}\right)\right], & (1-\gamma)\omega_{n+1}\le|\omega|\le(1+\gamma)\omega_{n+1}\\ \sin\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-(1-\gamma)\omega_n}{2\gamma\omega_n}\right)\right], & (1-\gamma)\omega_n\le|\omega|\le(1+\gamma)\omega_n\\ 0, & \text{otherwise}\end{cases}\qquad(7)$$

$$\hat{\phi}_n(\omega)=\begin{cases}1, & |\omega|\le(1-\gamma)\omega_n\\ \cos\left[\frac{\pi}{2}\beta\left(\frac{|\omega|-(1-\gamma)\omega_n}{2\gamma\omega_n}\right)\right], & (1-\gamma)\omega_n\le|\omega|\le(1+\gamma)\omega_n\\ 0, & \text{otherwise}\end{cases}\qquad(8)$$

(C3) Determine the detail coefficients and the approximation coefficients of the empirical wavelet transform from the empirical wavelet function and the scale function. Denoting the Fourier transform and its inverse by F[·] and F⁻¹[·], the detail coefficients W_f^ε(n, t) are obtained as the inner product of the signal with the empirical wavelet function:

$$W_f^{\varepsilon}(n,t)=\langle f,\psi_n\rangle=\int f(\tau)\,\overline{\psi_n(\tau-t)}\,d\tau=F^{-1}\left[\hat{f}(\omega)\,\overline{\hat{\psi}_n(\omega)}\right]\qquad(9)$$

where ⟨·,·⟩ denotes the inner product, ψ_n(t) is the empirical wavelet function, $\hat{\psi}_n(\omega)$ its Fourier transform and $\overline{\psi_n(t)}$ its complex conjugate. The approximation coefficients W_f^ε(0, t) are obtained as the inner product of the signal with the scale function:

$$W_f^{\varepsilon}(0,t)=\langle f,\phi_1\rangle=\int f(\tau)\,\overline{\phi_1(\tau-t)}\,d\tau=F^{-1}\left[\hat{f}(\omega)\,\overline{\hat{\phi}_1(\omega)}\right]\qquad(10)$$

where φ_1(t) is the empirical scale function, $\hat{\phi}_1(\omega)$ its Fourier transform and $\overline{\phi_1(t)}$ its complex conjugate.

(C4) Reconstruct the original signal from the detail coefficients and the approximation coefficients of the empirical wavelet transform:

$$f(t)=W_f^{\varepsilon}(0,t)*\phi_1(t)+\sum_{n=1}^{N}W_f^{\varepsilon}(n,t)*\psi_n(t)\qquad(11)$$

where "∗" denotes convolution. The original signal is accordingly decomposed into empirical mode components:

$$f_0(t)=W_f^{\varepsilon}(0,t)*\phi_1(t),\qquad f_k(t)=W_f^{\varepsilon}(k,t)*\psi_k(t),\quad k=1,\dots,N\qquad(12)$$
as shown in fig. 4, the implementation of the particle group optimized deep belief network (PDBN) model in step D includes the following steps:
a. searching for the optimal number of hidden-layer nodes of the restricted Boltzmann machine (RBM) with the PSO optimization algorithm: initializing the DBN network parameters and the particle population size, then calculating the particle fitness function values and updating the individual optimal value of each particle and the optimal value of the population;
b. checking whether the iteration limit has been reached; if so, outputting the parameter value and establishing the RBM network corresponding to that parameter value, otherwise returning to step a for another iteration;
c. screening the subsequences with large correlation with the historical average wind power data sequence, inputting them into the PSO-optimized unsupervised RBMs for pre-training, and then adjusting them in the BPNN fine-tuning phase to form the PSO-DBN network prediction model corresponding to each subsequence;
d. decomposing the test data of the historical average wind power according to an EWT algorithm, screening out n groups of subsequences, respectively using the n groups of subsequences and the numerical weather forecast data, the historical maximum wind power data and the historical minimum wind power data which are subjected to normalization processing as input data, and predicting in a particle swarm optimization deep belief network (PDBN) model to obtain n groups of predicted data.
As shown in fig. 3, in step a the velocity and position of each particle are updated according to:

$$v_i(k+1)=\omega v_i(k)+c_1r_1\left[P_{best_i}(k)-x_i(k)\right]+c_2r_2\left[G_{best_i}(k)-x_i(k)\right]\qquad(13)$$

$$x_i(k+1)=x_i(k)+\phi\,v_i(k+1)\qquad(14)$$

where, in each iteration of the optimization, the positions of the particles are X = [x_1, x_2, …, x_i, …, x_n] and the velocities (i.e. the rates of change of position) are V = [v_1, v_2, …, v_i, …, v_n]; by iteratively comparing fitness values with the two extreme values, each particle continuously updates its velocity and position so as to find its own individual optimal solution (individual extremum) P_best = [P_{best_1}, P_{best_2}, …, P_{best_i}, …, P_{best_n}] and the best solution currently found by the whole population (global extremum) G_best = [G_{best_1}, G_{best_2}, …, G_{best_i}, …, G_{best_n}]; k is the iteration number; x_i(k) is the position of particle i at iteration k; v_i(k) is the velocity of particle i at iteration k; P_{best_i}(k) is the historical optimal position of particle i; G_{best_i}(k) is the historical optimal position of the population; c_1 and c_2 are cognition factors; r_1 and r_2 are uniformly distributed random numbers; ω is the inertia weight; φ is a contraction factor used to keep the velocity within a certain range.
In step b, the RBM network is established as follows:
b1. A DBN is a multi-hidden-layer deep learning network formed by stacking several restricted Boltzmann machines (RBMs). An RBM consists of a visible layer and a hidden layer and is a special energy-based neural network model, i.e. the ideal state of the network is the state of minimum energy; the RBM learns the probability distribution from the visible layer to the hidden layer, and its joint energy function can be expressed as formula (15):

$$E(v,h\mid\theta)=-\sum_{i=1}^{n}a_iv_i-\sum_{j=1}^{m}b_jh_j-\sum_{i=1}^{n}\sum_{j=1}^{m}v_i\,\omega_{ij}\,h_j\qquad(15)$$

where v = (v_1, v_2, …, v_i, …, v_n)^T and h = (h_1, h_2, …, h_j, …, h_m)^T are the state vectors of the visible layer and the hidden layer; v_i and h_j are the states of the i-th visible-layer neuron and the j-th hidden-layer neuron; a = (a_1, a_2, …, a_i, …, a_n)^T and b = (b_1, b_2, …, b_j, …, b_m)^T are the bias vectors of the visible layer and the hidden layer; a_i and b_j are the biases of the i-th visible-layer neuron and the j-th hidden-layer neuron; θ = {ω_ij, a_i, b_j} are the training parameters of the RBM; ω ∈ R^{m×n} is the weight matrix connecting v and h, with elements ω_ij; n is the number of visible-layer neurons; m is the number of hidden-layer neurons.
b2. Given the energy function, the joint probability distribution can be expressed as

$$P_\theta(v,h)=\frac{1}{Z(\theta)}\exp\left[-E(v,h\mid\theta)\right]\qquad(16)$$

where Z(θ) is the normalization factor

$$Z(\theta)=\sum_{v}\sum_{h}\exp\left[-E(v,h\mid\theta)\right].$$
b3. To simplify the calculation of the RBM, all nodes are assumed to be binary, i.e. v_i ∈ {0, 1} and h_j ∈ {0, 1}, and the nodes within a layer are taken to be mutually independent; the neuron activation probabilities given h and given v are then given by formula (17) and formula (18), respectively:

$$P_\theta(v_i=1\mid h)=\sigma\left(a_i+\sum_{j}\omega_{ij}h_j\right)\qquad(17)$$

$$P_\theta(h_j=1\mid v)=\sigma\left(b_j+\sum_{i}\omega_{ij}v_i\right)\qquad(18)$$

where P_θ(v_i = 1 | h) is the probability that v_i equals 1 given h, P_θ(h_j = 1 | v) is the probability that h_j equals 1 given v, and σ(x) = 1/(1 + e^{−x}) denotes the sigmoid function;
b4. Given the training sample set S, training the RBM consists in determining the update of the parameters θ = {a, b, ω}, i.e. the goal of training the RBM is to maximize the log-likelihood function

$$L(\theta)=\sum_{v\in S}\log P(v).$$

The gradient of the log-likelihood log P(v) needed for gradient ascent is calculated as follows:

$$\frac{\partial\log P(v)}{\partial\omega}=E_P[hv]-E_{P'}[hv]\qquad(19)$$

$$\frac{\partial\log P(v)}{\partial a}=E_P[v]-E_{P'}[v]\qquad(20)$$

$$\frac{\partial\log P(v)}{\partial b}=E_P[h]-E_{P'}[h]\qquad(21)$$

where E_P and E_{P'} denote the expectations under the probability distributions of the original data and of the reconstructed data, respectively, and E_P[hv] = P(h | v) v^T;
b5. Because the expectation E_{P'}[·] requires the normalization factor Z(θ), which takes a long time to compute, the RBM adopts the k-step contrastive divergence (CD-k) learning algorithm to train the RBM parameters quickly and effectively; its main idea is to initialize the visible layer with the training data and then perform Gibbs sampling.
The objective of the CD learning algorithm is to obtain approximations of the partial derivatives Δω, Δa and Δb, with the update formulas

$$\omega_{k+1}=\omega_k+\eta\left[P(h\mid v^{(0)})v^{(0)T}-P(h\mid v^{(k)})v^{(k)T}\right]\qquad(22)$$

$$a_{k+1}=a_k+\eta\left[v^{(0)}-v^{(k)}\right]\qquad(23)$$

$$b_{k+1}=b_k+\eta\left[P(h\mid v^{(0)})-P(h\mid v^{(k)})\right]\qquad(24)$$

where ω_k, a_k and b_k are the weight matrix, the visible-layer bias and the hidden-layer bias at the k-th sampling step, v^{(0)} and v^{(k)} are the initial and the k-th Gibbs-sampled visible states, and η is the learning rate.
As shown in fig. 5, in step c, the DBN network is formed by multiple layers of stacked RBMs and 1 layer of BPNN, the DBN training process includes a layered pre-training process and a fine tuning process, the RBMs are responsible for the pre-training of the network, the BPNN is responsible for the fine tuning of the network, and the DBN network establishment specifically includes the following steps:
c1. layered pre-training process (the network is trained layer by layer from low to high in an unsupervised mode): sample data are input into the visible layer of the first RBM; after training, the output of its hidden layer is used as the input of the visible layer of the second RBM, and pre-training proceeds layer by layer in this way until all four RBM layers have been trained;
c2. fine-tuning process (the network uses a BP neural network to fine-tune, from high to low in a supervised mode, the parameters obtained by pre-training); the purpose of fine-tuning is to make the output value approach the input value, and because the parameters are obtained by learning from the training samples, the network can avoid falling into a local optimum and reach the global optimum, giving a better prediction effect.
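A schematic sketch of the greedy layer-wise pre-training of step c1, reusing the `sigmoid` and `rbm_cd1_update` helpers sketched earlier; the epoch count and learning rate are assumed example values, and only the unsupervised pre-training is shown (the supervised BP fine-tuning of step c2 is omitted).

```python
import numpy as np

def pretrain_dbn(x, layer_sizes, epochs=10, eta=0.01, rng=np.random.default_rng(0)):
    """Greedy layer-wise pre-training of stacked RBMs.
    x: training matrix (samples, features); layer_sizes: hidden-layer widths,
    e.g. [15, 30, 20, 10] as in the embodiment described below."""
    weights, vis = [], x
    for n_hidden in layer_sizes:
        W = 0.01 * rng.standard_normal((n_hidden, vis.shape[1]))
        a = np.zeros(vis.shape[1])
        b = np.zeros(n_hidden)
        for _ in range(epochs):
            W, a, b = rbm_cd1_update(vis, W, a, b, eta)   # CD-1 step sketched earlier
        weights.append((W, b))
        vis = sigmoid(vis @ W.T + b)   # hidden output becomes the next layer's visible input
    return weights, vis
```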
In step E, the n groups of prediction data are superposed, reconstructed and inverse-normalized to serve as the final wind power prediction result, and the prediction result is then analyzed according to error evaluation indexes. In order to evaluate the accuracy of each wind power prediction model intuitively and to analyze and compare the models in detail, the average relative error (MRE), the root mean square error (RMSE), the accuracy R² and the Pearson correlation coefficient e_PR are selected as the error analysis indexes of the wind power prediction models.
The average relative error is:

$$\mathrm{MRE}=\frac{1}{N}\sum_{i=1}^{N}\left|\frac{y_i-\hat{y}_i}{y_i}\right|\times 100\%$$

The root mean square error is:

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2}$$

The accuracy is:

$$R^2=1-\frac{\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)^2}$$

The Pearson correlation coefficient is:

$$e_{PR}=\frac{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)\left(\hat{y}_i-\bar{\hat{y}}\right)}{\sqrt{\sum_{i=1}^{N}\left(y_i-\bar{y}\right)^2}\sqrt{\sum_{i=1}^{N}\left(\hat{y}_i-\bar{\hat{y}}\right)^2}}$$

where y_i and $\bar{y}$ are the true value of a test sample and the mean of the true values of the test samples; $\hat{y}_i$ and $\bar{\hat{y}}$ are the predicted value of a test sample and the mean of the predicted values of the test samples; N is the number of samples in the test set. Of the four error evaluation indexes, MRE and RMSE are error indexes, so smaller values indicate higher prediction precision, whereas R² and e_PR describe prediction accuracy and correlation, so larger values indicate higher prediction accuracy.
In order to verify the effectiveness of the short-term wind power prediction method based on the EWT-PDBN combination, numerical weather forecast data (wind speed, temperature, air pressure, mean-sea-level atmospheric pressure and relative humidity) and historical wind power data (historical maximum, historical minimum and historical average wind power) of a wind farm generator set in the Tianjin coastal area from 10 April 2019 to 10 May 2019 (31 days in total) were selected as the input data of the EWT-PDBN model, the predicted wind power value was taken as the output, and modeling analysis was carried out; the data sampling interval is 15 min. The 2784 data points of the first 29 days were selected as the training set for training the EWT-PDBN model, the 192 data points of the last two days were used as the test set, and the wind power of the next 48 h was predicted.
In the EWT-PDBN method, historical average wind power data is decomposed using an EWT signal processing technique to obtain a series of subsequences with different characteristic frequency scales, 8 subsequences with a large correlation with the original sequence are selected using a sample entropy algorithm, and the decomposition result is shown in fig. 6.
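A compact sketch of the sample entropy calculation that underlies this screening step; the embedding dimension m and the tolerance r = 0.2·std are conventional default choices, not values taken from the patent.

```python
import numpy as np

def sample_entropy(u, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D series (illustration of the
    subsequence-screening idea only)."""
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * u.std()
    n = len(u)

    def count_matches(length):
        # one template per row, all of the given length
        templ = np.array([u[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templ)):
            # Chebyshev distance to all later templates (self-matches excluded)
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```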
Each subsequence, together with the normalized numerical weather forecast data, historical maximum wind power data and historical minimum wind power data, is then used as input data and predicted with the PSO-DBN prediction model. The RBM network of the DBN model is determined to have 4 layers, the input layer has 8 nodes, the numbers of nodes of the 4 hidden layers are 15, 30, 20 and 10 respectively, and the output layer has 1 node. The final prediction result is obtained by superposing the prediction results of the components; the prediction results are shown in fig. 7. Apart from small errors at individual points with violent mutation, the method achieves effective prediction of the wind power.
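Step E then reduces to superposing the per-subsequence predictions and undoing the min-max scaling, as in the small sketch below (the argument names are hypothetical; p_min and p_max are the scaling constants of the power data).

```python
import numpy as np

def superpose_and_denormalize(sub_predictions, p_min, p_max):
    """Sum the n per-subsequence prediction series and invert the min-max scaling."""
    total = np.sum(np.asarray(sub_predictions, dtype=float), axis=0)
    return total * (p_max - p_min) + p_min
```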
The method was compared and analyzed against the prediction results of a direct deep belief network (DBN) method, a particle swarm optimized deep belief network (PDBN) method, a wavelet transform combined deep belief network (WT-DBN) method, an empirical wavelet transform combined deep belief network (EWT-DBN) method and a wavelet transform combined particle swarm optimized deep belief network (WT-PDBN) method. The results of the six methods are shown in fig. 8. It can be observed that:
the EWT-PDBN model (EWT-PSO-DBN model) has a very accurate prediction trend in predicting wind power. In addition, of the six involved models, the EWT-PDBN model performs best in wind power prediction; when the wind power changes suddenly, the EWT-PDBN model can have better prediction performance than other models.
Table 1 shows the prediction error evaluation index of the EWT-PDBN prediction method. Table 2 shows the prediction error evaluation indexes of the six prediction methods, and it can be seen from tables 1 and 2 that:
(1) compared with a direct DBN model, the average relative error of the PDBN model after particle swarm optimization is smaller, MRE is reduced by about 14.31%, the accuracy is improved by about 1.13 percentage points, and the Pearson correlation coefficient is improved by about 0.81%. This shows that the DBN network optimized by the particle swarm optimization can improve the prediction accuracy.
(2) Compared with the DBN model, the average relative error of the wavelet DBN model and the empirical wavelet DBN model is in a decreasing trend, and the accuracy is in an increasing trend. Therefore, the noise of the original wind power sequence can be reduced through a signal decomposition technology, the relative error is reduced, and the prediction effect of the combination of the signal decomposition technology and the DBN network is better than that of a single DBN network.
(3) Compared with a WT-DBN model combining wavelet transformation with DBN, the EWT-DBN model combining empirical wavelet decomposition with DBN has higher prediction accuracy, MRE is reduced by about 11.37%, accuracy is increased by about 0.93 percentage point, and Pearson correlation coefficient is also increased by about 0.58%.
(4) The prediction accuracy of the EWT-PDBN model (EWT combined with the PSO-optimized DBN) and of the WT-PDBN model (WT combined with the PSO-optimized DBN) is higher than that of the corresponding EWT-DBN and WT-DBN models without particle swarm optimization, and their average errors are smaller.
(5) Compared with other five model methods, the EWT-PDBN method provided by the invention is a model with the minimum prediction error change, not only has the highest prediction precision, but also has the largest correlation between the prediction sequence and the actual sequence, the best result and the best effect.
Example analysis and verification show that the EWT-PDBN model adopted by the method is better in prediction effect compared with the traditional DBN model, the WT-DBN model, the PDBN model, the EWT-DBN model and the WT-PDBN model, the prediction accuracy is obviously higher than that of the other five methods, and the method is more suitable for wind power prediction of a distributed wind power plant including Tianjin coastal areas.
TABLE 1. Prediction error evaluation indexes of the EWT-PDBN prediction method (table reproduced as an image in the original publication).

TABLE 2. Prediction error evaluation indexes of the six prediction methods (table reproduced as an image in the original publication).

Claims (5)

1. A short-term wind power prediction method based on EWT-PDBN combination is characterized by comprising the following steps:
A. collecting numerical weather forecast data and historical wind power data of a wind field, wherein the numerical weather forecast data comprises wind speed, temperature, air pressure, mean-sea-level atmospheric pressure and relative humidity, and the historical wind power data comprises historical maximum wind power data, historical minimum wind power data and historical average wind power data;
B. preprocessing all collected data, and then normalizing all the preprocessed data;
C. decomposing the normalized historical average wind power data by using an empirical wavelet transform signal decomposition technology, and carrying out stabilization treatment to obtain a plurality of groups of subsequences with different characteristic frequencies;
D. performing correlation screening on the different decomposed subsequences to screen n groups of subsequences, then respectively using the screened n groups of subsequences and numerical weather forecast data, historical maximum wind power data and historical minimum wind power data which are subjected to normalization processing as input data together, and predicting in a particle swarm optimization deep belief network model to obtain n groups of prediction data;
E. superposing n groups of prediction data to reconstruct a group of data, then carrying out inverse normalization processing on the group of data to obtain a result serving as a final wind power prediction result, and then carrying out prediction result analysis according to an error evaluation index;
the implementation of the particle swarm optimized deep belief network model in step D comprises the following steps:
a. searching for the optimal number of hidden-layer nodes of the restricted Boltzmann machine with the PSO optimization algorithm, then initializing the DBN network parameters and the particle population size, then calculating the particle fitness function values and updating the individual optimal value of each particle and the optimal value of the population;
b. checking whether the system reaches the iteration times, if so, outputting the node number parameter, and establishing an RBM network corresponding to the node number parameter, otherwise, returning to the step a for iteration again;
c. screening the subsequences with large correlation with the historical average wind power data sequence, inputting them into the PSO-optimized unsupervised RBMs for pre-training, and then adjusting them in the BPNN fine-tuning phase to form the PDBN network prediction model corresponding to each subsequence;
d. decomposing the test data of historical average wind power according to an EWT algorithm, screening out n groups of subsequences, respectively using the n groups of subsequences and numerical weather forecast data, historical maximum wind power data and historical minimum wind power data which are subjected to normalization processing as input data together, and predicting in a particle swarm optimization deep belief network model to obtain n groups of prediction data;
in step a, the velocity and position of each particle are updated according to:

$$v_i(k+1)=\omega v_i(k)+c_1r_1\left[P_{best_i}(k)-x_i(k)\right]+c_2r_2\left[G_{best_i}(k)-x_i(k)\right]\qquad(13)$$

$$x_i(k+1)=x_i(k)+\phi\,v_i(k+1)\qquad(14)$$

where, in each iteration of the optimization, the positions of the particles are X = [x_1, x_2, …, x_i, …, x_n] and the velocities are V = [v_1, v_2, …, v_i, …, v_n]; by iteratively comparing fitness values with the two extreme values, each particle continuously updates its velocity and position so as to find its own individual optimal solution P_best = [P_{best_1}, P_{best_2}, …, P_{best_i}, …, P_{best_n}] and the best solution currently found by the whole population G_best = [G_{best_1}, G_{best_2}, …, G_{best_i}, …, G_{best_n}]; k is the iteration number; x_i(k) is the position of particle i at iteration k; v_i(k) is the velocity of particle i at iteration k; P_{best_i}(k) is the historical optimal position of particle i; G_{best_i}(k) is the historical optimal position of the population; c_1 and c_2 are cognition factors; r_1 and r_2 are uniformly distributed random numbers; ω is the inertia weight; φ is a contraction factor used to keep the velocity within a certain range;
in step b, the RBM network is established as follows:
b1. a DBN is a multi-hidden-layer deep learning network formed by stacking several restricted Boltzmann machines; an RBM consists of a visible layer and a hidden layer and is a special energy-based neural network model, i.e. the ideal state of the network is the state of minimum energy; the RBM learns the probability distribution from the visible layer to the hidden layer, and its joint energy function is expressed as formula (15):

$$E(v,h\mid\theta)=-\sum_{i=1}^{n}a_iv_i-\sum_{j=1}^{m}b_jh_j-\sum_{i=1}^{n}\sum_{j=1}^{m}v_i\,\omega_{ij}\,h_j\qquad(15)$$

where v = (v_1, v_2, …, v_i, …, v_n)^T and h = (h_1, h_2, …, h_j, …, h_m)^T are the state vectors of the visible layer and the hidden layer; v_i and h_j are the states of the i-th visible-layer neuron and the j-th hidden-layer neuron; a = (a_1, a_2, …, a_i, …, a_n)^T and b = (b_1, b_2, …, b_j, …, b_m)^T are the bias vectors of the visible layer and the hidden layer; a_i and b_j are the biases of the i-th visible-layer neuron and the j-th hidden-layer neuron; θ = {ω_ij, a_i, b_j} are the training parameters of the RBM; ω ∈ R^{m×n} is the weight matrix connecting v and h, with elements ω_ij; n is the number of visible-layer neurons; m is the number of hidden-layer neurons;
b2. given the energy function, the joint probability distribution is represented as

$$P_\theta(v,h)=\frac{1}{Z(\theta)}\exp\left[-E(v,h\mid\theta)\right]\qquad(16)$$

where Z(θ) is the normalization factor

$$Z(\theta)=\sum_{v}\sum_{h}\exp\left[-E(v,h\mid\theta)\right];$$
b3. to simplify the calculation of the RBM, all nodes are assumed to be binary, i.e. v_i ∈ {0, 1} and h_j ∈ {0, 1}, and the nodes within a layer are taken to be mutually independent; the neuron activation probabilities given h and given v are then formula (17) and formula (18), respectively:

$$P_\theta(v_i=1\mid h)=\sigma\left(a_i+\sum_{j}\omega_{ij}h_j\right)\qquad(17)$$

$$P_\theta(h_j=1\mid v)=\sigma\left(b_j+\sum_{i}\omega_{ij}v_i\right)\qquad(18)$$

where P_θ(v_i = 1 | h) is the probability that v_i equals 1 given h, P_θ(h_j = 1 | v) is the probability that h_j equals 1 given v, and σ(x) = 1/(1 + e^{−x}) denotes the sigmoid function;
b4. given the training sample set S, training the RBM consists in determining the update of the parameters θ = {a, b, ω}, i.e. the goal of training the RBM is to maximize the log-likelihood function

$$L(\theta)=\sum_{v\in S}\log P(v);$$

the gradient of the log-likelihood log P(v) needed for gradient ascent is calculated as follows:

$$\frac{\partial\log P(v)}{\partial\omega}=E_P[hv]-E_{P'}[hv]\qquad(19)$$

$$\frac{\partial\log P(v)}{\partial a}=E_P[v]-E_{P'}[v]\qquad(20)$$

$$\frac{\partial\log P(v)}{\partial b}=E_P[h]-E_{P'}[h]\qquad(21)$$

where E_P and E_{P'} denote the expectations under the probability distributions of the original data and of the reconstructed data, respectively, and E_P[hv] = P(h | v) v^T;
b5, the RBM parameters are trained with the k-step contrastive divergence (CD-k) learning algorithm, whose main idea is to initialize the visible layer with the training data and then perform Gibbs sampling;

the objective of the CD learning algorithm is to obtain approximate values of the partial derivatives Δω, Δa and Δb, and the parameters are updated by formulas (22)-(24):

ω_{k+1} = ω_k + η·[P(h | v^(0))·(v^(0))^T − P(h | v^(k))·(v^(k))^T]   (22)

a_{k+1} = a_k + η·[v^(0) − v^(k)]   (23)

b_{k+1} = b_k + η·[P(h | v^(0)) − P(h | v^(k))]   (24)

wherein ω_k, a_k and b_k are the weight matrix, the visible layer bias and the hidden layer bias at the kth sampling step, v^(0) is the training data, v^(k) is the visible layer state after k steps of Gibbs sampling, and η is the learning rate;
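The update rules (22)-(24) with k = 1 (CD-1) can be sketched as follows; the learning rate and the use of a single training vector per call are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, eta=0.05, rng=None):
    """One CD-1 step for a binary RBM.

    v0 : (n,) training vector, W : (m, n) weights,
    a : (n,) visible biases, b : (m,) hidden biases.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    ph0 = sigmoid(b + W @ v0)                           # P(h = 1 | v0), formula (17)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)    # Gibbs sample of h
    pv1 = sigmoid(a + W.T @ h0)                         # P(v = 1 | h0), formula (18)
    ph1 = sigmoid(b + W @ pv1)                          # hidden probabilities after one step
    W += eta * (np.outer(ph0, v0) - np.outer(ph1, pv1)) # formula (22)
    a += eta * (v0 - pv1)                               # formula (23)
    b += eta * (ph0 - ph1)                              # formula (24)
    return W, a, b
```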
in step c, the DBN network is composed of multiple stacked RBM layers and one BPNN layer; the DBN training process includes a layer-wise pre-training stage and a fine-tuning stage, where the RBMs are responsible for pre-training the network and the BPNN is responsible for fine-tuning it; the DBN network is established by the following steps:

c1, layer-wise pre-training: the sample data are input into the visible layer of the first RBM; after training, the output of its hidden layer is used as the input of the visible layer of the second RBM, and pre-training proceeds layer by layer in this way until all four RBM layers have been trained;

c2, fine tuning: the purpose of fine tuning is to make the output value approach the input value; since the parameters have already been obtained by learning from the training samples, the network is prevented from falling into a local optimum and can reach the global optimum, giving a better prediction effect.
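A compact NumPy sketch of c1 and c2: each layer is pre-trained as an RBM with CD-1, then the stacked network plus a linear output layer is fine-tuned by backpropagation; the two-layer depth (the claim uses four RBM layers), the layer sizes, learning rates and toy data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def pretrain_rbm(X, n_hidden, epochs=10, eta=0.05):
    """CD-1 pre-training of one RBM layer on data X of shape (n_samples, n_visible)."""
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    a, b = np.zeros(X.shape[1]), np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in X:
            ph0 = sigmoid(b + W @ v0)
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            pv1 = sigmoid(a + W.T @ h0)
            ph1 = sigmoid(b + W @ pv1)
            W += eta * (np.outer(ph0, v0) - np.outer(ph1, pv1))
            a += eta * (v0 - pv1)
            b += eta * (ph0 - ph1)
    return W, b   # the weights and hidden biases become one feed-forward layer

# toy data: 6 input features (e.g. one EWT subsequence plus NWP features), 1 target
X = rng.random((200, 6))
y = X.mean(axis=1, keepdims=True)            # illustrative target in [0, 1]

# c1: layer-wise pre-training (hidden layer sizes are illustrative)
layers, inp = [], X
for n_hidden in (8, 4):
    W, b = pretrain_rbm(inp, n_hidden)
    layers.append((W, b))
    inp = sigmoid(b + inp @ W.T)             # hidden output feeds the next RBM

# c2: fine-tuning of the whole stack plus a linear output layer by backpropagation
W_out = rng.normal(scale=0.1, size=(1, 4))
b_out = np.zeros(1)
lr = 1e-3
for _ in range(200):
    acts = [X]
    for W, b in layers:                      # forward pass through the pre-trained layers
        acts.append(sigmoid(b + acts[-1] @ W.T))
    pred = b_out + acts[-1] @ W_out.T
    err = pred - y                           # gradient of 0.5 * MSE w.r.t. pred
    grad = err @ W_out                       # back-propagated error at the top hidden layer
    W_out -= lr * (err.T @ acts[-1])
    b_out -= lr * err.sum(axis=0)
    for (W, b), h, h_prev in zip(layers[::-1], acts[:0:-1], acts[-2::-1]):
        delta = grad * h * (1.0 - h)         # sigmoid derivative
        grad = delta @ W                     # error for the layer below
        W -= lr * (delta.T @ h_prev)
        b -= lr * delta.sum(axis=0)
```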
2. The EWT-PDBN combination-based short-term wind power prediction method as claimed in claim 1, wherein in step A the collected numerical weather forecast data and historical wind power data are preprocessed, i.e. abnormal values are handled, by the following specific methods (a code sketch applying these rules follows the list):

(I) setting the time resolution of the numerical weather forecast data and the historical wind power data to 15 min;

(II) supplementing missing data with the data of the previous moment;

(III) replacing historical power data smaller than 0 with 0;

(IV) interpolating abnormal data by replacing them with the average of the values at the adjacent moments;

(V) replacing historical power data values exceeding the threshold with the rated power of the wind turbine.
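A sketch of rules (I)-(V) using pandas, assuming a DataFrame indexed by timestamps with a 'power' column; the column name, the abnormal-data criterion and the choice of threshold are assumptions made only for illustration.

```python
import pandas as pd

def preprocess(df: pd.DataFrame, rated_power: float, threshold: float) -> pd.DataFrame:
    """Apply cleaning rules (I)-(V) to a frame indexed by timestamps."""
    df = df.asfreq("15min")                           # (I) 15 min time resolution
    df = df.ffill()                                   # (II) fill missing data with the previous moment
    df.loc[df["power"] < 0, "power"] = 0.0            # (III) negative historical power -> 0
    # (IV) points flagged as abnormal (criterion not specified here) could be
    # replaced by the mean of the adjacent moments, e.g.
    #   df.loc[mask, "power"] = (df["power"].shift(1) + df["power"].shift(-1)) / 2
    df.loc[df["power"] > threshold, "power"] = rated_power   # (V) clip to the rated turbine power
    return df
```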
3. The EWT-PDBN combination-based short-term wind power prediction method as claimed in claim 1, wherein in step B all data values are normalized so that they fall within the range [0, 1], the normalization formula being:

x_g = (x − x_min) / (x_max − x_min)   (1)

wherein x_g is the normalized data; x, x_min and x_max are the raw data value, the minimum value in the data and the maximum value in the data, respectively;
in step E, the data set formed by the weighted superposition is denormalized so that it regains its physical meaning, the denormalization formula being:

x_f = x′·(x_max − x_min) + x_min   (2)

wherein x′ is the output value of the EWT-PDBN prediction model and x_f is the wind power prediction value obtained by the inverse normalization.
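A minimal sketch of formulas (1) and (2), assuming formula (1) is the min–max scaling whose inverse is formula (2); the example values are illustrative.

```python
import numpy as np

def normalize(x, x_min, x_max):
    return (x - x_min) / (x_max - x_min)          # formula (1)

def denormalize(x_pred, x_min, x_max):
    return x_pred * (x_max - x_min) + x_min       # formula (2)

p = np.array([0.0, 350.0, 700.0])                 # illustrative power values in kW
p_n = normalize(p, p.min(), p.max())
assert np.allclose(denormalize(p_n, p.min(), p.max()), p)
```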
4. The EWT-PDBN combination-based short-term wind power prediction method according to claim 1, wherein in step C the normalized historical average wind power data, which are non-stationary, are decomposed by the EWT algorithm to obtain n subsequences of different modal components, each subsequence being defined as a group of amplitude-modulated and frequency-modulated signals, and the EWT algorithm is implemented by the following steps:
(C1) the Fourier axis [0, π] is adaptively partitioned into N contiguous segments with boundaries ω_n; each segment is denoted Λ_n = [ω_{n−1}, ω_n]; around each boundary ω_n a transition region centred on ω_n is defined, with width 2τ_n;
(C2) on each segment Λ_n, following the Littlewood–Paley and Meyer wavelet construction, the empirical wavelet function and the empirical scale function are defined in the frequency domain as formula (3) and formula (4):

ψ̂_n(ω) = 1, if ω_n + τ_n ≤ |ω| ≤ ω_{n+1} − τ_{n+1};
ψ̂_n(ω) = cos[(π/2)·β((|ω| − ω_{n+1} + τ_{n+1}) / (2τ_{n+1}))], if ω_{n+1} − τ_{n+1} ≤ |ω| ≤ ω_{n+1} + τ_{n+1};
ψ̂_n(ω) = sin[(π/2)·β((|ω| − ω_n + τ_n) / (2τ_n))], if ω_n − τ_n ≤ |ω| ≤ ω_n + τ_n;
ψ̂_n(ω) = 0, otherwise   (3)

φ̂_n(ω) = 1, if |ω| ≤ ω_n − τ_n;
φ̂_n(ω) = cos[(π/2)·β((|ω| − ω_n + τ_n) / (2τ_n))], if ω_n − τ_n ≤ |ω| ≤ ω_n + τ_n;
φ̂_n(ω) = 0, otherwise   (4)

wherein τ_n = γ·ω_n, and the ratio γ satisfies γ < min_n[(ω_{n+1} − ω_n) / (ω_{n+1} + ω_n)];
the function β(x) satisfies formula (5) and formula (6):

β(x) = 0 if x ≤ 0, and β(x) = 1 if x ≥ 1   (5)

β(x) + β(1 − x) = 1, for all x ∈ [0, 1]   (6)
a function satisfying the above properties is β(x) = x^4·(35 − 84x + 70x^2 − 20x^3); for any γ > 0 small enough, with τ_n = γω_n the empirical wavelet function and the scale function are further reduced to formula (7) and formula (8):

ψ̂_n(ω) = 1, if (1 + γ)·ω_n ≤ |ω| ≤ (1 − γ)·ω_{n+1};
ψ̂_n(ω) = cos[(π/2)·β((|ω| − (1 − γ)·ω_{n+1}) / (2γω_{n+1}))], if (1 − γ)·ω_{n+1} ≤ |ω| ≤ (1 + γ)·ω_{n+1};
ψ̂_n(ω) = sin[(π/2)·β((|ω| − (1 − γ)·ω_n) / (2γω_n))], if (1 − γ)·ω_n ≤ |ω| ≤ (1 + γ)·ω_n;
ψ̂_n(ω) = 0, otherwise   (7)

φ̂_n(ω) = 1, if |ω| ≤ (1 − γ)·ω_n;
φ̂_n(ω) = cos[(π/2)·β((|ω| − (1 − γ)·ω_n) / (2γω_n))], if (1 − γ)·ω_n ≤ |ω| ≤ (1 + γ)·ω_n;
φ̂_n(ω) = 0, otherwise   (8)
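The transition function β(x) and its boundary properties (5) and (6) can be checked with a few lines of NumPy; this is only a sketch, with the clipping to [0, 1] standing in for the boundary conditions of formula (5).

```python
import numpy as np

def beta(x):
    """beta(x) = x^4 (35 - 84x + 70x^2 - 20x^3); clipping enforces beta = 0 for
    x <= 0 and beta = 1 for x >= 1, as required by formula (5)."""
    x = np.clip(x, 0.0, 1.0)
    return x ** 4 * (35 - 84 * x + 70 * x ** 2 - 20 * x ** 3)

xs = np.linspace(0.0, 1.0, 101)
assert np.allclose(beta(xs) + beta(1.0 - xs), 1.0)   # property (6)
```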
(C3) the detail coefficients W_f^ε(n, t) and the approximation coefficients W_f^ε(0, t) of the empirical wavelet transform are determined from the empirical wavelet function and the scale function; the Fourier transform and the inverse Fourier transform are denoted F[·] and F^{-1}[·], respectively; the detail coefficients W_f^ε(n, t) are obtained from the inner product of the signal with the empirical wavelet function:

W_f^ε(n, t) = ⟨f, ψ_n⟩ = F^{-1}[ F[f](ω) · (F[ψ_n](ω))^* ]   (9)

wherein "⟨·⟩" is the inner product operation; ψ_n(t) is the empirical wavelet function; F[ψ_n](ω) is the Fourier transform of ψ_n(t); (·)^* denotes complex conjugation;

the approximation coefficients W_f^ε(0, t) are obtained from the inner product of the signal with the scale function:

W_f^ε(0, t) = ⟨f, φ_1⟩ = F^{-1}[ F[f](ω) · (F[φ_1](ω))^* ]   (10)

wherein "⟨·⟩" is the inner product operation; φ_1(t) is the empirical scale function; F[φ_1](ω) is the Fourier transform of φ_1(t); (·)^* denotes complex conjugation;

(C4) the original signal is reconstructed from the detail coefficients W_f^ε(n, t) and the approximation coefficients W_f^ε(0, t) of the empirical wavelet transform according to:

f(t) = W_f^ε(0, t) * φ_1(t) + Σ_{n=1}^{N} W_f^ε(n, t) * ψ_n(t)   (11)

wherein "*" is the convolution operation; the original signal is then decomposed into empirical mode components according to the following formula:

f_0(t) = W_f^ε(0, t) * φ_1(t),  f_k(t) = W_f^ε(k, t) * ψ_k(t)   (12)
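Assuming the segment boundaries ω_n from step (C1) are already known, steps (C2)-(C4) can be sketched in NumPy as below; the boundary value, the ratio γ and the test signal are illustrative assumptions, and the adaptive boundary detection itself is not shown.

```python
import numpy as np

def beta(x):
    x = np.clip(x, 0.0, 1.0)
    return x ** 4 * (35 - 84 * x + 70 * x ** 2 - 20 * x ** 3)

def scale_filter(w, wn, gamma):
    """Empirical scale function of formula (8), evaluated on |omega| = w."""
    f = np.zeros_like(w)
    f[w <= (1 - gamma) * wn] = 1.0
    band = (w >= (1 - gamma) * wn) & (w <= (1 + gamma) * wn)
    f[band] = np.cos(np.pi / 2 * beta((w[band] - (1 - gamma) * wn) / (2 * gamma * wn)))
    return f

def wavelet_filter(w, wn, wn1, gamma):
    """Empirical wavelet of formula (7) for the band [omega_n, omega_n+1]."""
    f = np.zeros_like(w)
    f[(w >= (1 + gamma) * wn) & (w <= (1 - gamma) * wn1)] = 1.0
    up = (w >= (1 - gamma) * wn1) & (w <= (1 + gamma) * wn1)
    f[up] = np.cos(np.pi / 2 * beta((w[up] - (1 - gamma) * wn1) / (2 * gamma * wn1)))
    low = (w >= (1 - gamma) * wn) & (w <= (1 + gamma) * wn)
    f[low] = np.sin(np.pi / 2 * beta((w[low] - (1 - gamma) * wn) / (2 * gamma * wn)))
    return f

def ewt_modes(signal, boundaries, gamma=0.05):
    """Empirical mode components per formulas (9)-(12) for a 1-D signal;
    `boundaries` are the segment limits omega_n inside (0, pi)."""
    spectrum = np.fft.fft(signal)
    w = np.abs(np.fft.fftfreq(len(signal)) * 2 * np.pi)   # |omega| in [0, pi]
    omegas = list(boundaries) + [np.pi]
    filters = [scale_filter(w, omegas[0], gamma)]         # approximation (low-pass) filter
    filters += [wavelet_filter(w, omegas[k], omegas[k + 1], gamma)
                for k in range(len(omegas) - 1)]
    # coefficients: F^-1[F[f] * conj(filter)]; each mode is the coefficient series
    # filtered once more by the same real-valued filter, as in formula (12)
    return [np.real(np.fft.ifft(spectrum * f * f)) for f in filters]

t = np.arange(1024)
x = np.sin(0.2 * t) + 0.5 * np.sin(1.5 * t)   # two tones split at an assumed boundary
modes = ewt_modes(x, boundaries=[0.5])
print(len(modes), np.max(np.abs(sum(modes) - x)))
```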
5. The EWT-PDBN combination-based short-term wind power prediction method as claimed in claim 1, wherein in step E the n groups of prediction data are superposed, reconstructed and inverse-normalized to give the final wind power prediction result, and the prediction result is then analysed with error evaluation indexes; in order to evaluate the accuracy of each wind power prediction model intuitively and to analyse and compare the quality of the models in detail, the average relative error MRE, the root mean square error RMSE, the accuracy R^2 and the Pearson correlation coefficient e_PR are selected as the error analysis indexes of the wind power prediction model;
the average relative error is formulated as:

MRE = (1/N) · Σ_{i=1}^{N} |(y_i − ŷ_i) / y_i|

the root mean square error formula is:

RMSE = sqrt[ (1/N) · Σ_{i=1}^{N} (y_i − ŷ_i)^2 ]

the accuracy formula is:

R^2 = 1 − Σ_{i=1}^{N} (y_i − ŷ_i)^2 / Σ_{i=1}^{N} (y_i − ȳ)^2

the Pearson correlation coefficient formula is:

e_PR = Σ_{i=1}^{N} (y_i − ȳ)·(ŷ_i − ȳ′) / sqrt[ Σ_{i=1}^{N} (y_i − ȳ)^2 · Σ_{i=1}^{N} (ŷ_i − ȳ′)^2 ]
wherein y_i and ȳ are the true value of the ith test sample and the average of the true values of the test samples, respectively; ŷ_i and ȳ′ are the predicted value of the ith test sample and the average of the predicted values of the test samples, respectively; N is the number of samples in the test set; among the four error evaluation indexes, MRE and RMSE are error indexes, and the smaller their values, the higher the prediction precision; R^2 and e_PR express the prediction accuracy, and the larger their values, the higher the prediction accuracy.
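The four indexes can be computed directly from the test-set vectors, as in the following sketch; the sample values are illustrative, and the MRE is taken relative to the true values as written above.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return MRE, RMSE, R^2 and the Pearson coefficient e_PR."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mre = np.mean(np.abs((y_true - y_pred) / y_true))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    e_pr = (np.sum((y_true - y_true.mean()) * (y_pred - y_pred.mean()))
            / np.sqrt(np.sum((y_true - y_true.mean()) ** 2)
                      * np.sum((y_pred - y_pred.mean()) ** 2)))
    return mre, rmse, r2, e_pr

print(evaluate([2.0, 3.5, 5.0, 4.2], [2.2, 3.3, 4.8, 4.5]))   # illustrative values in MW
```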
CN202010581545.1A 2020-06-23 2020-06-23 Short-term wind power prediction method based on EWT-PDBN combination Active CN111697621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010581545.1A CN111697621B (en) 2020-06-23 2020-06-23 Short-term wind power prediction method based on EWT-PDBN combination

Publications (2)

Publication Number Publication Date
CN111697621A CN111697621A (en) 2020-09-22
CN111697621B (en) 2021-08-24

Family

ID=72483408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010581545.1A Active CN111697621B (en) 2020-06-23 2020-06-23 Short-term wind power prediction method based on EWT-PDBN combination

Country Status (1)

Country Link
CN (1) CN111697621B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263971A (en) * 2019-05-14 2019-09-20 西安理工大学 Super short-period wind power combination forecasting method based on support vector machines
CN112329979A (en) * 2020-09-23 2021-02-05 燕山大学 Ultra-short-term wind power prediction method based on self-adaptive depth residual error network
CN112200384B (en) * 2020-10-28 2024-05-17 宁波立新科技股份有限公司 EWT neural network-based short-time prediction method for power load
CN112434848B (en) * 2020-11-19 2023-06-16 西安理工大学 Nonlinear weighted combination wind power prediction method based on deep belief network
CN112733692B (en) * 2021-01-04 2021-11-30 润联智慧科技(西安)有限公司 Fault prediction method and device based on integrated hybrid model and related equipment
CN113033904B (en) * 2021-04-02 2022-09-13 合肥工业大学 Wind power prediction error analysis and classification method based on S transformation
CN113507118B (en) * 2021-07-11 2022-05-13 湘潭大学 Wind power prediction method and system
CN113591382B (en) * 2021-08-02 2023-08-04 东北电力大学 Ultra-short-term rolling prediction method based on WT-TCN wind power
CN114493051A (en) * 2022-04-08 2022-05-13 南方电网数字电网研究院有限公司 Photovoltaic power prediction method and device for improving precision based on combined prediction
CN115081705A (en) * 2022-06-16 2022-09-20 华能扎赉特旗太阳能光伏发电有限公司科右中旗分公司 Wind power prediction method based on ITD-ELM
CN116979533B (en) * 2023-09-25 2023-12-08 西南石油大学 Self-attention wind farm power prediction method integrating adaptive wavelet
CN118296468B (en) * 2024-05-30 2024-08-09 厦门锋元机器人有限公司 Aluminum welding defect detection method and system based on artificial intelligence

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100802402B1 (en) * 2006-06-15 2008-02-14 한국에너지기술연구원 Forecasting method of wind power generation by classification of wind speed patterns
CN103400052A (en) * 2013-08-22 2013-11-20 武汉大学 Combined method for predicting short-term wind speed in wind power plant
CN104899665A (en) * 2015-06-19 2015-09-09 国网四川省电力公司经济技术研究院 Wind power short-term prediction method
CN106846173A (en) * 2016-12-30 2017-06-13 国网新疆电力公司电力科学研究院 Short-term wind power forecast method based on EWT ESN
CN107632258A (en) * 2017-09-12 2018-01-26 重庆大学 A kind of fan converter method for diagnosing faults based on wavelet transformation and DBN
CN109063915A (en) * 2018-08-10 2018-12-21 广东工业大学 Short-term wind speed forecasting method, device, equipment, system and storage medium
CN111162551A (en) * 2020-01-15 2020-05-15 国网内蒙古东部电力有限公司 Storage battery charging and discharging control method based on wind power ultra-short term prediction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Short-term wind speed prediction method for wind farms based on wavelet transform and deep belief network; Li Gangqiang; China Master's Theses Full-text Database, Engineering Science and Technology II (electronic journal); 2017-07-15 (No. 7); pp. 8-27 *
Research on short-term combined power prediction of wind farms based on an improved deep belief network; Chen Zucheng; China Master's Theses Full-text Database, Engineering Science and Technology II (electronic journal); 2020-01-15 (No. 1); pp. 9-37 *
Short-term wind power prediction based on empirical wavelet transform and multiple kernel learning; Li Jun et al.; Information and Control; 2018-08-30; Vol. 47, No. 4; pp. 437-447 *

Similar Documents

Publication Publication Date Title
CN111697621B (en) Short-term wind power prediction method based on EWT-PDBN combination
CN111860982B (en) VMD-FCM-GRU-based wind power plant short-term wind power prediction method
CN109214575B (en) Ultrashort-term wind power prediction method based on small-wavelength short-term memory network
CN111680446B (en) Rolling bearing residual life prediction method based on improved multi-granularity cascade forest
CN110175386B (en) Method for predicting temperature of electrical equipment of transformer substation
CN109886464B (en) Low-information-loss short-term wind speed prediction method based on optimized singular value decomposition generated feature set
CN110991721A (en) Short-term wind speed prediction method based on improved empirical mode decomposition and support vector machine
CN109583588B (en) Short-term wind speed prediction method and system
CN115511177A (en) Ultra-short-term wind speed prediction method based on INGO-SWGMN hybrid model
CN106897794A (en) A kind of wind speed forecasting method based on complete overall experience mode decomposition and extreme learning machine
Chen et al. Research on wind power prediction method based on convolutional neural network and genetic algorithm
CN115561005A (en) Chemical process fault diagnosis method based on EEMD decomposition and lightweight neural network
CN113850438A (en) Public building energy consumption prediction method, system, equipment and medium
CN117592593A (en) Short-term power load prediction method based on improved quadratic modal decomposition and WOA optimization BILSTM-intent
Qiu et al. Fault diagnosis of analog circuits based on wavelet packet energy entropy and DBN
Chen et al. Pseudo-label guided sparse deep belief network learning method for fault diagnosis of radar critical components
CN116960978A (en) Offshore wind power prediction method based on wind speed-power combination decomposition reconstruction
CN117239722A (en) System wind load short-term prediction method considering multi-element load influence
CN116341717A (en) Wind speed prediction method based on error compensation
CN117407660B (en) Regional sea wave forecasting method based on deep learning
CN113361782B (en) Photovoltaic power generation power short-term rolling prediction method based on improved MKPLS
CN113433514B (en) Parameter self-learning interference suppression method based on expanded deep network
CN117689082A (en) Short-term wind power probability prediction method, system and storage medium
Xu et al. The hidden-layers topology analysis of deep learning models in survey for forecasting and generation of the wind power and photovoltaic energy
Zhang et al. Combined wind speed prediction model considering the spatio-temporal features of wind farm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant