CN110009160A - Electricity price prediction method based on an improved deep belief network - Google Patents

Electricity price prediction method based on an improved deep belief network Download PDF

Info

Publication number
CN110009160A
CN110009160A (application CN201910289389.9A)
Authority
CN
China
Prior art keywords
network
dbn
data
model
svr
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910289389.9A
Other languages
Chinese (zh)
Inventor
翟莹莹
李艾玲
郭志
吕振辽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China
Priority to CN201910289389.9A
Publication of CN110009160A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G06Q30/0206 Price or cost determination based on market factors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0283 Price estimation or determination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Data Mining & Analysis (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Operations Research (AREA)
  • Primary Health Care (AREA)
  • Quality & Reliability (AREA)
  • Water Supply & Treatment (AREA)
  • Public Health (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an electricity price prediction method based on an improved deep belief network. The steps are as follows: according to the characteristics of the electricity price data and the factors influencing the price, divide the data set, determine the network data input, and preprocess the data set; for the preprocessed data set, determine the number of RBM layers of the model by computing the network error with the second-order reconstruction error; optimize the number of neuron nodes in the network with a "three+two" search algorithm that combines ternary search and binary search; use a BP neural network and an SVR support vector regression machine, respectively, as the regression layer of the DBN network, and, combining the number of RBM layers with the optimized number of neuron nodes, construct a structure-optimized DBN-BP model and a structure-optimized DBN-SVR model to predict real-time electricity price data. The invention establishes a structure-optimized DBN model and improves the regression layer of the network with different combinations, which improves the prediction accuracy of the DBN and gives the method good application prospects.

Description

Electricity price prediction method based on an improved deep belief network
Technical field
The present invention relates to electricity price prediction techniques, and in particular to an electricity price prediction method based on an improved deep belief network.
Background art
With the development of the energy Internet and the gradual deepening of China's electric power reform, a large number of small and medium-sized power enterprises and power users have entered the market, and there is a strong demand for obtaining electricity market information from the Internet faster and more accurately and for more convenient services. In a competitive electricity market environment, electricity is traded as a commodity, and the electricity price is the general term for the price of a unit of electric energy. In market-based electricity trading, the participants include the generating side, the electricity-selling side and the electricity-purchasing side, all of whom pay close attention to how the price of electric energy is determined. The electricity price is a vital factor in electricity trading: it determines the flow and allocation of electric energy in the electricity market, reflects the supply-demand relationship in the market, and is also an economic weather vane for controlling market transactions. Electricity market reform is now taking place in countries all over the world, so the electricity price occupies an increasingly important position in the electricity market. Researchers and power enterprises at home and abroad are devoted to the study of every link in the industry, among which electricity price prediction has received particular attention.
So-called electricity price prediction means, under a market environment, using different mathematical tools and prediction methods, taking into account the various factors that influence the price, analyzing and summarizing historical prices and influencing factors, exploring the inner relationships and patterns of change in the data, and making a reasonable forecast of the future price trend in the electricity market. The predicted price is real-time, falls within an acceptable accuracy range, can be obtained quickly, and is used to guide trading in the electricity market over a certain period.
In a competitive electricity market, competition between different power enterprises becomes increasingly fierce and trading activity increasingly frequent; at the same time, the range of market participants expands geometrically, which affects the choices made by trading parties in market transactions. Taking bilateral trading in the electricity market as an example, transactions are organized by a power exchange; under government-supervised trading rules the two parties negotiate the price and quantity themselves and, after reaching a bilateral trading agreement, submit it to the power trading center, which completes the transaction according to the aggregated trading information and organizes security verification and execution. In this process, an accurate price forecast becomes an important indicator for trading participants when selecting transactions: understanding the price trend benefits both parties, reduces their bidding costs, and brings stable and considerable income to market participants.
Electricity price prediction in the electricity market has always attracted attention. Through price prediction, the operating trend of the electricity market can be understood directly; it is an important indicator for evaluating the efficiency of market competition, can influence the trading decisions of market participants, optimize resource allocation and improve the economic benefit of enterprises, thereby improving the economic benefit of society as a whole, ensuring its sustainable development and promoting its overall progress.
Common price prediction methods have not combined deep learning with the existing market. With the development of deep learning and the growth of price data in the market, a new prediction method that duly incorporates deep learning theory can be proposed, applying deep belief networks to the field of electricity price prediction. However, owing to the structural characteristics of a deep belief network, different numbers of neural layers and of neural network nodes all affect the model results; how to determine an appropriate number of neural layers and neuron nodes has therefore become a technical difficulty.
Summary of the invention
In view of the deficiency in the prior art that determining an appropriate number of neural layers and neuron nodes for an electricity price prediction model is a technical difficulty, the problem to be solved by the present invention is to provide an electricity price prediction method based on an improved deep belief network that can improve the prediction accuracy of the DBN.
In order to solve the above technical problem, the technical solution adopted by the present invention is as follows:
The electricity price prediction method based on an improved deep belief network of the present invention comprises the following steps:
1) according to the characteristics of the electricity price data and the factors influencing the price, dividing the data set, determining the network data input, and preprocessing the data set;
2) for the preprocessed data set, determining the number of RBM layers of the model by computing the network error with the second-order reconstruction error;
3) optimizing the number of neuron nodes in the network with a "three+two" search algorithm that combines ternary search and binary search;
4) using a BP neural network and an SVR support vector regression machine, respectively, as the regression layer of the DBN network, and, combining the number of RBM layers with the optimized number of neuron nodes, constructing a structure-optimized DBN-BP model and a structure-optimized DBN-SVR model to predict real-time electricity price data.
In step 3), optimizing the number of neuron nodes in the network with the "three+two" search algorithm that combines ternary search and binary search means optimizing the calculation process on the basis of conventional binary search and an improved ternary search, specifically:
301) in the interval-shrinking ternary search, the value range of the data is set to [a, d] and four points x_a, x_b, x_c, x_d are chosen, dividing the data domain into three intervals, where x_a is the boundary point a, x_d is the boundary point d, x_b is the point one third of the way through the interval [a, d] and x_c is the point two thirds of the way through the interval [a, d]; the first to third slopes k1, k2, k3 of the three sub-intervals after division are then computed separately, with the following selection rule:
the first slope k1 < 0, the third slope k3 > 0, and the absolute value of the second slope k2 is the smallest;
302) using binary search: since the minimum point lies in the interval [b, c] whose slope has the smallest absolute value, a binary search, i.e. a search over ordered data, is performed on the interval [b, c] containing the minimum, and the binary search is iterated until the minimum point is found, thereby determining the number of hidden-layer nodes.
In step 301), if the data curve oscillates and the slopes are therefore hard to judge, the incremental method is selected for the calculation: the number of neurons is increased one by one and the error is then evaluated. When judging the general shape of the data curve, the three slopes are used; according to the different combinations of the slopes the following cases arise:
301A) k1 < 0 && k2 > 0 && k3 < 0;
a slope less than 0 indicates that the error decreases as the number of neurons increases within that interval; conversely, a slope greater than 0 indicates that the error is increasing; here the error falls in the first data segment, rises in the second data segment and falls again in the third data segment, i.e. the data curve oscillates;
301B) k1 < 0 && k2 > 0 && k3 > 0
indicates that the computed error first falls and then rises as the number of neurons grows; the minimum error lies in the first two segments, and the absolute values of the slopes need to be compared;
301C) k1 < 0 && k2 < 0 && k3 > 0
indicates that the error first falls and then rises; the minimum point may lie in the latter two segments; as in case 301B), the absolute values of the slopes are compared to determine the interval containing the minimum;
301D) k1 < 0 && k2 < 0 && k3 < 0
when the slopes of all three data intervals are less than 0, the computed error decreases monotonically as neural units are added, and the minimum point lies in the last data interval;
301E) k1 > 0 && k2 > 0 && k3 > 0
when the slopes of all three data segments are greater than 0, the error increases monotonically, so the minimum error point lies in the first interval.
The method further comprises step 301F): according to the different permutations of the three slopes there are eight possible cases for the slopes of the three intervals; in the remaining cases the calculation uses the incremental method;
alternatively, when the optimal point lies on an interval boundary, a regulatory factor m is added to the algorithm during the selection of the value interval to change the value range of the interval.
In step 4), the structure-optimized DBN-SVR model is constructed as follows:
following the methods used in conventional neural networks to determine the number of hidden layers, and combining the characteristics of the DBN, experiments are carried out on the RBMs; the number of hidden layers is improved and optimized with the reconstruction error method, and the number of DBN neural-unit layers is determined from the computed reconstruction error; the procedure is as follows:
401) preprocessing the prediction data set and extracting the corresponding data-set features as training data;
402) constructing the DBN model, initializing the network model parameters, the number of training epochs and the learning rate, training the RBM network layer by layer with the prepared training data, and fine-tuning the network parameters with the BP back-propagation algorithm;
403) optimizing the DBN network structure: according to the structure-optimization algorithm, modifying the network depth and the number of neuron nodes of the DBN;
404) training and optimizing the SVR model: initializing the network parameters of the regression layer and the kernel-function parameters of the SVR model, feeding the data trained by the structure-optimized DBN network into the SVR model, adjusting the model parameters, and constructing the structure-optimized DBN-SVR prediction model;
405) testing the model: verifying the DBN-SVR model with the test set; after the input test data set passes through the trained model, the learned data are output, and the final prediction result is obtained by denormalization.
In step 404), constructing the structure-optimized DBN-SVR prediction model means using the SVR network as the regression layer of the structure-optimized DBN model, using the SVR network to fit the data produced by the structure-optimized DBN training, and then fine-tuning the parameters of the SVR network model to construct the structure-optimized DBN-SVR model.
In step 405), the model is tested as follows:
in the structure-optimized DBN-SVR network, the training data set first passes through the DBN model to extract feature values, and the DBN network structure is optimized according to the stochastic gradient descent algorithm; the structure-optimized DBN model then learns the training data, and its output is used as the input of the SVR model, i.e. the input of the DBN regression layer; the SVR model parameters are then optimized and improved with the kernel function; finally the structure-optimized DBN model and the trained SVR parameter model are combined to construct the structure-optimized DBN-SVR model for data prediction.
In step 403), the DBN network structure is optimized as follows:
403A) randomly initializing the parameters in the network and propagating the input sample data forward using the CD-1 algorithm, thereby completing the pre-training of the network;
403B) fine-tuning the network: using formula (1.15) as the loss function of the error adjustment, computing the partial derivatives of the loss function and training the DBN network parameters in reverse:
e = ||y - ŷ||₂²   (1.15)
where y is the input sample, ŷ is the output of the DBN-BP network training, and ||·||₂ denotes the reconstruction error in the second-order norm;
and updating the network parameters:
η ← η - ε · ∂e/∂η
where ε is the model learning rate, η = {w, b, β} are the network parameters, w, b and β being respectively the network weights, the bias vectors of the input and output layers, and the weights of the regression layer, and e is the network error;
403C) combining the second-order reconstruction error algorithm with the "three+two" search method to determine the optimal network depth and the number of neuron nodes in each layer, and constructing the structure-optimized DBN-BP model H1 for the final application of the model.
The present invention has the following beneficial effects and advantages:
1. The invention proposes an electricity price prediction method based on an improved deep belief network, explores the application of deep belief networks in the field of electricity price prediction, optimizes the network structure of the DBN, establishes a structure-optimized DBN model, and improves the regression layer of the network with different combinations, thereby improving the prediction accuracy of the DBN.
2. The invention proposes a structure-optimized DBN-BP model and a structure-optimized DBN-SVR model; verification and analysis with real-time data from five regions of the Australian electricity market demonstrate the superiority of the improved models, which have good application prospects in real-time electricity price prediction.
Description of the drawings
Fig. 1 is a flow chart of training the number of DBN hidden layers in the present invention;
Fig. 2 is a data chart of the ternary search algorithm involved in the present invention;
Fig. 3 is a flow chart of the structure-optimized DBN-SVR model in the present invention;
Fig. 4 is a comparison chart of real-time electricity price data in the present invention;
Fig. 5 is a comparison chart of prediction errors in the present invention;
Fig. 6 is a comparison of the fitted data of different regions on June 1 in the present invention;
Fig. 7 is a comparison chart of the prediction results of different regions in the present invention.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings.
The electricity price prediction method based on an improved deep belief network of the present invention comprises the following steps:
1) according to the characteristics of the electricity price data and the factors influencing the price, dividing the data set, determining the network data input, and preprocessing the data set;
2) for the preprocessed data set, determining the number of RBM layers of the model by computing the network error with the second-order reconstruction error;
3) optimizing the number of neuron nodes in the network with the "three+two" search algorithm that combines ternary search and binary search;
4) using a BP neural network and an SVR support vector regression machine, respectively, as the regression layer of the DBN network, and, combining the number of RBM layers with the optimized number of neuron nodes, constructing a structure-optimized DBN-BP model and a structure-optimized DBN-SVR model to predict real-time electricity price data.
The present invention improves and optimizes the network structure and the regression layer of the deep belief network and constructs an improved deep-belief-network electricity price prediction method.
To optimize the number of network layers of the deep belief network, the invention proposes a layer-optimized DBN model: using the second-order reconstruction error and the network output, the number of RBM layers is determined by optimization, so that the DBN model has the optimal number of network layers and the prediction accuracy of the model is improved.
To optimize the number of neuron nodes in the deep belief network, the invention proposes a node-optimized DBN model: a "three+two" search algorithm is proposed to determine the optimal number of neuron nodes in the network, so that the DBN model has an appropriate number of neuron nodes and the prediction accuracy of the model is further improved.
For the DBN model, the neural network of the final output layer is modified: a BP neural network and a support vector regression machine (SVR) are used respectively as the top neural network layer for data tuning, so that the model better matches the true patterns.
For an artificial neural network with multiple hidden layers, traditional calculation methods mostly use trial and error or determine the number of hidden layers directly from an empirical formula. This practice is based on previous experimental results; it can conveniently determine the required number of hidden layers, but it cannot guarantee that the chosen value is optimal. Trial and error means first setting a small initial number of hidden layers, for example a single hidden layer, then gradually increasing the number of layers and determining a more appropriate number experimentally; this requires experiments on many groups of data and involves a large workload. The present invention follows the methods previously used in conventional neural networks to determine the number of hidden layers, combines them with the characteristics of the DBN, and carries out experiments on the RBMs, mainly using the reconstruction error method to improve and optimize the number of hidden layers.
The reconstruction error (Reconstruction Error, RE) method is a common method in deep learning; its purpose is to find, through the changing error, an actual value that satisfies a given threshold. The present invention uses a reconstruction-error-based DBN depth determination method to optimize the number of RBM layers in the DBN.
In the DBN training process, the processed data set is first used as the input data for initial training; then, through one Gibbs sampling of the RBM, the output of the first layer is trained as the error-transfer data of the reconstruction error, and the difference between the data after one pass of training and the actual true value is taken as the parameter. In some literature a first-order norm is usually used to compute the reconstruction error value; the first-order norm formula is as follows:
In formula (1.1), n is the number of input samples, i.e. the number of input-layer nodes, m is the number of hidden-layer neurons, i.e. of output-layer neurons, p_(i,j) is the final value computed by the network model, d_(i,j) is the true value of the real data, and p_x is the number or range of data values.
In the calculation of the reconstruction error, the smaller the computed RE value, the better the effect obtained. Applied to the determination of the number of hidden layers of the DBN model, a smaller RE value indicates that the number of RBMs chosen for the DBN is more appropriate. The first-order-norm expression in formula (1.1) shows that the reconstruction error is positively correlated with the network energy. In the application of the DBN network, however, the RBM network parameters are randomly initialized within a given interval, and the first-order formula may produce negative values during the calculation, which is unfavorable for compiling the results. The present invention therefore uses the second-order norm of the reconstruction error, in which the reconstruction error is computed from the squared error between the training result and the actual value. The second-order norm formula is as follows:
RE = (1/N) Σ_i Σ_j (y_(i,j) - x_(i,j))²   (1.2)
In formula (1.2), y_(i,j) is the output value after network training and x_(i,j) is the true value of the data. As in the first-order formula, n and m are the numbers of neural units in the input layer and the output layer respectively, and N is the number of samples.
Assume that R is the true value of the data and C is the value computed by the DBN network; then
C = P(v), R = P(v0)   (1.3)
According to the DBN network computation,
C = P(v) = P(v0) P(h|v0) P(v|h)   (1.4)
Applying the total probability formula to formula (1.4) gives formula (1.5).
Simplifying formula (1.5) gives formula (1.6).
Reapplying the conditional probability formula to P(v0, h) gives formula (1.7).
Substituting C and R into the second-order reconstruction error formula gives formula (1.8).
Substituting formula (1.3) and formula (1.7) into formula (1.8) gives
RE' = (P(v0|h) P(v,h) - P(v0))² = P(v0)² (P(v,h) - 1)²   (1.9)
According to formula (1.3) and the energy function of the DBN,
RE' ∝ R² (E(v,h) - 1)² ∝ (E(v,h) - 1)² ∝ E(v,h)   (1.10)
It can be seen from formula (1.10) that the second-order reconstruction error is positively correlated with the DBN network energy, which represents the features that the training model extracts from the input data set. This positive correlation shows that, during DBN model training, the more accurately the network captures the feature values, the smaller the value of the second-order reconstruction error; therefore, when the network model has selected an appropriate number of hidden layers, the second-order reconstruction error of the model is minimized. On this basis, using the second-order reconstruction error to optimize the number of network layers is well founded. A sketch of this computation is given below.
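As an illustrative sketch only, the second-order reconstruction error of formula (1.2) can be computed for one RBM layer as follows; binary (sigmoid) units and the variable names V, W, b_visible, b_hidden are assumptions made for the example and do not come from the original disclosure.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def second_order_reconstruction_error(V, W, b_visible, b_hidden):
        # One mean-field Gibbs step v -> h -> v' of an RBM with weight matrix W,
        # visible bias b_visible and hidden bias b_hidden, followed by the averaged
        # squared difference between the data V and its reconstruction (formula (1.2)).
        H = sigmoid(V @ W + b_hidden)           # P(h = 1 | v)
        V_rec = sigmoid(H @ W.T + b_visible)    # reconstruction P(v = 1 | h)
        return np.mean(np.sum((V_rec - V) ** 2, axis=1))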
In step 3), optimizing the number of neuron nodes in the network with the "three+two" search algorithm that combines ternary search and binary search means optimizing the calculation process on the basis of conventional binary search and the improved ternary search, as shown in Fig. 2. Specifically:
301) in the interval-shrinking ternary search, the value range of the data is set to [a, d] and four points x_a, x_b, x_c, x_d are chosen, dividing the data domain into three intervals, where x_a is the boundary point a, x_d is the boundary point d, x_b is the point one third of the way through the interval [a, d] and x_c is the point two thirds of the way through the interval [a, d]; the first to third slopes k1, k2, k3 of the three sub-intervals after division are then computed separately, with the following selection rule:
as can be seen from Fig. 2, the computed slopes satisfy k1 < 0 and k3 > 0, and the absolute value of k2 is the smallest; the minimum point therefore lies in the interval [b, c] whose slope has the smallest absolute value; in order to find the minimum point quickly, a binary search is performed on the interval [b, c] containing the minimum, and the binary search is then iterated until the minimum point is found;
302) using binary search: since the minimum point lies in the interval [b, c] whose slope has the smallest absolute value, a binary search, i.e. a search over ordered data, is performed on the interval [b, c] containing the minimum, and the binary search is iterated until the minimum point is found, thereby determining the number of hidden-layer nodes.
In a neural network structure, besides determining the number of neural network layers, determining the number of neural-unit nodes in each layer is also very important. If the number of nodes is too small, the network training result may fail to converge; the larger the number of nodes, the stronger the mapping ability of the network structure, the more easily the training result converges to the global minimum, and the higher the training accuracy. However, if the number of nodes chosen is too large, the training result tends to overfit: the whole network model follows the transmitted data too closely, and when abnormal points (noise) appear in the data the model becomes very sensitive to them; the training result is then easily affected by data noise and biased toward extreme values, which reduces the fault tolerance of the model.
Therefore, selecting an appropriate number of neural units and optimizing the network structure benefits the further learning and training of the model. However, there has never been a unified calculation method for determining the number of nodes in each layer of a neural network. As with the determination of the number of neuron layers, many researchers at home and abroad have studied how to determine the number of neurons; the main methods proposed are the incremental method, trial and error, genetic algorithms and empirical formulas. The incremental method sets the initial number of neuron nodes to 1 and then adds one neural unit at a time for training until the optimal network structure is found; trial and error has no particular calculation rule compared with the incremental method, the number of neurons being chosen at random and the most appropriate number found through many experiments; a genetic algorithm computes the optimal network solution by reproducing the process of natural evolution; the empirical-formula method mostly follows rules summarized by predecessors. In general, the empirical formulas for determining the number of hidden-layer nodes in a three-layer network structure include formula (1.11) and formula (1.12):
H = log2 N   (1.12)
In formula (1.11), K is the number of samples, H is the number of hidden-layer nodes, N is the number of input-layer nodes, M is the number of output-layer nodes, and a ∈ [1, 10] is a constant between 1 and 10.
The above methods are widely used in conventional artificial neural networks but have not been widely applied in DBNs, and they still have some problems: the amount of calculation is too large and multiple groups of comparative experiments are needed; too much time is consumed, and even when comparative experiments are reduced, a long time is still wasted when the data volume is large; and the complexity of the algorithm is too high, since for some optimization algorithms that introduce new concepts the complexity may exceed that of the network structure itself, costing additional time and memory. It is therefore necessary to consider some relatively simple and effective algorithms for optimizing the number of neural-unit nodes. Some scholars determine the number of hidden-layer nodes with the bisection method: a binary search is used to search an ordered set of data, effectively reducing the time complexity of the algorithm to O(log2 n). Later researchers, building on the characteristics of the bisection method, proposed a ternary search that uses the slope between data points as the judgment criterion.
In the ternary search process, the value range of the data is first determined; the points at 1/3 and 2/3 of the value range are then chosen to divide the data into three small data segments, the slopes of the three data segments are computed separately and compared, an appropriate data segment is selected, and the ternary search is iterated on the data until the global optimum is found. The present invention analyzes the computational characteristics of the two methods, optimizes the calculation process on the basis of conventional binary search and the improved ternary search, and designs a "three+two" search algorithm that combines ternary and binary search.
In step 301), the "three+two" search algorithm is used to locate the minimum point in the data; the method assumes that the data curve follows a parabola-like pattern. If the data curve oscillates, the slopes are hard to judge, and the incremental method should be chosen for the calculation instead, i.e. the number of neurons is increased one by one and the error is then evaluated. When judging the general shape of the data curve, the three slopes can be used; according to the different combinations of the slopes the following cases arise:
301A) k1 < 0 && k2 > 0 && k3 < 0 ("&&" means "and", i.e. all conditions hold simultaneously)
A slope less than 0 indicates that the error decreases as the number of neurons increases within that interval; conversely, a slope greater than 0 indicates that the error is increasing. In this case the error falls in the first data segment, rises in the second data segment and falls again in the third data segment; the data curve oscillates.
301B) k1 < 0 && k2 > 0 && k3 > 0
In this case the computed error first falls and then rises as the number of neurons grows. The minimum error is therefore likely to lie in the first two segments, and the absolute values of the slopes need to be compared.
301C) k1 < 0 && k2 < 0 && k3 > 0
The error first falls and then rises; the minimum point may lie in the latter two segments. As in case 301B), comparing the absolute values of the slopes identifies the interval containing the minimum.
301D) k1 < 0 && k2 < 0 && k3 < 0
When the slopes of all three data intervals are less than 0, the computed error decreases monotonically as neural units are added, so the minimum point is likely to lie in the last data interval.
301E) k1 > 0 && k2 > 0 && k3 > 0
Similar to case 301D), when the slopes of all three data segments are greater than 0 the data error increases monotonically, so the minimum error point is likely to lie in the first interval.
Step 301F) covers the other cases: according to the different permutations of the three slopes there are eight possible cases for the slopes of the three intervals. Considering that in practical problems the error should decrease as neural units are added, with at most small increases, the "three+two" search algorithm is not well suited to finding the minimum error point in the remaining cases, and the incremental method can be used for the calculation instead.
During the selection of the value interval, the optimal point may lie on an interval boundary. In order not to lose the best value at a boundary point during the interval calculation, a regulatory factor m can be added to the algorithm to change the value range of the interval appropriately. A sketch of the "three+two" search is given below.
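The following is an illustrative Python sketch of the "three+two" search over the number of hidden-layer nodes. The callable error_fn (which trains the network for a given node count and returns its error), the slope-sign form of the binary step, the fallback to the incremental method and the stopping tolerance are assumptions made for the example, not a definitive implementation of the patented algorithm.

    def three_plus_two_search(error_fn, lo, hi, tol=1):
        # One ternary split of [lo, hi]: keep the sub-interval whose slope has the
        # smallest magnitude, then iterate a binary (slope-sign) search inside it.
        if hi - lo < 3:
            return min(range(lo, hi + 1), key=error_fn)
        a, d = lo, hi
        b = a + (d - a) // 3
        c = a + 2 * (d - a) // 3
        ea, eb, ec, ed = error_fn(a), error_fn(b), error_fn(c), error_fn(d)
        k1 = (eb - ea) / (b - a)     # slope of [a, b]
        k2 = (ec - eb) / (c - b)     # slope of [b, c]
        k3 = (ed - ec) / (d - c)     # slope of [c, d]
        if not (k1 < 0 and k3 > 0 and abs(k2) <= min(abs(k1), abs(k3))):
            # Oscillating or monotone error curve: fall back to the incremental method.
            return min(range(lo, hi + 1), key=error_fn)
        # Binary search on [b, c]: the minimum lies where the local slope changes sign.
        while c - b > tol:
            mid = (b + c) // 2
            if error_fn(mid + 1) < error_fn(mid):   # still descending
                b = mid + 1
            else:                                   # already past the minimum
                c = mid
        return b if error_fn(b) <= error_fn(c) else c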
As shown in Fig. 3, in step 4) the structure-optimized DBN-SVR model is constructed as follows:
following the methods used in conventional neural networks to determine the number of hidden layers, and combining the characteristics of the DBN, experiments are carried out on the RBMs; the number of hidden layers is improved and optimized with the reconstruction error method, and the number of DBN neural-unit layers is determined from the computed reconstruction error; the procedure is as follows:
401) the prediction data set is preprocessed and the corresponding data-set features are extracted as training data;
402) the DBN model is constructed; the network model parameters, the number of training epochs and the learning rate are initialized; the RBM network is trained layer by layer with the prepared training data; and the network parameters are fine-tuned with the BP back-propagation algorithm;
403) the DBN network structure is optimized: according to the structure-optimization algorithm, the network depth and the number of neuron nodes of the DBN are modified;
404) the SVR model is trained and optimized: the network parameters of the regression layer and the kernel-function parameters of the SVR model are initialized, the data trained by the structure-optimized DBN network are fed into the SVR model, the model parameters are adjusted, and the structure-optimized DBN-SVR prediction model is constructed;
405) the model is tested: the DBN-SVR model is verified with the test set; after the input test data set passes through the trained model, the learned data are output, and the final prediction result is obtained by denormalization.
In step 404), the construction of the structure-optimized DBN-SVR model in the present invention is similar to the construction of the DBN-BP model: the SVR network is used as the regression layer of the structure-optimized DBN model, the SVR network is used to fit the data produced by the structure-optimized DBN training, the parameters of the SVR network model are then fine-tuned, and finally the structure-optimized DBN-SVR model is constructed.
In step 405), to test the model, in the structure-optimized DBN-SVR network the training data set first passes through the DBN model to extract feature values, and the DBN network structure is optimized according to the stochastic gradient descent algorithm; the structure-optimized DBN model then learns the training data, and its output is used as the input of the SVR model, i.e. the input of the DBN regression layer; the SVR model parameters are then optimized and improved with the kernel function; finally the structure-optimized DBN model and the trained SVR parameter model are combined to construct the structure-optimized DBN-SVR model for data prediction. The present invention chooses the RBF kernel as the kernel function applied in the SVR. A sketch of this pipeline is given below.
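As an illustrative sketch only, the DBN-SVR idea can be assembled from scikit-learn building blocks: stacked RBMs serve as the pre-trained feature extractor and an RBF-kernel SVR serves as the regression layer. The layer sizes, learning rate and SVR hyper-parameters below are assumptions for the example and not values taken from the invention.

    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVR

    def build_dbn_svr(layer_sizes=(64, 32), learning_rate=0.05, n_iter=20):
        # Stacked RBMs (the DBN feature extractor) followed by an RBF-kernel SVR
        # acting as the regression layer.
        rbm_layers = [
            ("rbm%d" % i, BernoulliRBM(n_components=n, learning_rate=learning_rate,
                                       n_iter=n_iter, random_state=0))
            for i, n in enumerate(layer_sizes)
        ]
        return Pipeline(rbm_layers + [("svr", SVR(kernel="rbf", C=10.0, epsilon=0.01))])

    # Usage sketch: X_train holds the normalized 18-dimensional input features,
    # y_train the normalized real-time prices.
    # model = build_dbn_svr().fit(X_train, y_train)
    # y_pred = model.predict(X_test)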
In step 403), to optimize the DBN network structure the data are divided into a training set and a test set in a ratio of 3:1; the input training set Di is used to optimize the DBN structure, and the test set is used to verify the accuracy of the model. Specifically:
403A) the parameters in the network are randomly initialized and the input sample data are propagated forward using the CD-1 (contrastive divergence) algorithm, thereby completing the pre-training of the network;
403B) the network is fine-tuned: using formula (1.15) as the loss function of the error adjustment, the partial derivatives of the loss function are computed and the DBN network parameters are trained in reverse:
e = ||y - ŷ||₂²   (1.15)
where y is the input sample, ŷ is the output of the DBN-BP network training, and ||·||₂ denotes the reconstruction error in the second-order norm;
and the network parameters are updated:
η ← η - ε · ∂e/∂η
where ε is the model learning rate, η = {w, b, β} are the network parameters, w, b and β being respectively the network weights, the bias vectors of the input and output layers, and the weights of the regression layer, and e is the network error;
403C) the second-order reconstruction error algorithm is combined with the "three+two" search method to determine the optimal network depth and the number of neuron nodes in each layer, and the structure-optimized DBN-BP model H1 is constructed for the final application of the model. A sketch of the fine-tuning update in step 403B) is given below.
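For illustration, the following numpy sketch applies the squared second-order-norm loss e = ||y - ŷ||₂² and the gradient update η ← η - ε·∂e/∂η from step 403B), but, as a simplifying assumption of the example, only to the weights β and bias b of a linear regression layer placed on top of pre-computed DBN features rather than to the full network.

    import numpy as np

    def finetune_regression_layer(H, y, beta, b, epsilon=0.01, epochs=100):
        # H: (n_samples, n_features) DBN feature matrix; y: (n_samples,) targets.
        for _ in range(epochs):
            y_hat = H @ beta + b               # forward pass of the regression layer
            err = y_hat - y
            loss = np.sum(err ** 2)            # e = ||y - y_hat||^2 (formula (1.15))
            grad_beta = 2 * H.T @ err          # de/dbeta
            grad_b = 2 * np.sum(err)           # de/db
            beta = beta - epsilon * grad_beta  # eta <- eta - epsilon * de/deta
            b = b - epsilon * grad_b
        return beta, b, loss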
The rule for determining the number of DBN neural-unit layers from the computed reconstruction error is as follows:
where N_RBM is the number of neural-unit layers currently present in the DBN and ε is the preset acceptable threshold of the network model. Under normal circumstances, the input samples to be used and the true values that should be output are known and are needed for training the network model. In addition, the preset acceptable accuracy of the network model can be 90% or above. The training process for the number of DBN hidden layers is shown in Fig. 1.
According to the rule established for the number of network model layers from the reconstruction error, when the computed error is greater than the acceptable threshold, one group of RBM network structures, i.e. one neural-unit layer, is added to the DBN model, and the reconstruction error of the DBN network with the newly added neural-unit layer is then computed again; when the computed error is less than or equal to the acceptable threshold, the DBN network can enter the fine-tuning stage, and the number of neural-unit layers in the network at that point can be regarded as the optimal number of neural-unit layers for the network. A sketch of this layer-growing rule is given below.
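As an illustrative sketch, the layer-determination rule can be written as the following loop: RBM layers are added until the second-order reconstruction error of the newest layer falls below the acceptable threshold. The threshold, layer width and maximum number of layers are assumptions for the example; the input X is assumed to be min-max normalized to [0, 1].

    import numpy as np
    from sklearn.neural_network import BernoulliRBM

    def grow_dbn_layers(X, threshold=0.05, max_layers=6, n_hidden=64):
        layers, data = [], X
        for _ in range(max_layers):
            rbm = BernoulliRBM(n_components=n_hidden, n_iter=20,
                               random_state=0).fit(data)
            layers.append(rbm)
            # Second-order reconstruction error after one (sampled) Gibbs step.
            err = np.mean(np.sum((rbm.gibbs(data) - data) ** 2, axis=1))
            if err <= threshold:
                break                          # acceptable error: stop adding layers
            data = rbm.transform(data)         # output of this RBM feeds the next layer
        return layers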
In the present embodiment, in the training of the structure-optimized DBN-BP model the data need to be divided into a training set and a test set in a ratio of 3:1; the input training set Di is used to optimize the DBN structure, and the test set is used to verify the accuracy of the model. The structure-optimized DBN-BP algorithm is implemented as follows:
This example takes the prediction of real-time data in the Australian electricity market as an example and mainly records experiments on real-time market data from March to June 2018. According to the price-influencing factors discussed above, the influence of historical prices and historical loads on price prediction is mainly considered, so the price and load data of the corresponding periods in the market are extracted and analyzed. The data from March to May 2018 are used as the training set to predict the real-time prices of June 2018, and the predictions are compared with the true data.
Step 1) Data preprocessing
Actually collected data contain some missing values, noise and erroneous data, which are cleaned before the actual prediction. Common methods include the similar-day completion method and clustering. The similar-day completion method uses the short-term periodicity of electricity price data: abnormal data can be filled in and corrected from the historical data of the same time period. The clustering method classifies the price data: by defining distances between sample data, the samples are partitioned, data points that are close together are grouped into one class, and the centre points of the data are found. In this example the data set is corrected with the similar-day completion method.
Considering the actual demand, the real-time electricity market price is to be predicted, so the price data must be divided. Since in the Australian electricity market the price data are recorded every half hour, this example divides one day of price data into 48 periods, records the power data of each half hour separately, and indexes the periods 0-47, where 0 denotes 00:00 each day and each subsequent index adds half an hour. A sketch of this preprocessing is given below.
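An illustrative pandas sketch of this preprocessing step follows: each record is indexed by its half-hour period (0-47) and a missing or cleaned-out price is filled with the value of the same period on the nearest earlier day (similar-day completion). The column names timestamp and price are assumptions for the example.

    import pandas as pd

    def preprocess(df: pd.DataFrame) -> pd.DataFrame:
        df = df.copy()
        df["timestamp"] = pd.to_datetime(df["timestamp"])
        # Half-hour period index 0-47: 0 is 00:00, 1 is 00:30, and so on.
        df["period"] = df["timestamp"].dt.hour * 2 + df["timestamp"].dt.minute // 30
        df = df.sort_values("timestamp")
        # Similar-day completion: within each half-hour period, replace a missing
        # value with the most recent earlier day's value for the same period.
        df["price"] = df.groupby("period")["price"].transform(lambda s: s.ffill())
        return df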
Based on the discussion of the price-influencing factors and the real data sampled from the market, this example mainly analyzes the two factors that directly affect price changes: historical prices and historical loads. According to the mean-reversion characteristic of electricity prices, the data of the same period in the historical data are selected for analysis. The data attributes used in this example are listed below (a sketch of assembling the feature vector follows the list):
(1) the historical prices at the previous moment, two moments earlier and three moments earlier: t_p1m, t_p2m, t_p3m;
(2) the historical loads at the previous moment, two moments earlier and three moments earlier: t_d1m, t_d2m, t_d3m;
(3) the historical prices one day, two days and three days earlier: t_p24h, t_p48h, t_p72h;
(4) the historical loads one day, two days and three days earlier: t_d24h, t_d48h, t_d72h;
(5) the historical price and the historical load one week earlier: t_p1w, t_d1w.
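An illustrative pandas sketch of assembling the 18-dimensional model input from the preprocessed half-hourly frame follows: the 14 lagged features listed above plus the current date, period, price and load. The lag offsets (1 period = half an hour, 48 periods = one day, 336 periods = one week) and the column names price and load are assumptions for the example.

    import pandas as pd

    def build_features(df: pd.DataFrame) -> pd.DataFrame:
        feats = pd.DataFrame(index=df.index)
        lags = [(1, "1m"), (2, "2m"), (3, "3m"),         # previous 1-3 half-hour periods
                (48, "24h"), (96, "48h"), (144, "72h"),  # previous 1-3 days
                (336, "1w")]                             # previous week
        for lag, name in lags:
            feats["t_p" + name] = df["price"].shift(lag)
            feats["t_d" + name] = df["load"].shift(lag)
        feats["date"] = df["timestamp"].dt.dayofyear
        feats["period"] = df["period"]
        feats["price"] = df["price"]
        feats["load"] = df["load"]
        return feats.dropna()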
The above 14 features, together with the current date, period, price data and load data, form the input features of the prediction model. Since the input features represent different quantities, their value ranges in the samples also differ; to ensure that all features exert a good controlling effect in the model, all data are normalized. After normalization the data produce, through model training, outputs in the same value range, and the output results can be seen clearly by denormalizing them. The min-max normalization method is used for normalizing the data; the calculation formula is as follows:
x* = (x - min) / (max - min)
where x and x* denote the data value before and after normalization respectively, min is the minimum value of the current data feature, and max is the maximum value of the current data feature.
The denormalization formula for the model output is as follows:
y = y* · (y_max - y_min) + y_min
where y* and y denote the predicted output of the model before and after denormalization respectively, y_max is the maximum value in the model training output, and y_min is the minimum value in the model training output. A sketch of these transforms is given below.
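An illustrative numpy sketch of the min-max normalization and the corresponding denormalization follows.

    import numpy as np

    def normalize(x: np.ndarray):
        # Column-wise min-max normalization: x* = (x - min) / (max - min).
        x_min, x_max = x.min(axis=0), x.max(axis=0)
        return (x - x_min) / (x_max - x_min), x_min, x_max

    def denormalize(y_star: np.ndarray, y_min: float, y_max: float):
        # Inverse transform for the model output: y = y* * (max - min) + min.
        return y_star * (y_max - y_min) + y_min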
The real-time market data are preprocessed according to the above method to obtain the model input; the number of input-layer nodes is 18. For example, the input data for the New South Wales (NSW) region of the market at 7 a.m. on 1 April 2018 are shown in Table 1:
Table 1: Input data for the NSW region at 7:00 on 1 April 2018
Step 2) Comparative analysis of calculation examples
In this example the real-time price data and real-time load data of the Australian market from 1 March 2018 to 31 May 2018 are selected as the training sample data set in order to predict the real-time price data of June 2018. The data set is first preprocessed as described above, and comparative experiments are then carried out with the traditional BP algorithm, the SVR algorithm, the structure-optimized DBN-BP algorithm and the structure-optimized DBN-SVR algorithm; a comprehensive analysis is performed on the data of different regions of the electricity market.
In order to better illustrate the comparative experimental data of the different algorithms, the data of the 48 prediction time points of 1 June 2018 in the NSW region of the electricity market are enumerated individually; the prediction results are shown in Table 2:
Table 2: Comparison of the prediction results at 48 time points
Table 2 lists, for the 48 time points of one day, the predicted values of the different methods and the percentage errors of the data. The comparison of the real-time data prediction results according to the table is shown in Fig. 4, from which the fitting of the traditional BP neural network, the SVR model, the structure-optimized DBN-BP model and the structure-optimized DBN-SVR model in real-time price prediction can be seen intuitively. The data show that during real-time price prediction the traditional BP algorithm deviates most from the true values, while the fitted data of the structure-optimized DBN-BP and the structure-optimized DBN-SVR deviate less from the real data; it can therefore be concluded preliminarily that in practical prediction applications the structure-optimized DBN-BP and structure-optimized DBN-SVR models are superior to the BP network and the SVR model. In addition, the percentage errors of the four compared algorithms are plotted as an easily observed image in Fig. 5.
By observing the prediction error comparison of the four algorithms in Fig. 5, it can be seen that the absolute prediction error of the traditional BP neural network algorithm lies between 0.01 and 0.8; during the peak consumption periods of the day, owing to uncertain factors such as the external environment, the prediction error is higher, exceeding 0.7 several times; the absolute prediction error of the SVR model lies between 0.01 and 0.76, with errors exceeding 0.6 during the morning and evening consumption peaks; the absolute prediction error of the structure-optimized DBN-BP model lies between 0.01 and 0.79, the maximum prediction error being 0.79; the prediction error of the structure-optimized DBN-SVR model lies between 0.01 and 0.53, with only a few larger error values exceeding 0.5. The overall trend of the relative prediction errors of the structure-optimized DBN-BP and DBN-SVR is significantly lower than that of the prediction results of the BP network and the SVR model.
According to the model prediction results, the mean fitting error of the four models and the response time of each model from construction to implementation are calculated separately, as shown in Table 3.
Table 3: Model comparison
It can be observed from Table 3 that, with the same network depth and number of neurons, the prediction accuracy of the structure-optimized DBN-BP and DBN-SVR models is higher than that of the standalone BP network and SVR model. Combining Fig. 4, Fig. 5 and Table 3, it can therefore be concluded that in the field of real-time price prediction the model prediction accuracy, from best to worst, is: the structure-optimized DBN-SVR model, the structure-optimized DBN-BP model, the SVR model, and the traditional BP neural network. On the other hand, comparing the algorithm efficiency of the different models in the price prediction field, the statistics of the response times of the four models during prediction show that as the prediction accuracy improves, the prediction time consumed also increases accordingly.
Secondly, in order to eliminate the randomness of the experimental results, this example also collects real-time data from several other regions of the Australian electricity market for prediction comparison. The selected regions are New South Wales (NSW), Queensland (QLD), South Australia (SA), Tasmania (TAS) and Victoria (VIC). The real-time data of the five regions from March to May 2018 are used as training data, and the structure-optimized DBN-BP prediction model is used to predict the real-time price data of the five regions in June. For ease of explanation, this example predicts the real-time prices of the five regions on 1 June; the fitting results of the regional prediction data are shown in Fig. 6 and Fig. 7.
It can be seen from Fig. 6 that, when the structure-optimized DBN-BP prediction model is used for the different regions, the fitting effect for the corresponding regions differs little. In addition, the real-time price data of the five regions on 1 June are compared, the comparison data being shown in Fig. 7; it can be seen from Fig. 7 that the price data of the different regions over the same period are different, and that during the peak consumption period of the day the real-time prices of the regions are not identical but all show a broadly upward trend. It can thus be seen that the application of the structure-optimized DBN-BP model constructed in this example to real-time price data has no regional specificity and can be used universally.
The prediction response times of DBN models with different network depths are tested further; the experimental results are shown in Table 5.
Table 5: DBN prediction response times at different depths
It can be seen from Table 5 that, as the depth of the DBN network model increases, the time consumed by the prediction model in actual application also increases. It is therefore necessary to consider how to shorten the prediction time as much as possible while meeting the requirement of improved prediction accuracy.
The present invention proposes an electricity price prediction method based on an improved deep belief network, with the aim of exploring the application of deep belief networks in the field of electricity price prediction. In order to improve the prediction accuracy of the DBN, the network structure of the DBN is optimized, a structure-optimized DBN model is established, and the regression layer of the network is improved with different combinations. The invention proposes a structure-optimized DBN-BP model and a structure-optimized DBN-SVR model; in order to prove the superiority of the improved models, real-time data from five regions of the Australian electricity market are used for verification and analysis.

Claims (8)

1. An electricity price prediction method based on an improved deep belief network, characterized by comprising the following steps:
1) according to the characteristics of the electricity price data and the factors influencing the price, dividing the data set, determining the network data input, and preprocessing the data set;
2) for the preprocessed data set, determining the number of RBM layers of the model by computing the network error with the second-order reconstruction error;
3) optimizing the number of neuron nodes in the network with a "three+two" search algorithm that combines ternary search and binary search;
4) using a BP neural network and an SVR support vector regression machine, respectively, as the regression layer of the DBN network, and, combining the number of RBM layers with the optimized number of neuron nodes, constructing a structure-optimized DBN-BP model and a structure-optimized DBN-SVR model to predict real-time electricity price data.
2. the power price prediction technique according to claim 1 based on improved deepness belief network, it is characterised in that: In step 3), it is using the neuron node number in " three+two " the lookup algorithm optimization network for combining trichotomy and dichotomy According to traditional binary chop and improved three points of lookups method, its calculating process is optimized, specifically:
301) in three points of lookup algorithms for shortening section, the value range that data value is arranged is [a, d], chooses four numerical points xa, xb, xc, xd, data field is divided into three sections, wherein xaTake boundary value point a, xdTake boundary value point d, xbIt is the section [a, d] Point, x at 1/3dIt is point at the 2/3 of the section section [a, d], then calculate separately three minizones after dividing the first~tri- is oblique Rate k1, k2, k3, value rule it is as follows:
First slope k1< 0, third slope k3> 0, and the second slope k2Absolute value it is minimum;
302) using bisection: the minimum point lies in the interval [b, c] whose slope has the smallest absolute value; binary search is applied to the interval [b, c] containing the minimum, treating it as an ordered data sequence, and the binary search is iterated until the minimum point is found, thereby determining the number of hidden layer nodes.
3. The electricity price prediction method based on an improved deep belief network according to claim 2, characterized in that: in step 301), if the data curve oscillates in a way that makes the slope judgment unreliable, an incremental method is used instead, increasing the number of neurons one by one and evaluating each; when judging the overall shape of the data curve from the three slopes, the following cases arise depending on the combination of slope signs:
301A) k1 < 0 && k2 > 0 && k3 < 0;
a slope less than 0 indicates that the error decreases as the number of neurons increases within that segment, whereas a slope greater than 0 indicates that the error is increasing; here the error falls in the first segment, rises in the second segment, and falls again in the third segment, so the data curve oscillates;
301B) k1 < 0 && k2 > 0 && k3 > 0
the calculated error first falls and then rises as the number of neurons grows, so the minimum error lies in the first two segments and the absolute values of the slopes must be compared;
301C) k1 < 0 && k2 < 0 && k3 > 0
the error first falls and then rises, and the minimum point may lie in the last two segments; as in case 301B), the absolute values of the slopes are compared to determine the interval containing the minimum;
301D) k1 < 0 && k2 < 0 && k3 < 0
when the slopes of all three segments are less than 0, the calculated error decreases monotonically as the number of neurons increases, and the minimum point lies in the last data interval;
301E) k1 > 0 && k2 > 0 && k3 > 0
when the slopes of all three segments are greater than 0, the error increases monotonically, so the minimum error point lies in the first interval.
4. The electricity price prediction method based on an improved deep belief network according to claim 3, characterized by further comprising step 301F): according to the different sign combinations of the three slopes, there are eight possible cases for the slopes of the three intervals, and the incremental method is used for the remaining cases;
alternatively, when the optimal point lies on an interval boundary, an adjustment factor m is added to the algorithm during the selection of the value interval to change the value range of the interval.
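To make the "three+two" search of claims 2-4 concrete, the hedged sketch below searches an integer range of hidden-node counts. The tie-breaking choices in cases 301B)/301C), the simple incremental fallback for the other sign patterns, and the stopping rule of the bisection are assumptions; err(n) stands for any user-supplied error measure, for example a second-order reconstruction error.

```python
def three_plus_two_search(err, a, d):
    """Trisection ("three") followed by bisection ("two") over hidden-node counts.

    err(n) is assumed to return the error of the network with n hidden neurons.
    The segment-selection and fallback rules are illustrative, not the patent's
    procedure verbatim.
    """
    assert d - a >= 3, "range must span at least 3 points for trisection"
    xa, xd = a, d
    xb = a + (d - a) // 3
    xc = a + 2 * (d - a) // 3
    ea, eb, ec, ed = err(xa), err(xb), err(xc), err(xd)
    k1 = (eb - ea) / (xb - xa)   # slope of first segment
    k2 = (ec - eb) / (xc - xb)   # slope of middle segment
    k3 = (ed - ec) / (xd - xc)   # slope of last segment

    # Pick the sub-interval that can contain the minimum from the slope signs;
    # oscillating patterns (e.g. 301A) fall back to an incremental scan.
    if k1 < 0 and k2 < 0 and k3 > 0:       # 301C: minimum in the last two segments
        lo, hi = (xb, xc) if abs(k2) <= abs(k3) else (xc, xd)
    elif k1 < 0 and k2 > 0 and k3 > 0:     # 301B: minimum in the first two segments
        lo, hi = (xa, xb) if abs(k1) <= abs(k2) else (xb, xc)
    elif k1 < 0 and k2 < 0 and k3 < 0:     # 301D: monotone decrease, minimum at the end
        lo, hi = xc, xd
    elif k1 > 0 and k2 > 0 and k3 > 0:     # 301E: monotone increase, minimum at the start
        lo, hi = xa, xb
    else:                                  # incremental method for remaining patterns
        return min(range(a, d + 1), key=err)

    # Bisection on the selected interval, assuming it is (nearly) unimodal there.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if err(mid) <= err(mid + 1):
            hi = mid
        else:
            lo = mid + 1
    return lo if err(lo) <= err(hi) else hi
```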
5. The electricity price prediction method based on an improved deep belief network according to claim 1, characterized in that: in step 4), the structure-optimized DBN-SVR model is constructed as follows:
on the basis of the method used in traditional neural networks to determine the number of hidden layers, and combined with the characteristics of the DBN, experiments are carried out on the RBMs; the number of hidden layers is improved and optimized with the reconstruction error method, and the number of DBN neural unit layers is determined from the calculated reconstruction error, as follows:
401) preprocessing the prediction data set and extracting the corresponding data set features as training data;
402) constructing the DBN model, initializing the network model parameters, training epochs and learning rate, training the RBM networks layer by layer with the preprocessed training data, and fine-tuning the network parameters with the BP back-propagation algorithm;
403) optimizing the DBN network structure: according to the structure optimization algorithm, adjusting the DBN network depth and the number of neuron nodes;
404) training and optimizing the SVR model: initializing the network parameters of the regression layer and the kernel function parameters of the SVR model, feeding the data trained by the structure-optimized DBN network into the SVR model, adjusting the model parameters, and constructing the structure-optimized DBN-SVR prediction model;
405) testing the model: verifying the DBN-SVR model with the test set; the test data set is input, the learned data are output after passing through the trained model, and the final prediction result is obtained by denormalization.
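For the reconstruction-error rule referenced in claim 1 step 2) and in the preamble of claim 5, one possible (and purely illustrative) reading is sketched below using scikit-learn's BernoulliRBM: each candidate layer's second-order (L2) reconstruction error is computed, and layers stop being added once the improvement stalls. The layer sizes, tolerance and stopping rule are assumptions; the patent's exact depth-selection criterion is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def reconstruction_error(rbm, V):
    """Mean squared (second-order norm) error between data and its RBM reconstruction."""
    H = rbm.transform(V)                                     # p(h=1 | v)
    V_rec = 1.0 / (1.0 + np.exp(-(H @ rbm.components_ + rbm.intercept_visible_)))
    return float(np.mean(np.sum((V - V_rec) ** 2, axis=1)))

def choose_depth(X, max_layers=5, sizes=(64, 48, 32, 24, 16), tol=1e-3):
    """Add RBM layers while the reconstruction error still drops by more than tol
    (an assumed stopping rule, not the patented one)."""
    errors, V = [], X
    for k in range(max_layers):
        rbm = BernoulliRBM(n_components=sizes[k], n_iter=20, random_state=0).fit(V)
        errors.append(reconstruction_error(rbm, V))
        if k > 0 and errors[k - 1] - errors[k] < tol:
            return k, errors
        V = rbm.transform(V)
    return max_layers, errors
```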
6. The electricity price prediction method based on an improved deep belief network according to claim 5, characterized in that: in step 404), constructing the structure-optimized DBN-SVR prediction model means using the SVR network as the regression layer of the structure-optimized DBN model, fitting the data produced by the structure-optimized DBN training with the SVR network, and then fine-tuning the parameters in the SVR network model to construct the structure-optimized DBN-SVR model.
7. The electricity price prediction method based on an improved deep belief network according to claim 5, characterized in that: in step 405), testing the model proceeds as follows:
in the structure-optimized DBN-SVR network, feature values are first extracted from the training data set by the DBN model and the DBN network structure is optimized according to the stochastic gradient descent algorithm; the structure-optimized DBN model then learns the training data and its output is used as the input of the SVR model, i.e., the input of the regression layer of the DBN network; the SVR model parameters are then optimized and improved by means of the kernel function; finally, the structure-optimized DBN model and the trained SVR parameter model are combined to construct the structure-optimized DBN-SVR model for data prediction.
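For orientation only, the following rough stand-in for the DBN-SVR combination of claims 5-7 stacks scikit-learn BernoulliRBMs in place of the structure-optimized DBN (the DBN's back-propagation fine-tuning is not reproduced) and tunes the kernel parameters of the SVR regression layer with a grid search; the layer sizes and grid values are arbitrary illustrative choices.

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def build_dbn_svr_surrogate(hidden_layers=(64, 32)):
    """Stacked RBMs as stand-in feature layers, SVR as the regression layer."""
    steps = [("scale", MinMaxScaler())]                 # RBMs expect inputs in [0, 1]
    for i, n in enumerate(hidden_layers):
        steps.append((f"rbm{i}", BernoulliRBM(n_components=n, learning_rate=0.05,
                                              n_iter=20, random_state=0)))
    steps.append(("svr", SVR()))
    # Tuning the SVR kernel-function parameters stands in for the fine-tuning
    # of the regression layer described in claims 6-7.
    grid = {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1],
            "svr__epsilon": [0.01, 0.1]}
    return GridSearchCV(Pipeline(steps), grid, cv=3,
                        scoring="neg_mean_absolute_error")

# Usage with the (X_train, y_train) split from the data-preparation sketch above:
# model = build_dbn_svr_surrogate()
# model.fit(X_train, y_train)
# y_pred = model.predict(X_test)
```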
8. The electricity price prediction method based on an improved deep belief network according to claim 5, characterized in that: in step 403), optimizing the DBN network structure specifically comprises:
403A) randomly initializing the parameters in the network and propagating the input sample data forward using the CD-1 algorithm, thereby completing the pre-training of the network;
403B) fine-tuning the network parameters: taking formula (1.15) as the loss function for error adjustment and training the DBN network parameters in reverse by computing the partial derivatives of the loss function:

e = ||y - ŷ||₂²    (1.15)

where y is the input sample, ŷ is the output of the DBN-BP network training, and ||·||₂ denotes the second-order-norm reconstruction error;

the network parameters are then updated as

η ← η − ε·∂e/∂η

where ε is the learning rate of the model, η = {w, b, β} are the network parameters, w, b and β are respectively the network weights, the bias vectors of the input and output layers, and the weights of the regression layer, and e is the network error;
403C) combining the second-order reconstruction error algorithm with the "three+two" search method to determine the optimal network depth and the number of neuron nodes in each layer, and constructing the structure-optimized DBN-BP model H1 for the final application of the model.
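As a toy numerical illustration of the update rule in step 403B), under the assumption that formula (1.15) is the squared second-order norm of the prediction error, the fragment below applies one gradient step to the top regression layer only; it is not the patent's full back-propagation through the RBM stack.

```python
import numpy as np

def finetune_regression_layer(W, b, h, y, eps=0.01):
    """One gradient-descent update of the regression layer.

    h : (n_samples, n_hidden) features from the pre-trained RBM stack
    W : (n_hidden,) regression weights, b : scalar bias
    y : (n_samples,) targets
    """
    y_hat = h @ W + b                 # network output
    err = y_hat - y
    e = float(np.sum(err ** 2))       # e = ||y - y_hat||_2^2  (assumed form of (1.15))
    dW = 2.0 * h.T @ err              # de/dW
    db = 2.0 * err.sum()              # de/db
    return W - eps * dW, b - eps * db, e
```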
CN201910289389.9A 2019-04-11 2019-04-11 A kind of power price prediction technique based on improved deepness belief network Withdrawn CN110009160A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910289389.9A CN110009160A (en) 2019-04-11 2019-04-11 A kind of power price prediction technique based on improved deepness belief network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910289389.9A CN110009160A (en) 2019-04-11 2019-04-11 A kind of power price prediction technique based on improved deepness belief network

Publications (1)

Publication Number Publication Date
CN110009160A true CN110009160A (en) 2019-07-12

Family

ID=67171128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910289389.9A Withdrawn CN110009160A (en) 2019-04-11 2019-04-11 A kind of power price prediction technique based on improved deepness belief network

Country Status (1)

Country Link
CN (1) CN110009160A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458606A (en) * 2019-07-19 2019-11-15 西北工业大学 Nonstandard cutter price expectation method based on deep learning
CN110580543A (en) * 2019-08-06 2019-12-17 天津大学 Power load prediction method and system based on deep belief network
CN110595535A (en) * 2019-08-19 2019-12-20 湖南强智科技发展有限公司 Monitoring method, device and storage medium
CN110765074A (en) * 2019-09-20 2020-02-07 国网山东省电力公司青岛供电公司 Method and system for quickly accessing electric load curve data of acquisition terminal
CN110837493A (en) * 2019-10-11 2020-02-25 苏宁云计算有限公司 Price issuing control method and device, computer equipment and storage medium
CN110837493B (en) * 2019-10-11 2022-12-27 苏宁云计算有限公司 Price issuing control method and device, computer equipment and storage medium
CN111899122A (en) * 2020-07-03 2020-11-06 国网江苏省电力有限公司镇江供电分公司 User decentralized clearing method based on energy storage control
CN111899122B (en) * 2020-07-03 2024-01-02 国网江苏省电力有限公司镇江供电分公司 User decentralized clearing method based on energy storage control
CN112529684A (en) * 2020-11-27 2021-03-19 百维金科(上海)信息科技有限公司 Customer credit assessment method and system based on FWA _ DBN
CN113177355A (en) * 2021-04-28 2021-07-27 南方电网科学研究院有限责任公司 Power load prediction method
CN113177355B (en) * 2021-04-28 2024-01-12 南方电网科学研究院有限责任公司 Power load prediction method

Similar Documents

Publication Publication Date Title
CN110009160A (en) A kind of power price prediction technique based on improved deepness belief network
Lobato et al. Multi-objective genetic algorithm for missing data imputation
CN105354646B (en) Power load forecasting method for hybrid particle swarm optimization and extreme learning machine
Yu et al. Evolutionary fuzzy neural networks for hybrid financial prediction
Hao et al. Forecasting the real prices of crude oil using robust regression models with regularization constraints
CN110163433A (en) A kind of ship method for predicting
Yan et al. Weight optimization for case-based reasoning using membrane computing
CN110276679A (en) A kind of network individual credit fraud detection method towards deep learning
CN106529503A (en) Method for recognizing face emotion by using integrated convolutional neural network
KR101508361B1 (en) Method for prediction of future stock price using analysis of aggregate market value of listed stock
CN110212528A (en) Reconstructing method is lacked based on the power distribution network metric data for generating confrontation and dual Semantic Aware
Alkawaz et al. Day-ahead electricity price forecasting based on hybrid regression model
CN108898259A (en) Adaptive Evolutionary planning Methods of electric load forecasting and system based on multi-factor comprehensive
CN109214546A (en) A kind of Power Short-Term Load Forecasting method based on improved HS-NARX neural network
CN108229750A (en) A kind of stock yield Forecasting Methodology
CN108537581B (en) Energy consumption time series prediction method and device based on GMDH selective combination
Yi et al. Model-free economic dispatch for virtual power plants: An adversarial safe reinforcement learning approach
Hafiz et al. Co-evolution of neural architectures and features for stock market forecasting: A multi-objective decision perspective
CN109086941A (en) A kind of energy-consuming prediction technique
Cheong et al. Unveiling the relationship between economic growth and equality for developing countries
Puiu et al. Principled data completion of network constraints for day ahead auctions in power markets
Liu The prediction model and system of stock rise and fall based on BP neural network
Liu et al. Predicting stock trend using multi-objective diversified Echo State Network
Zhang et al. Integrating harmony search algorithm and deep belief network for stock price prediction model
Campbell et al. A stochastic graph grammar algorithm for interactive search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190712