CN108108842A - Anti-benefit-deviation neural network prediction method and device based on cost association - Google Patents


Info

Publication number
CN108108842A
CN108108842A (application CN201711408089.5A)
Authority
CN
China
Prior art keywords
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711408089.5A
Other languages
Chinese (zh)
Other versions
CN108108842B (en
Inventor
郑厚清
王广辉
李伟阳
贾德香
王智敏
柳占杰
于灏
陈�光
陈睿欣
王玓
刘素蔚
钱仲文
王锋华
夏洪涛
成敬周
宋国超
石惠承
仲立军
袁骏
周小明
王大维
李伟
施明泰
李浩松
许中平
李金�
康泰峰
寸馨
黄柏富
晏梦璇
许方园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Netstone Accenture Information Technology Co Ltd
National Grid Energy Research Institute Co Ltd
State Grid Zhejiang Electric Power Co Ltd
State Grid Energy Research Institute Co Ltd
State Grid Liaoning Electric Power Co Ltd
Original Assignee
Beijing Netstone Accenture Information Technology Co Ltd
National Grid Energy Research Institute Co Ltd
State Grid Zhejiang Electric Power Co Ltd
State Grid Liaoning Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Netstone Accenture Information Technology Co Ltd, National Grid Energy Research Institute Co Ltd, State Grid Zhejiang Electric Power Co Ltd, State Grid Liaoning Electric Power Co Ltd filed Critical Beijing Netstone Accenture Information Technology Co Ltd
Priority to CN201711408089.5A priority Critical patent/CN108108842B/en
Publication of CN108108842A publication Critical patent/CN108108842A/en
Application granted granted Critical
Publication of CN108108842B publication Critical patent/CN108108842B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06313Resource planning in a project environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Game Theory and Decision Science (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Primary Health Care (AREA)
  • Water Supply & Treatment (AREA)
  • Public Health (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The present invention provides an anti-benefit-deviation neural network prediction method and device based on cost association. A cost factor is associated into the optimization training of the neural network model, and training is performed with a correspondingly modified gradient descent method. The invention balances the accuracy of power-system load declarations against the optimal cost value, ensuring that load-declaration forecasts produced with the provided method and device both achieve satisfactory precision and effectively reduce cost and improve efficiency, assisting each load-declaring decision maker in the electricity market in formulating a market load declaration strategy.

Description

Anti-benefit-deviation neural network prediction method and device based on cost association
Technical field
The invention belongs to the technical field of power-system load forecasting, and in particular relates to an anti-benefit-deviation neural network prediction method and device based on cost association.
Background technology
Load forecasting is an important component of power-system planning and the basis of economical power-system operation; it is of crucial importance to both the planning and the operation of a power system. Reasonable load forecasting is the precondition for scheduling and planning power resources: accurate load forecasts allow reasonable arrangement of unit commitment and maintenance plans, reduce spinning reserve capacity, and lower generation cost. With technological progress and the development of the smart grid, load-forecasting theory and technology have advanced considerably, but existing prediction techniques aim solely at improving prediction accuracy and do not take the cost factor into account.
Summary of the invention
In order to solve the above technical problem, the present invention provides an anti-benefit-deviation neural network prediction method based on cost association, including:
S1. establishing a neural network model;
S2. setting an optimization target for training the neural network model; the optimization target comprises two parts, a precision target and a cost target, as follows:

$$\min \Rightarrow G = Er + \beta \cdot Cos$$

wherein Er denotes the precision target, Cos denotes the cost target, and β denotes a penalty-term coefficient;
S3. performing optimization training on the neural network model to obtain a trained neural network model;
S4. inputting the relevant feature data of the declaration target day into the trained neural network model, whereupon the load prediction result of the declaration target day is output.
Further, the cost target is calculated according to the following formula:

$$\begin{cases}
Cos = \mathrm{sum}(Y \cdot PDA) + \mathrm{sum}(Ac \cdot PRT \cdot \varepsilon(Ac)) \\
\mathrm{sum}(Y \cdot PDA) = \sum_{o=1}^{O} \sum_{k=1}^{K} y_{ok} \cdot pda_{ok} \\
\mathrm{sum}(Ac \cdot PRT \cdot \delta(Ac)) = \sum_{o=1}^{O} \sum_{k=1}^{K} \left[ (t_{ok} - y_{ok}) \cdot prt_{ok} \cdot \delta(t_{ok} - y_{ok}) \right] \\
\tanh(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \\
\varepsilon(x) \approx \delta(x) = \dfrac{\tanh(\alpha \cdot x) + 1}{2}, \quad \alpha > 1
\end{cases}$$

wherein Y is the output matrix of the neural network training set, with the element in row o, column k denoted $y_{ok}$; PDA is the electricity-price matrix of the upper-level market, with the element in row o, column k denoted $pda_{ok}$; PRT is the electricity-price matrix of the lower-level market, with the element in row o, column k denoted $prt_{ok}$; T is the true load matrix of the neural network training set, with the element in row o, column k denoted $t_{ok}$; Ac is the bias between Y and the true load; ε(x) denotes the excitation function of the neuron; and δ(x) is a step-approximation function.
Further, the optimization training process in step S3 must also satisfy the following constraints:

$$Const: \begin{cases} Y > 0 \\ \overline{\left( \dfrac{Ac}{T} \right)} \ll C \end{cases}$$

wherein C denotes an error threshold set by the load-declaring decision maker.
Further, step S3 specifically includes:
S31. establishing the optimization training method according to the idea of gradient descent, and calculating the independent variables of the optimization training according to the following formula:

$$\begin{cases}
W_{ho}(q+1) = W_{ho}(q) + \lambda \cdot \nabla G_{W_{ho}} \\
B_{o}(q+1) = B_{o}(q) + \lambda \cdot \nabla G_{B_{o}} \\
W_{ih}(q+1) = W_{ih}(q) + \lambda \cdot \nabla G_{W_{ih}} \\
B_{h}(q+1) = B_{h}(q) + \lambda \cdot \nabla G_{B_{h}}
\end{cases}$$

wherein $W_{ih}$ denotes the weight matrix between the input layer and the hidden layer; $B_h$ denotes the bias-variable matrix of the hidden layer; $W_{ho}$ denotes the weight matrix between the hidden layer and the output layer; $B_o$ denotes the bias-variable matrix of the output layer; and λ is an iteration step-size coefficient supplied by the load-declaring decision maker; the larger λ is, the faster the iteration converges, but the larger the error becomes;
S32. initializing the independent variables $W_{ho}(q)$, $B_o(q)$, $W_{ih}(q)$, $B_h(q)$;
S33. calculating the independent variables $W_{ho}(q+1)$, $B_o(q+1)$, $W_{ih}(q+1)$, $B_h(q+1)$;
S34. calculating the values of Er and Cos and judging whether the terminal condition is met; if so, the iteration ends and the optimization training is complete; if not, setting q = q + 1 and returning to step S33.
Further, the criterion for "whether the terminal condition is met" specifically includes: judging, from the values of Er and Cos, whether the error between the optimization result and the actual value is below a set threshold; if so, the terminal condition is met; if not, it is not met.
Further, the method also includes: to prevent the iteration of step S34 from becoming an endless loop, presetting a stop time or a stop iteration count; if the error between the optimization result and the actual value remains above the set threshold, the terminal condition is deemed met automatically once the iteration loop reaches the stop time or the stop iteration count.
Further, the relevant feature data include the temperature, humidity and weather conditions of the target period, and historical load data.
The present invention also provides an anti-benefit-deviation neural network prediction device based on cost association, including:
Data input module: for inputting data;
Neural network model module: for establishing the neural network model;
Optimization target module: for setting the optimization target for training the neural network model;
Optimization training module: for performing optimization training on the neural network model to obtain the trained neural network model;
Result output module: for outputting the load prediction result.
Compared with the prior art, the beneficial effects of the present invention are as follows:
The anti-benefit-deviation neural network prediction method and device based on cost association provided by the invention balance the accuracy of power-system load declarations against the optimal cost value. The cost factor is associated into the optimization training of the neural network model, and training is performed with a correspondingly modified gradient descent method, ensuring that load-declaration forecasts produced with the provided method and device both achieve satisfactory precision and effectively reduce cost and improve efficiency, assisting each load-declaring decision maker in the electricity market in formulating a market load declaration strategy.
Description of the drawings
Fig. 1 is a schematic diagram of the structure of the neural network model of the present invention and of its optimization training process;
Fig. 2 is the flow chart of the neural network model optimization training of the present invention;
Fig. 3 is the structure diagram of the anti-benefit deviation neural network prediction device based on cost association of the present invention.
Specific embodiment
Embodiment 1
An anti-benefit-deviation neural network prediction method based on cost association, as shown in Fig. 1, includes:
S1. establishing a neural network model;
The neural network model established in the present invention may be any of several existing models, such as a traditional BP neural network model, a multilayer perceptron (MLP), or an adaptive neural network model; this embodiment is described in detail taking the traditional BP neural network model as an example only;
The neural network model established in step S1 is divided into three layers: an input layer, a hidden layer, and an output layer. The input layer is formed by the feature space composed of the relevant features of the declared load; the hidden layer contains multiple neurons; the output layer is the output of the neural network and likewise contains multiple neurons, whose structure is identical to that of the hidden layer;
In practical applications of the method provided by this embodiment, the relevant features of the declared load generally include at least the temperature, humidity and weather conditions of the target period, and historical load data;
The output of the hidden layer is as follows:

$$S(q) = f_h\left[ W_{ih}^{T}(q) \cdot X(q) + B_h(q) \right]$$

wherein $f_h[\cdot]$ denotes the transfer function of the hidden-layer neurons; q denotes the iteration step number; $W_{ih}^{T}(q)$ denotes the weight matrix between the input layer and the hidden layer at the q-th iteration; X(q) denotes the input of the input layer at the q-th iteration; $B_h(q)$ denotes the bias-variable matrix of the hidden layer at the q-th iteration; and S(q) denotes the output of the hidden layer at the q-th iteration;
The output of the output layer is as follows:

$$Y(q) = f_o\left[ W_{ho}^{T}(q) \cdot S(q) + B_o(q) \right]$$

wherein $f_o[\cdot]$ denotes the transfer function of the output-layer neurons; $W_{ho}^{T}(q)$ denotes the weight matrix between the hidden layer and the output layer at the q-th iteration; $B_o(q)$ denotes the bias-variable matrix of the output layer at the q-th iteration; and Y(q) denotes the output of the output layer at the q-th iteration;
S2. setting the optimization target for training the neural network model;
The optimization target of the optimization training comprises two parts, a precision target and a cost target, as follows:

$$\min \Rightarrow G = Er + \beta \cdot Cos$$

wherein Er denotes the precision target of the optimization training, Cos denotes the cost target of the optimization training, and β denotes a penalty-term coefficient;
In this embodiment, the precision target is calculated in the same way as in traditional BP neural network training, using the F-norm;
The cost target takes into account the electricity prices of both the upper-level and the lower-level electricity market, and is built with a virtual neuron, as follows:

$$\begin{cases}
Cos = \mathrm{sum}(Y \cdot PDA) + \mathrm{sum}(Ac \cdot PRT \cdot \varepsilon(Ac)) \\
\mathrm{sum}(Y \cdot PDA) = \sum_{o=1}^{O} \sum_{k=1}^{K} y_{ok} \cdot pda_{ok} \\
\mathrm{sum}(Ac \cdot PRT \cdot \delta(Ac)) = \sum_{o=1}^{O} \sum_{k=1}^{K} \left[ (t_{ok} - y_{ok}) \cdot prt_{ok} \cdot \delta(t_{ok} - y_{ok}) \right] \\
\tanh(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \\
\varepsilon(x) \approx \delta(x) = \dfrac{\tanh(\alpha \cdot x) + 1}{2}, \quad \alpha > 1
\end{cases}$$

wherein Y is the output matrix of the neural network training set, with the element in row o, column k denoted $y_{ok}$; PDA is the electricity-price matrix of the upper-level market (a typical upper-level market is the day-ahead market of the American electricity market), with the element in row o, column k denoted $pda_{ok}$; PRT is the electricity-price matrix of the lower-level market (a typical lower-level market is the real-time market of the American electricity market), with the element in row o, column k denoted $prt_{ok}$; T is the true load matrix of the training set, also called the target matrix, with the element in row o, column k denoted $t_{ok}$; Ac is the bias between Y and the true load; ε(x) denotes the excitation function of the virtual neuron; and δ(x) is a step-approximation function;
To restrict the declared load to remain positive and to guarantee a minimum accuracy, the optimization training process must also satisfy the following constraints:

$$Const: \begin{cases} Y > 0 \\ \overline{\left( \dfrac{Ac}{T} \right)} \ll C \end{cases}$$

wherein C denotes an error threshold set by the load-declaring decision maker; this constraint means that the average error during the optimization must not exceed C;
S3. performing optimization training on the neural network model according to the optimization target determined in step S2; the detailed process is shown in Fig. 2 and specifically includes:
S31. establishing the optimization training method;
The optimization training process of this embodiment follows the idea of gradient descent and trains the neural network model on the basis of the optimization target and the optimization constraints. Compared with traditional BP neural network training, the cost target added to the optimization target requires the gradient-descent idea to be extended with a cost correction when forming the algorithm. The formula expression and calculation result of the gradient in the backpropagation of the optimization training of this embodiment therefore differ from those of traditional BP neural network training, as follows:

$$\theta = PDA - PRT \cdot \delta(T - Y) - (T - Y) \cdot PRT \cdot \delta(T - Y)'$$

wherein Y′ denotes the Jacobian matrix of Y, · denotes the matrix dot (element-wise) product, and × denotes matrix multiplication;
Accordingly, the independent variables of the optimization training are calculated according to the following formula:

$$\begin{cases}
W_{ho}(q+1) = W_{ho}(q) + \lambda \cdot \nabla G_{W_{ho}} \\
B_{o}(q+1) = B_{o}(q) + \lambda \cdot \nabla G_{B_{o}} \\
W_{ih}(q+1) = W_{ih}(q) + \lambda \cdot \nabla G_{W_{ih}} \\
B_{h}(q+1) = B_{h}(q) + \lambda \cdot \nabla G_{B_{h}}
\end{cases}$$

wherein λ is an iteration step-size coefficient supplied by the load-declaring decision maker; the larger λ is, the faster the iteration converges, but the larger the error becomes;
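One iteration of the update formula above might be written as below. Note that the patent writes the step with a plus sign, which is kept here (any sign convention for ∇G is assumed to be folded into the supplied gradients); `descent_step` and the dictionary layout are my own:

```python
def descent_step(params, grads, lam):
    """Apply W(q+1) = W(q) + lam * grad_G_W to each of the four
    optimization variables (keys 'Who', 'Bo', 'Wih', 'Bh'), where
    `params` and `grads` are dicts of matching arrays or scalars.
    """
    return {name: params[name] + lam * grads[name] for name in params}
```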
S32. initializing the independent variables $W_{ho}(q)$, $B_o(q)$, $W_{ih}(q)$, $B_h(q)$; in this embodiment the initialization values of $W_{ho}(q)$, $B_o(q)$, $W_{ih}(q)$, $B_h(q)$ may be taken as random real values;
S33. calculating the independent variables $W_{ho}(q+1)$, $B_o(q+1)$, $W_{ih}(q+1)$, $B_h(q+1)$ of the optimization training according to the method established in step S31;
S34. calculating the values of Er and Cos from the result of step S33 and judging whether the terminal condition is met; if so, the iteration ends and the optimization training is complete; if not, setting q = q + 1 and returning to step S33;
In this embodiment, the criterion for meeting the terminal condition specifically includes: judging, from the values of Er and Cos, whether the error between the optimization result and the actual value is below a set threshold; if so, the terminal condition is met; if not, it is not met;
In this embodiment, to prevent the iteration of step S34 from becoming an endless loop, a stop time or a stop iteration count is preset; if the error between the optimization result and the actual value remains above the set threshold, the terminal condition is deemed met automatically once the iteration loop reaches the stop time or the stop iteration count;
S4. inputting the relevant feature data of the declaration target day into the trained neural network model, whereupon the load prediction result of the declaration target day is output.
Embodiment 2
An anti-benefit-deviation neural network prediction device based on cost association, as shown in Fig. 3, includes:
Data input module: for inputting data; in this embodiment, the data input through the data input module include the initialization values of the optimization-training independent variables, input when the optimization training starts, and the relevant feature data of the declaration target day, input when load-declaration prediction is performed;
Neural network model module: for establishing the neural network model;
Optimization target module: for setting the optimization target for training the neural network model;
Optimization training module: for performing optimization training on the neural network model to obtain the trained neural network model;
Result output module: for outputting the load prediction result.
When the anti-benefit-deviation neural network prediction device based on cost association provided by this embodiment is used for power-system load forecasting, the neural network model module first establishes the neural network model, and the optimization target module sets the optimization target for training it; the initialization values of the optimization-training independent variables $W_{ho}(q)$, $B_o(q)$, $W_{ih}(q)$, $B_h(q)$ are then input through the data input module; the optimization training module calls the neural network model module and performs optimization training on the neural network model until training ends and the trained neural network model is obtained; finally, the relevant feature data of the declaration target day are input through the data input module and substituted into the trained neural network model, whereupon the load prediction result of the power system whose load is to be declared is output.
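The module interplay described in the paragraph above can be sketched as a thin wrapper. The class and method names are illustrative, not from the patent; `build_model` and `train_fn` stand in for the neural network model module and the optimization training module:

```python
class PredictionDevice:
    """Sketch of the device: data input, model, optimization training,
    and result output are wired together in the order described above.
    """
    def __init__(self, build_model, train_fn):
        self.build_model = build_model   # neural network model module
        self.train_fn = train_fn         # optimization training module
        self.model = None

    def train(self, init_values):
        """Data input (initialization values) followed by training."""
        self.model = self.train_fn(self.build_model(), init_values)
        return self

    def predict(self, features):
        """Data input (target-day features) followed by result output."""
        return self.model(features)
```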
In this embodiment, the neural network model established by the neural network model module is a traditional BP neural network, divided into three layers: an input layer, a hidden layer, and an output layer;
The output of the hidden layer is as follows:

$$S(q) = f_h\left[ W_{ih}^{T}(q) \cdot X(q) + B_h(q) \right]$$

wherein $f_h[\cdot]$ denotes the transfer function of the hidden-layer neurons; q denotes the iteration step number; $W_{ih}^{T}(q)$ denotes the weight matrix between the input layer and the hidden layer at the q-th iteration; X(q) denotes the input of the input layer at the q-th iteration; $B_h(q)$ denotes the bias-variable matrix of the hidden layer at the q-th iteration; and S(q) denotes the output of the hidden layer at the q-th iteration;
The output of the output layer is as follows:

$$Y(q) = f_o\left[ W_{ho}^{T}(q) \cdot S(q) + B_o(q) \right]$$

wherein $f_o[\cdot]$ denotes the transfer function of the output-layer neurons; $W_{ho}^{T}(q)$ denotes the weight matrix between the hidden layer and the output layer at the q-th iteration; $B_o(q)$ denotes the bias-variable matrix of the output layer at the q-th iteration; and Y(q) denotes the output of the output layer at the q-th iteration.
In this embodiment, the optimization target set by the optimization target module comprises two parts, a precision target Er and a cost target Cos, as follows:

$$\min \Rightarrow G = Er + \beta \cdot Cos$$

wherein Er denotes the precision target of the optimization training, Cos denotes the cost target of the optimization training, and β denotes a penalty-term coefficient;
In this embodiment, the precision target is calculated in the same way as in traditional BP neural network training, using the F-norm;
The cost target takes into account the electricity prices of both the upper-level and the lower-level electricity market, and is built with a virtual neuron, as follows:

$$\begin{cases}
Cos = \mathrm{sum}(Y \cdot PDA) + \mathrm{sum}(Ac \cdot PRT \cdot \varepsilon(Ac)) \\
\mathrm{sum}(Y \cdot PDA) = \sum_{o=1}^{O} \sum_{k=1}^{K} y_{ok} \cdot pda_{ok} \\
\mathrm{sum}(Ac \cdot PRT \cdot \delta(Ac)) = \sum_{o=1}^{O} \sum_{k=1}^{K} \left[ (t_{ok} - y_{ok}) \cdot prt_{ok} \cdot \delta(t_{ok} - y_{ok}) \right] \\
\tanh(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \\
\varepsilon(x) \approx \delta(x) = \dfrac{\tanh(\alpha \cdot x) + 1}{2}, \quad \alpha > 1
\end{cases}$$

wherein Y is the output matrix of the neural network training set, with the element in row o, column k denoted $y_{ok}$; PDA is the electricity-price matrix of the upper-level market (a typical upper-level market is the day-ahead market of the American electricity market), with the element in row o, column k denoted $pda_{ok}$; PRT is the electricity-price matrix of the lower-level market (a typical lower-level market is the real-time market of the American electricity market), with the element in row o, column k denoted $prt_{ok}$; T is the true load matrix of the training set (also called the target matrix), with the element in row o, column k denoted $t_{ok}$; Ac is the bias between Y and the true load; ε(x) denotes the excitation function of the virtual neuron; and δ(x) is a step-approximation function.
To restrict the declared load to remain positive and to guarantee a minimum accuracy, the optimization training process must also satisfy the following constraints:

$$Const: \begin{cases} Y > 0 \\ \overline{\left( \dfrac{Ac}{T} \right)} \ll C \end{cases}$$

wherein C denotes an error threshold set by the load-declaring decision maker; this constraint means that the average error during the optimization must not exceed C.
In this embodiment, the initialization values of the optimization-training independent variables $W_{ho}(q)$, $B_o(q)$, $W_{ih}(q)$, $B_h(q)$ input through the data input module may be taken at random.
In this embodiment, the detailed process by which the optimization training module performs optimization training on the neural network model includes:
substituting the initialization values of the optimization-training independent variables $W_{ho}(q)$, $B_o(q)$, $W_{ih}(q)$, $B_h(q)$, input through the data input module, into the following formula:

$$\begin{cases}
W_{ho}(q+1) = W_{ho}(q) + \lambda \cdot \nabla G_{W_{ho}} \\
B_{o}(q+1) = B_{o}(q) + \lambda \cdot \nabla G_{B_{o}} \\
W_{ih}(q+1) = W_{ih}(q) + \lambda \cdot \nabla G_{W_{ih}} \\
B_{h}(q+1) = B_{h}(q) + \lambda \cdot \nabla G_{B_{h}}
\end{cases}$$

calculating $W_{ho}(q+1)$, $B_o(q+1)$, $W_{ih}(q+1)$, $B_h(q+1)$; then calculating the values of Er and Cos and judging whether the error between the optimization result and the actual value is below the set threshold; if so, the optimization training ends; if not, setting q = q + 1, calculating $W_{ho}(q+1)$, $B_o(q+1)$, $W_{ih}(q+1)$, $B_h(q+1)$ again and recalculating Er and Cos, iterating until the error between the optimization result and the actual value is below the set threshold, at which point the iteration ends and the optimization training is complete.
In this embodiment, to prevent the above iteration from becoming an endless loop, a stop time or a stop iteration count is preset; if the error between the optimization result and the actual value remains above the set threshold, the iteration ends automatically once the loop reaches the stop time or the stop iteration count, and the optimization training ends.
In this embodiment, the calculation formula for the optimization-training independent variables is obtained by the following process:
The optimization training process of this embodiment follows the idea of gradient descent and trains the neural network model on the basis of the optimization target and the optimization constraints. Compared with traditional BP neural network training, the cost target added to the optimization target requires the gradient-descent idea to be extended with a cost correction when forming the algorithm, so the formula expression and calculation result of the gradient in the backpropagation of the optimization training differ from those of traditional BP neural network training, as follows:

$$\theta = PDA - PRT \cdot \delta(T - Y) - (T - Y) \cdot PRT \cdot \delta(T - Y)'$$

wherein Y′ denotes the Jacobian matrix of Y, · denotes the matrix dot (element-wise) product, and × denotes matrix multiplication;
Thus, the calculation result for the independent variables of the optimization training is as follows:

$$\begin{cases}
W_{ho}(q+1) = W_{ho}(q) + \lambda \cdot \nabla G_{W_{ho}} \\
B_{o}(q+1) = B_{o}(q) + \lambda \cdot \nabla G_{B_{o}} \\
W_{ih}(q+1) = W_{ih}(q) + \lambda \cdot \nabla G_{W_{ih}} \\
B_{h}(q+1) = B_{h}(q) + \lambda \cdot \nabla G_{B_{h}}
\end{cases}$$
Finally, it should be noted that the above embodiments are merely illustrative of the technical solution of the present invention and not restrictive. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that the technical solution of the present invention may be modified or equivalently replaced without departing from its spirit and scope, and all such modifications shall fall within the scope of the claims of the present invention.

Claims (8)

1. An anti-benefit-deviation neural network prediction method based on cost association, characterized in that the method includes:
S1. establishing a neural network model;
S2. setting an optimization target for training the neural network model; the optimization target comprises two parts, a precision target and a cost target, as follows:

$$\min \Rightarrow G = Er + \beta \cdot Cos$$

wherein Er denotes the precision target, Cos denotes the cost target, and β denotes a penalty-term coefficient;
S3. performing optimization training on the neural network model to obtain a trained neural network model;
S4. inputting the relevant feature data of the declaration target day into the trained neural network model, whereupon the load prediction result of the declaration target day is output.
2. according to the method described in claim 1, it is characterized in that, the cost objective is calculated according to the following formula:
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <mi>C</mi> <mi>o</mi> <mi>s</mi> <mo>=</mo> <mi>s</mi> <mi>u</mi> <mi>m</mi> <mrow> <mo>(</mo> <mi>Y</mi> <mo>&amp;CenterDot;</mo> <mi>P</mi> <mi>D</mi> <mi>A</mi> <mo>)</mo> </mrow> <mo>+</mo> <mi>s</mi> <mi>u</mi> <mi>m</mi> <mrow> <mo>(</mo> <mi>A</mi> <mi>c</mi> <mo>&amp;CenterDot;</mo> <mi>P</mi> <mi>R</mi> <mi>T</mi> <mo>&amp;CenterDot;</mo> <mi>&amp;epsiv;</mi> <mo>(</mo> <mrow> <mi>A</mi> <mi>c</mi> </mrow> <mo>)</mo> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mi>s</mi> <mi>u</mi> <mi>m</mi> <mrow> <mo>(</mo> <mi>Y</mi> <mo>&amp;CenterDot;</mo> <mi>P</mi> <mi>D</mi> <mi>A</mi> <mo>)</mo> </mrow> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>o</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>O</mi> </munderover> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>K</mi> </munderover> <msub> <mi>y</mi> <mrow> <mi>o</mi> <mi>k</mi> </mrow> </msub> <mo>&amp;CenterDot;</mo> <msub> <mi>pda</mi> <mrow> <mi>o</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mi>s</mi> <mi>u</mi> <mi>m</mi> <mrow> <mo>(</mo> <mi>A</mi> <mi>c</mi> <mo>&amp;CenterDot;</mo> <mi>P</mi> <mi>R</mi> <mi>T</mi> <mo>&amp;CenterDot;</mo> <mi>&amp;delta;</mi> <mo>(</mo> <mrow> <mi>A</mi> <mi>c</mi> </mrow> <mo>)</mo> <mo>)</mo> </mrow> <mo>=</mo> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>o</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>O</mi> </munderover> <munderover> <mo>&amp;Sigma;</mo> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>K</mi> </munderover> <mo>&amp;lsqb;</mo> <mrow> <mo>(</mo> <msub> <mi>t</mi> <mrow> <mi>o</mi> <mi>k</mi> </mrow> </msub> <mo>-</mo> <msub> <mi>y</mi> <mrow> <mi>o</mi> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mo>&amp;CenterDot;</mo> <msub> <mi>prt</mi> <mrow> <mi>o</mi> <mi>k</mi> </mrow> </msub> <mo>&amp;CenterDot;</mo> <mi>&amp;delta;</mi> <mrow> <mo>(</mo> <msub> <mi>t</mi> <mrow> <mi>o</mi> <mi>k</mi> </mrow> </msub> 
<mo>-</mo> <msub> <mi>y</mi> <mrow> <mi>o</mi> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mo>&amp;rsqb;</mo> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mi>tanh</mi> <mrow> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <msup> <mi>e</mi> <mi>x</mi> </msup> <mo>-</mo> <msup> <mi>e</mi> <mrow> <mo>-</mo> <mi>x</mi> </mrow> </msup> </mrow> <mrow> <msup> <mi>e</mi> <mi>x</mi> </msup> <mo>+</mo> <msup> <mi>e</mi> <mrow> <mo>-</mo> <mi>x</mi> </mrow> </msup> </mrow> </mfrac> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mi>&amp;epsiv;</mi> <mrow> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> <mo>&amp;ap;</mo> <mi>&amp;delta;</mi> <mrow> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mrow> <mi>tanh</mi> <mrow> <mo>(</mo> <mi>&amp;alpha;</mi> <mo>&amp;CenterDot;</mo> <mi>x</mi> <mo>)</mo> </mrow> <mo>+</mo> <mn>1</mn> </mrow> <mn>2</mn> </mfrac> <mo>,</mo> <mi>&amp;alpha;</mi> <mo>&gt;</mo> <mn>1</mn> </mrow> </mtd> </mtr> </mtable> </mfenced>
where Y is the output matrix of the neural network training set, whose element in row o, column k is denoted y_ok; PDA is the electricity price matrix of the higher-level market, whose element in row o, column k is denoted pda_ok; PRT is the electricity price matrix of the subordinate market, whose element in row o, column k is denoted prt_ok; T is the true load matrix of the neural network training set, whose element in row o, column k is denoted t_ok; Ac is the deviation between Y and the true load; ε(x) denotes the excitation (step) function of the neuron; and δ(x) is a smooth function approximating that step.
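As an illustration, the cost objective above (day-ahead purchase cost plus the step-weighted deviation cost) can be sketched in Python. This is a minimal sketch, not the patent's implementation; all function names (`step_approx`, `total_cost`) and the default `alpha` value are assumptions for illustration.

```python
import numpy as np

def step_approx(x, alpha=10.0):
    # Smooth, differentiable approximation of the unit step epsilon(x):
    # delta(x) = (tanh(alpha * x) + 1) / 2, with alpha > 1.
    return (np.tanh(alpha * x) + 1.0) / 2.0

def total_cost(Y, T, PDA, PRT, alpha=10.0):
    # Cos = sum(Y . PDA) + sum(Ac . PRT . delta(Ac)), with Ac = T - Y:
    # the cost of the declared load at the higher-level market price,
    # plus the cost of the positive deviation (under-declared load)
    # bought at the subordinate market price.
    Ac = T - Y                       # deviation: true load minus forecast
    day_ahead = np.sum(Y * PDA)      # sum over o, k of y_ok * pda_ok
    deviation = np.sum(Ac * PRT * step_approx(Ac, alpha))
    return day_ahead + deviation
```

With a large `alpha`, `step_approx` gates the deviation term on `Ac > 0`, so only load that exceeds the declared amount incurs the subordinate-market cost.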
3. The method according to claim 2, wherein the optimization training process in step S3 must also satisfy the following constraint:
$$
Const:\ \begin{cases} Y > 0 \\[2pt] \overline{\left( \dfrac{Ac}{T} \right)} \ll C \end{cases}
$$
where C denotes the error threshold set by the load-declaration decision maker.
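The constraint above can be checked mechanically: every forecast output must be positive, and the mean relative deviation must stay far below the threshold C. A minimal sketch, with all names (`constraints_satisfied`) and the reading of "≪" as "below `margin * C`" being assumptions, not the patent's definition:

```python
import numpy as np

def constraints_satisfied(Y, T, C, margin=0.1):
    # Const: (1) Y > 0 elementwise;
    #        (2) mean(Ac / T) << C, with Ac = T - Y.
    # "<<" is interpreted here as "below margin * C" (assumed).
    Ac = T - Y
    mean_rel = np.mean(np.abs(Ac / T))   # mean relative deviation
    return bool(np.all(Y > 0) and mean_rel < margin * C)
```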
4. The method according to claim 3, wherein step S3 specifically comprises:
S31. Establish the optimization training method following the idea of gradient descent, and compute the independent variables of the optimization training according to the following formula:
<mfenced open = "{" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>W</mi> <mrow> <mi>h</mi> <mi>o</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>W</mi> <mrow> <mi>h</mi> <mi>o</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>)</mo> </mrow> <mo>+</mo> <mi>&amp;lambda;</mi> <mo>&amp;CenterDot;</mo> <mo>&amp;dtri;</mo> <msub> <mi>G</mi> <msub> <mi>W</mi> <mrow> <mi>h</mi> <mi>o</mi> </mrow> </msub> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>B</mi> <mi>o</mi> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>B</mi> <mi>o</mi> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>)</mo> </mrow> <mo>+</mo> <mi>&amp;lambda;</mi> <mo>&amp;CenterDot;</mo> <mo>&amp;dtri;</mo> <msub> <mi>G</mi> <msub> <mi>B</mi> <mi>o</mi> </msub> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>W</mi> <mrow> <mi>i</mi> <mi>h</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>W</mi> <mrow> <mi>i</mi> <mi>h</mi> </mrow> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>)</mo> </mrow> <mo>+</mo> <mi>&amp;lambda;</mi> <mo>&amp;CenterDot;</mo> <mo>&amp;dtri;</mo> <msub> <mi>G</mi> <msub> <mi>W</mi> <mrow> <mi>i</mi> <mi>h</mi> </mrow> </msub> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>B</mi> <mi>h</mi> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>+</mo> <mn>1</mn> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>B</mi> <mi>h</mi> </msub> <mrow> <mo>(</mo> <mi>q</mi> <mo>)</mo> </mrow> <mo>+</mo> <mi>&amp;lambda;</mi> <mo>&amp;CenterDot;</mo> <mo>&amp;dtri;</mo> <msub> <mi>G</mi> <msub> <mi>B</mi> <mi>h</mi> </msub> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced>
where W_ih denotes the weight matrix between the input layer and the hidden layer; B_h denotes the bias variable matrix of the hidden layer; W_ho denotes the weight matrix between the hidden layer and the output layer; B_o denotes the bias variable matrix of the output layer; and λ is the iteration step-size coefficient, provided by the load-declaration decision maker: the larger the value of λ, the faster the iteration converges, but the larger the error;
S32. Initialize the independent variables W_ho(q), B_o(q), W_ih(q), B_h(q) of the optimization training;
S33. Compute the independent variables W_ho(q+1), B_o(q+1), W_ih(q+1), B_h(q+1) of the optimization training;
S34. Compute the values of Er and Cos, and judge whether the terminal condition is met; if so, the iteration terminates and the optimization training ends; if not, set q = q + 1 and go to step S33.
5. The method according to claim 4, wherein the criterion of "whether the terminal condition is met" specifically comprises: judging from the values of Er and Cos whether the error between the optimization result and the actual value is less than a set threshold; if so, the terminal condition is met; if not, the terminal condition is not met.
6. The method according to claim 5, wherein the method further comprises: to prevent the iterative process of step S34 from becoming an endless loop, presetting a stop time or a stop cycle count; if the error between the optimization result and the actual value remains above the set threshold, the iterative loop automatically satisfies the terminal condition once the stop time is reached or the stop cycle count is exceeded.
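Steps S31 to S34, together with the loop safeguard of claim 6, can be sketched as a generic iterative update. This is an illustrative sketch only; the function names (`optimize`, `grad_fn`, `error_fn`) are hypothetical, and the `+ λ·∇G` update mirrors the sign convention of the formula in step S31.

```python
def optimize(params, grad_fn, error_fn, lam=0.01, tol=1e-6, max_iter=10000):
    # S32: params holds the initialized variables (e.g. W_ho, B_o, W_ih, B_h).
    # S33: each iteration applies X(q+1) = X(q) + lam * grad_G_X.
    # S34: stop when the error criterion falls below the threshold.
    # max_iter caps the loop so it cannot become an endless cycle (claim 6).
    for q in range(max_iter):
        grads = grad_fn(params)
        params = {name: value + lam * grads[name]
                  for name, value in params.items()}
        if error_fn(params) < tol:   # terminal condition met
            break
    return params
```

For example, maximizing G(w) = -(w - 3)^2 with gradient -2(w - 3) drives w toward 3, with `max_iter` guaranteeing termination even if the tolerance is never reached.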
7. The method according to claim 1, wherein the correlated feature data comprise the temperature, humidity, and weather conditions of the target time, and historical load data.
8. A cost-association-based benefit-deviation-resistant neural network prediction device, wherein the device comprises:
a data input module, for inputting data;
a neural network model module, for establishing the neural network model;
an optimization target module, for setting the optimization target for the optimization training of the neural network model;
an optimization training module, for performing optimization training on the neural network model to obtain the trained neural network model;
a result output module, for outputting the load prediction result.
CN201711408089.5A 2017-12-22 2017-12-22 Cost association-based benefit deviation resistant neural network prediction method and device Active CN108108842B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711408089.5A CN108108842B (en) 2017-12-22 2017-12-22 Cost association-based benefit deviation resistant neural network prediction method and device


Publications (2)

Publication Number Publication Date
CN108108842A true CN108108842A (en) 2018-06-01
CN108108842B CN108108842B (en) 2022-04-05

Family

ID=62212491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711408089.5A Active CN108108842B (en) 2017-12-22 2017-12-22 Cost association-based benefit deviation resistant neural network prediction method and device

Country Status (1)

Country Link
CN (1) CN108108842B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070185823A1 (en) * 2005-07-28 2007-08-09 Dingguo Chen Load prediction based on-line and off-line training of neural networks
CN103295081A (en) * 2013-07-02 2013-09-11 上海电机学院 Electrical power system load prediction method based on back propagation (BP) neural network
CN104484727A (en) * 2015-01-12 2015-04-01 江南大学 Short-term load prediction method based on interconnected fuzzy neural network and vortex search
CN107370170A (en) * 2017-06-23 2017-11-21 浙江大学 A kind of energy storage system capacity collocation method for considering capacity price of electricity and load prediction error



Similar Documents

Publication Publication Date Title
Wang et al. Stochastic combined heat and power dispatch based on multi-objective particle swarm optimization
CN103942461B (en) Water quality parameter Forecasting Methodology based on online online-sequential extreme learning machine
CN110458443A (en) A kind of wisdom home energy management method and system based on deeply study
CN107067121A (en) A kind of improvement grey wolf optimized algorithm based on multiple target
CN104636823B (en) A kind of wind power forecasting method
CN107578124A (en) The Short-Term Load Forecasting Method of GRU neutral nets is improved based on multilayer
CN105678407A (en) Daily electricity consumption prediction method based on artificial neural network
CN106815782A (en) A kind of real estate estimation method and system based on neutral net statistical models
CN107067190A (en) The micro-capacitance sensor power trade method learnt based on deeply
CN104636801A (en) Transmission line audible noise prediction method based on BP neural network optimization
CN103593719B (en) A kind of rolling power-economizing method based on slab Yu contract Optimized Matching
CN105631517A (en) Photovoltaic power generation power short term prediction method based on mind evolution Elman neural network
CN106203683A (en) A kind of modeling method of power customer electro-load forecast system
CN110287509A (en) Flexibility analysis and the fault of construction diagnosis of municipal heating systems and localization method and system
CN106779253A (en) The term load forecasting for distribution and device of a kind of meter and photovoltaic
CN115940294B (en) Multi-stage power grid real-time scheduling strategy adjustment method, system, equipment and storage medium
CN107590570A (en) A kind of bearing power Forecasting Methodology and system
Shang et al. Production scheduling optimization method based on hybrid particle swarm optimization algorithm
CN107145968A (en) Photovoltaic apparatus life cycle cost Forecasting Methodology and system based on BP neural network
CN106355980A (en) Power grid regulation capability predication method based on limited memory extreme learning machine
Kutschenreiter-Praszkiewicz Application of artificial neural network for determination of standard time in machining
CN108108842A (en) A kind of anti-benefit deviation neural net prediction method and device based on cost association
CN103489037B (en) A kind of Forecasting Methodology that can power generating wind resource
CN104850918A (en) Node load prediction method taking power grid topology constraints into consideration
Wu et al. Applications of AI techniques to generation planning and investment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant