CN113624998A - Electric boiler heat supplementing and heat storing cost optimization method and device based on electric power big data - Google Patents


Info

Publication number
CN113624998A
CN113624998A (application CN202111092993.6A)
Authority
CN
China
Prior art keywords
data
model
missing
layer
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111092993.6A
Other languages
Chinese (zh)
Inventor
李伟
胡博
王丽霞
齐智刚
王南
张智儒
高强
刘育博
杨晶
孟凡尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Xuneng Technology Co Ltd
State Grid Liaoning Electric Power Co Ltd
Original Assignee
Liaoning Xuneng Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Xuneng Technology Co Ltd
Priority to CN202111092993.6A
Publication of CN113624998A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01Q: SCANNING-PROBE TECHNIQUES OR APPARATUS; APPLICATIONS OF SCANNING-PROBE TECHNIQUES, e.g. SCANNING PROBE MICROSCOPY [SPM]
    • G01Q10/00: Scanning or positioning arrangements, i.e. arrangements for actively controlling the movement or position of the probe
    • G01Q10/04: Fine scanning or positioning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06: Energy or water supply


Abstract

The invention provides a method and a device for optimizing electric boiler heat supplementing and heat storing cost based on electric power big data. The method comprises the steps of obtaining weather forecast data, equipment operation data and power consumption data, and preprocessing the obtained data; extracting cost characteristics and operating efficiency characteristics, performing dimension reduction processing on the extracted characteristics, and dividing the extracted characteristics into training set data and test set data; establishing a BP neural network model, inputting training set data into the BP neural network model for training, and taking the trained neural network model as a heat supplementing and storing strategy model; and inputting the test set data into the heat supplementing and heat storing strategy model, and outputting the energy cost. In this way, the operation strategy and the energy-saving and consumption-reducing effects of the heat exchange station can be fitted, the fitting efficiency is obviously improved, and the energy utilization efficiency of the heat exchange station is improved.

Description

Electric boiler heat supplementing and heat storing cost optimization method and device based on electric power big data
Technical Field
The invention relates to the field of electric power big data, in particular to an electric boiler heat supplementing and heat storing cost optimization method and device based on electric power big data.
Background
Winter heating in northern regions relies mainly on centralized heating. As the heating area keeps expanding, the losses along the heat supply network keep growing, so heat exchange stations at the far end of the network commonly lack heating capacity. To solve this problem, electric energy heat supplementing devices have been added in many heat exchange stations, using electric energy to supplement heating inside the station and thereby make up the shortfall at the end of the network. For example, the flow rate is adjusted automatically according to the return water temperature: when the return water temperature is high, the water valve flow rate is reduced; when it is low, the flow rate is increased. At a higher flow rate, the hot water of the primary network flows through the plate heat exchanger faster and exchanges more heat energy to the secondary network, so the supply water temperature of the secondary network can be guaranteed.
However, the system is complex and many devices must be controlled, so manual control performs poorly; without an algorithm model to assist, the heat supplementing system runs at low efficiency, and an excessively high flow rate causes heat loss. Moreover, because adjustment only starts after a drop in the return water temperature has been sensed, the adjustment lags, and the adjustment strategy cannot be anticipated from real-time weather conditions.
Disclosure of Invention
According to the embodiment of the invention, an electric boiler heat supplementing and heat storing cost optimization scheme based on electric power big data is provided. The scheme provides an operation strategy for optimizing the cost of heat compensation and heat storage of the electric boiler for the heat exchange station, and improves the energy utilization efficiency of the heat exchange station.
In a first aspect of the invention, a method for optimizing the heat supplementing and storing cost of an electric boiler based on electric power big data is provided. The method comprises the following steps:
acquiring weather forecast data, equipment operation data and electric power acquisition data, and preprocessing the acquired data;
carrying out cost characteristic extraction and operation efficiency characteristic extraction on the preprocessed data, carrying out dimensionality reduction on the extracted characteristics by using a PCA (principal component analysis) algorithm, and dividing the data subjected to dimensionality reduction into training set data and test set data;
establishing a BP neural network model, inputting the training set data into the BP neural network model for training, and taking the trained neural network model as a heat supplementing and storing strategy model;
and inputting the test set data into the heat supplementing and heat storing strategy model, and outputting the energy cost.
Further, the preprocessing the acquired data includes:
identifying missing data from the acquired data, and supplementing the missing data;
and carrying out normalization processing on the data subjected to the filling-up to obtain preprocessed data.
Further, the identifying missing data from the acquired data, and performing a gap filling on the missing data, includes:
identifying a missing data group object corresponding to the missing data to obtain a missing data group object set;
calculating a similarity class for each missing dataset object in the missing dataset object set from the complete dataset object set;
if the similarity class of the missing data group object does not exist in the complete data group object set, deleting the missing data group object from the missing data group object set;
if only one data group object exists in the similar class of the missing data group object, filling up the missing data of the missing data group object according to the numerical value in the similar class;
if the similarity class of the missing data group object has a plurality of data group objects, calculating the mode of the numerical values in the similarity class corresponding to the missing data, and supplementing the missing data of the missing data group object according to the mode.
Further, the using the PCA algorithm to perform dimensionality reduction processing on the extracted features includes:
converting the preprocessed data into a one-dimensional array according to the sequence of weather forecast data, equipment operation data and power consumption data;
calculating a variance matrix, and decomposing the eigenvalues of the cost characteristic and the operation efficiency characteristic; arranging the eigenvectors of the cost characteristic and the operation efficiency characteristic in a descending order according to the magnitude of the corresponding eigenvalue;
and forming a projection matrix by using the eigenvectors corresponding to the maximum eigenvalue, and projecting the one-dimensional array to the projection matrix to obtain the dimensionality reduction representation of the one-dimensional array.
Further, the BP neural network model adopts an M-P neuron structure, an activation function is a Logistic function, and an optimizer is lbfgs.
Further, inputting the training set data into the BP neural network model for training, including:
inputting the training set data into an input layer of the BP neural network model, respectively calculating the input and output of a hidden layer and an output layer through the forward propagation of signals, and outputting the energy consumption cost by the output layer;
calculating the output error of each layer of neuron of the BP neural network model in a reverse direction from the output layer, and correcting the weight and the threshold of the hidden layer and the output layer once by an error gradient descent method;
and carrying out secondary correction on the weight and the threshold of the hidden layer and the output layer after the primary adjustment to obtain a corrected BP neural network model.
Further, the performing of the secondary correction on the weight and the threshold of the hidden layer and the output layer after the primary adjustment includes:
Δw_ij(k+1) = (1 - mc)·η·δ_i·p_j + mc·Δw_ij(k)

Δθ_i(k+1) = (1 - mc)·η·δ_i + mc·Δθ_i(k)

where k is the number of training iterations; mc is the momentum factor, taken as 0.95; p_j is the output of node j of the previous layer; Δθ_i(k) is the first correction of the hidden-layer threshold; Δw_ij(k) is the first correction of the hidden-layer weight; Δw_ij(k+1) is the second correction of the hidden-layer weight; Δθ_i(k+1) is the second correction of the hidden-layer threshold; η is the learning rate; and δ_i is a coefficient computed as:

δ_i = y_i(1 - y_i)(d_i - y_i)

where d_i is the target output of neuron node i and y_i is the actual output of neuron node i.
Further, the method further comprises:
storing the heat supplementing and heat storing strategy model through a model graph data structure and a model variable value data structure; wherein the model graph data structure comprises:
the first part is used for storing layer information of the model, wherein each layer of information occupies a data block, and a layer serial number, the number of neurons of the current layer and neuron description information are sequentially recorded in the data block;
a second part for storing the parameter propagation direction data of the model; the parameter propagation direction data is described through the connection relations between the front-layer neurons and the target neurons;
the model variable value data structure comprising:
describing each neuron by a neuron number, a bias and a weight array; the weight array is an object array in which each object records a front neuron and the weight of the corresponding signal transmission direction.
In a second aspect of the invention, an electric boiler heat supplementing and heat storing cost optimizing device based on electric power big data is provided. The device includes:
the acquisition module is used for acquiring weather forecast data, equipment operation data and electric power acquisition data and preprocessing the acquired data;
the dimensionality reduction module is used for extracting cost characteristics and operating efficiency characteristics of the preprocessed data, performing dimensionality reduction on the extracted characteristics by using a Principal Component Analysis (PCA) algorithm, and dividing the data subjected to dimensionality reduction into training set data and test set data;
the training module is used for establishing a BP neural network model, inputting the training set data into the BP neural network model for training, and taking the trained neural network model as a heat supplementing and storing strategy model;
and the output module is used for inputting the test set data into the heat supplementing and heat storing strategy model and outputting the energy cost.
In a third aspect of the invention, an electronic device is provided. The electronic device comprises at least one processor and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect of the invention.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of any embodiment of the invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present invention will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 shows a flow chart of a method for electric boiler recuperative heat storage cost optimization based on electric power big data according to an embodiment of the present invention;
FIG. 2 illustrates a flow diagram for supplementing missing data according to an embodiment of the present invention;
FIG. 3 shows a dimension reduction process flow diagram according to an embodiment of the invention;
FIG. 4 shows a schematic diagram of an M-P neuron structure, according to an embodiment of the invention;
FIG. 5 shows a BP neural network architecture diagram according to an embodiment of the invention;
FIG. 6 shows a flow diagram for training a BP neural network model according to an embodiment of the invention;
FIG. 7 shows a block diagram of an electric boiler recuperative heat storage cost optimization device based on electric power big data according to an embodiment of the present invention;
FIG. 8 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean that A exists alone, that both A and B exist, or that B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
According to the invention, the operation strategy and the energy-saving and consumption-reducing effects of the heat exchange station are fitted through the deep neural network algorithm, the optimized BP neural network is adopted, the fitting efficiency is obviously improved, the operation strategy model can be rapidly derived for the heat exchange station, and the energy utilization efficiency of the heat exchange station is improved.
Fig. 1 shows a flowchart of an electric boiler heat compensation and heat storage cost optimization method based on electric power big data according to an embodiment of the present invention.
The method comprises the following steps:
s101, acquiring weather forecast data, equipment operation data and electric power acquisition data, and preprocessing the acquired data.
As an embodiment of the present invention, the weather forecast data may be local weather forecast data. For example, the data structure of the acquired weather forecast data is shown in table 1.
Table 1: data structure of the weather forecast data (reproduced only as an image in the original publication).
The data structure of the device operation data is shown in table 2 as an embodiment of the present invention.
Table 2: data structure of the equipment operation data (reproduced only as an image in the original publication).
As an embodiment of the present invention, a data structure of the power consumption data is shown in table 3.
Table 3: data structure of the power consumption data (reproduced only as an image in the original publication).
By standardizing the data format of the input data, the adopted data can more accurately reflect the operation strategy and the energy consumption of the heat exchange station.
After the weather forecast data, the equipment operation data and the electric power acquisition data are acquired, the acquired data need to be preprocessed.
As an embodiment of the present invention, preprocessing acquired data includes:
identifying missing data from the acquired data, and supplementing the missing data;
and carrying out normalization processing on the data subjected to the filling-up to obtain preprocessed data.
In this embodiment, identifying missing data from the acquired data set S and supplementing the missing data includes:

S201, identifying the missing data group objects x_i corresponding to the missing data to obtain the missing data group object set MOS, where x_i denotes the i-th missing data group object, i.e., a group of data containing missing values.
As an embodiment of the invention, the missing data takes a null value as a missing mark, and the missing data is identified by identifying the null value. Data loss includes a variety of situations, for example:
1) when a row-column conversion problem occurs during data conversion and data are lost because the data monitoring frequencies differ, null values are taken as the missing marks;
2) when data aggregation calculation during data conversion encounters an insufficient number of aggregation samples, null values are taken as the missing marks;
3) when data are missing and the missing amount does not exceed 20% of the total data amount, null values of the missing data are taken as the missing marks; when more than 20% of a data group is missing, the group should be deleted and not used as model training data.
S202, calculating the similarity class of each missing data group object in the missing data group object set from the complete data group object set; the similarity class is extracted from the known complete data group object set.
If there exists an invertible matrix P such that P⁻¹AP = B, then B is called a similar matrix of A, and the matrices A and B are said to be similar, denoted A ~ B.
S203-1, if the similarity class of the missing data group object does not exist in the complete data group object set, deleting the missing data group object from the missing data group object set.
In this embodiment, if the similarity class S_R(x) of x_i is empty, that is, no similar matrix can be found in the existing complete data group object set, x_i is deleted directly from the acquired data set S.
S203-2, if only one data group object exists in the similar class of the missing data group object, filling up the missing data of the missing data group object according to the values in the similar class.
In this embodiment, if the similarity class S_R(x) of x_i contains only one object, the missing data of x_i are supplemented with the corresponding values from that object.
S203-3, if a plurality of data group objects exist in the similarity class of the missing data group object, calculating the mode of numerical values in the similarity class corresponding to the missing data, and supplementing the missing data of the missing data group object according to the mode.
The mode refers to a numerical value with a significant central tendency point on a statistical distribution, and represents a general level of data, namely a numerical value with the largest occurrence number in a group of data.
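The gap-filling flow of S201 to S203 can be sketched in Python. This is a minimal illustration, assuming NaN serves as the null-value missing mark and approximating the similarity class by exact agreement on the non-missing fields (the patent derives similarity from similar matrices, which are not fully specified here); the function and parameter names are illustrative.

```python
import numpy as np

def fill_missing(records, missing_limit=0.2):
    """Sketch of S201-S203: identify missing data group objects, drop the
    overly sparse ones (the 20% rule above), and fill the rest from their
    similarity class."""
    complete = [r for r in records if not np.isnan(r).any()]
    filled = []
    for r in records:
        mask = np.isnan(r)
        if not mask.any():
            filled.append(r)                # nothing to fill
            continue
        if mask.mean() > missing_limit:
            continue                        # >20% missing: delete the group
        # Similarity class: complete objects agreeing on all observed fields
        # (an assumption standing in for the similar-matrix construction).
        similar = [c for c in complete if np.allclose(c[~mask], r[~mask])]
        if not similar:
            continue                        # S203-1: no similar object, delete
        if len(similar) == 1:
            r[mask] = similar[0][mask]      # S203-2: copy the single match
        else:
            for idx in np.where(mask)[0]:   # S203-3: fill with the mode
                vals, counts = np.unique([c[idx] for c in similar],
                                         return_counts=True)
                r[idx] = vals[np.argmax(counts)]
        filled.append(r)
    return filled
```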
As an embodiment of the present invention, the normalizing the data after the padding to obtain the preprocessed data includes:
Min-max standardization applies a linear transformation to the original data. Let minA and maxA be the minimum value and the maximum value of the attribute set A, respectively; min-max standardization maps an original value x in A to a value x' in the interval [0, 1] by the formula:

x' = (x - minA) / (maxA - minA)

where A is the attribute set; minA is the minimum of A; maxA is the maximum of A; and A' is the normalized attribute set.
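A one-function NumPy sketch of this min-max standardization; the function name is illustrative.

```python
import numpy as np

def min_max_normalize(a):
    """Map each value x of attribute set A to x' = (x - minA) / (maxA - minA),
    so every normalized value lies in the interval [0, 1]."""
    a = np.asarray(a, dtype=float)
    min_a, max_a = a.min(), a.max()
    return (a - min_a) / (max_a - min_a)

# Example: min_max_normalize([2.0, 4.0, 10.0]) -> array([0.  , 0.25, 1.  ])
```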
And S102, performing cost characteristic extraction and operation efficiency characteristic extraction on the preprocessed data, performing dimensionality reduction on the extracted characteristics by using a Principal Component Analysis (PCA) algorithm, and dividing the data subjected to dimensionality reduction into training set data and test set data.
In the field of machine learning, different evaluation indexes (that is, the different features in a feature vector) often have different dimensions and dimensional units, which affects the result of data analysis. To eliminate the dimensional influence between indexes, data standardization (that is, gap filling plus normalization) is required so that the indexes become comparable and machine learning runs more efficiently.
As an embodiment of the present invention, the performing dimension reduction processing on the extracted features by using a PCA algorithm includes:
s301, converting the preprocessed data into a one-dimensional array according to the sequence of weather forecast data, equipment operation data and electric power utilization data.
S302, calculating a variance matrix, and decomposing eigenvalues of the cost characteristic and the operation efficiency characteristic; and arranging the eigenvectors of the cost characteristic and the operation efficiency characteristic in a descending order according to the size of the corresponding eigenvalue.
In this embodiment, calculating the variance matrix includes:

S_T = (1/n)·Σ_{i=1..n} (x_i - μ)(x_i - μ)^T

where S_T is the variance matrix, an (m×n)×(m×n) matrix; μ is the mean vector; T denotes the matrix transpose; n is the number of matrix parameters; and x_i is a parameter value.

S303, forming a projection matrix from the eigenvectors corresponding to the largest eigenvalues, and projecting the one-dimensional array onto the projection matrix to obtain the dimension-reduced representation of the one-dimensional array.
As an embodiment of the invention, the eigenvectors corresponding to the k (k ≤ N - 1) largest eigenvalues form the projection matrix A_k, and the converted one-dimensional array (x_1, …, x_n) is projected onto A_k to obtain the dimension-reduced representation (y_1, …, y_n). The mapping formula is:

y = (X - μ)·A_k

where y is the dimension-reduced one-dimensional array vector; X is the original one-dimensional array vector; μ is the mean vector; and A_k is formed from the eigenvectors corresponding to the first k largest eigenvalues (k ≤ N - 1).
Dimensionality reduction makes the data easier to compute with and to visualize; its deeper significance lies in extracting and synthesizing the effective information while rejecting the useless information, avoiding the heavy computation and long training time caused by an oversized feature matrix.
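Steps S301 to S303 condense into a short NumPy sketch: compute the variance (covariance) matrix of the centered one-dimensional arrays, eigendecompose it, sort the eigenvectors by descending eigenvalue and project onto the top k. `pca_project` is an illustrative name, and k is expected to satisfy k ≤ N - 1 as stated above.

```python
import numpy as np

def pca_project(X, k):
    """PCA dimension reduction per S301-S303.

    X: (num_samples, num_features) array whose rows are the converted
    one-dimensional arrays. Returns y = (X - mu) @ A_k, where A_k stacks
    the eigenvectors of the variance matrix belonging to the k largest
    eigenvalues."""
    mu = X.mean(axis=0)
    centered = X - mu
    S = centered.T @ centered / X.shape[0]   # variance matrix S_T
    eigvals, eigvecs = np.linalg.eigh(S)     # eigendecomposition (symmetric)
    order = np.argsort(eigvals)[::-1]        # descending eigenvalues
    A_k = eigvecs[:, order[:k]]              # projection matrix A_k
    return centered @ A_k                    # dimension-reduced representation
```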
S103, establishing a BP neural network model, inputting the training set data into the BP neural network model for training, and taking the trained neural network model as a heat supplementing and storing strategy model.
As an embodiment of the invention, the BP neural network model adopts an M-P neuron structure, an activation function is a Logistic function, and an optimizer is lbfgs.
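The patent names the activation (Logistic) and the optimizer (lbfgs) but no library; scikit-learn's MLPRegressor exposes exactly these options, so a hedged sketch of the model configuration could look as follows (the hidden layer size and iteration cap are assumptions, not values from the patent):

```python
from sklearn.neural_network import MLPRegressor

# BP network with logistic activation and an lbfgs optimizer, matching the
# options named in the text; hidden_layer_sizes and max_iter are assumed.
model = MLPRegressor(hidden_layer_sizes=(32,),
                     activation='logistic',
                     solver='lbfgs',
                     max_iter=1000)

# model.fit(X_train, y_train)           # training set data from S102
# energy_cost = model.predict(X_test)   # test set data -> energy cost (S104)
```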
As shown in fig. 4, in an M-P neuron structure a neuron receives input signals transmitted from n other neurons and superimposes them according to certain weights; the superimposed stimulation intensity S can be expressed by the following formula:

S = Σ_{i=1..n} w_i·x_i

where S is the superimposed stimulation intensity; n is the number of neurons; w_i is the connection weight of the i-th neuron; and x_i is the input of the i-th neuron.
After the superimposed stimulation intensity S is obtained, it is compared with the threshold of the current neuron, and the result is expressed and output through an activation function. The M-P neuron model can be represented by the following formula:

y = f(S - θ)

where θ is the neuron threshold (bias); f is the activation function; and y is the output of the M-P neuron model.
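In code, the M-P neuron reduces to a weighted superposition, a threshold comparison and a logistic squashing; a minimal sketch:

```python
import numpy as np

def mp_neuron(x, w, theta):
    """M-P neuron: S = sum_i w_i * x_i, then y = f(S - theta) with the
    logistic activation f(z) = 1 / (1 + exp(-z))."""
    s = np.dot(w, x)                           # superimposed stimulation S
    return 1.0 / (1.0 + np.exp(-(s - theta)))  # output y
```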
In this embodiment, the structure of the BP neural network is shown in fig. 5, where x_j denotes the input of node j of the input layer, j = 1, 2, …, M; w_ij denotes the weight from node i of the hidden layer to node j of the input layer; θ_i denotes the threshold of node i of the hidden layer; φ(x) denotes the excitation function of the hidden layer; w_ki denotes the weight from node k of the output layer to node i of the hidden layer, i = 1, 2, …, q; a_k denotes the threshold of node k of the output layer, k = 1, 2, …, L; Ψ(x) denotes the excitation function of the output layer; and O_k denotes the output of node k of the output layer.
In this embodiment, as shown in fig. 6, inputting the training set data into the BP neural network model for training includes:
s601, inputting the training set data to an input layer of the BP neural network model, respectively calculating the input and output of a hidden layer and an output layer through the forward propagation of signals, and outputting the energy consumption cost by the output layer.
In this embodiment, the forward propagation of the signal in the BP neural network includes:

Input net_i of node i of the hidden layer:

net_i = Σ_{j=1..M} w_ij·x_j - θ_i

Output y_i of node i of the hidden layer:

y_i = φ(net_i)

Input net_k of node k of the output layer:

net_k = Σ_{i=1..q} w_ki·y_i - a_k

Output O_k of node k of the output layer:

O_k = Ψ(net_k)
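The four formulas map directly to a NumPy forward pass; this sketch assumes logistic excitation functions for both layers (consistent with the Logistic activation named above), and the matrix and vector names are illustrative.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W_hidden, theta, W_out, a):
    """Forward propagation: net_i = sum_j w_ij x_j - theta_i, y_i = phi(net_i),
    net_k = sum_i w_ki y_i - a_k, O_k = psi(net_k).

    x: (M,) input vector; W_hidden: (q, M); theta: (q,); W_out: (L, q); a: (L,).
    Returns the network output O and the hidden-layer output y."""
    net_hidden = W_hidden @ x - theta   # hidden-layer inputs net_i
    y = logistic(net_hidden)            # hidden-layer outputs y_i
    net_out = W_out @ y - a             # output-layer inputs net_k
    return logistic(net_out), y         # output-layer outputs O_k
```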
And S602, reversely calculating the output error of each layer of neuron of the BP neural network model from the output layer, and performing primary correction on the weight and the threshold of the hidden layer and the output layer by an error gradient descent method.
In this embodiment, the first correction of the weights and thresholds of the hidden layer and the output layer by the error gradient descent method, through the back propagation of the error in the BP neural network, proceeds as follows: the output error of each layer of neurons is calculated layer by layer backwards from the output layer, and the weights and thresholds of each layer are then adjusted according to the error gradient descent method so that the final output of the modified network approaches the expected value.
The quadratic error criterion function for each sample p is E_p:

E_p = (1/2)·Σ_{k=1..L} (T_k - O_k)²

The total error criterion function of the system over the P training samples is:

E = (1/2)·Σ_{p=1..P} Σ_{k=1..L} (T_k^p - O_k^p)²
where T_k denotes the target output of node k; O_k^p denotes the actual output of node k for the p-th sample; and T_k^p denotes the target output of node k for the p-th sample.
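As a quick numeric check, the total error criterion is two lines of NumPy (array names are illustrative):

```python
import numpy as np

def total_error(T, O):
    """E = 1/2 * sum over samples p and output nodes k of (T_k^p - O_k^p)^2;
    T and O are (P, L) arrays of target and actual outputs."""
    return 0.5 * np.sum((np.asarray(T) - np.asarray(O)) ** 2)

# Example: total_error([[1.0, 0.0]], [[0.8, 0.1]]) -> 0.5 * (0.04 + 0.01) = 0.025
```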
According to the error gradient descent method, the correction Δw_ki of the output-layer weights, the correction Δa_k of the output-layer thresholds, the correction Δw_ij of the hidden-layer weights and the correction Δθ_i of the hidden-layer thresholds are computed in turn:

Δw_ki = -η·∂E/∂w_ki

Δa_k = -η·∂E/∂a_k

Δw_ij = -η·∂E/∂w_ij

Δθ_i = -η·∂E/∂θ_i
Where η is the learning rate.
Output layer weight correction formula:

Δw_ki = η·δ_k·y_i, where δ_k = (T_k - O_k)·Ψ′(net_k)

Output layer threshold correction formula:

Δa_k = η·δ_k

Hidden layer weight correction formula:

Δw_ij = η·δ_i·x_j, where δ_i = (Σ_{k=1..L} δ_k·w_ki)·φ′(net_i)

Hidden layer threshold correction formula:

Δθ_i = η·δ_i
and S603, carrying out secondary correction on the weight and the threshold of the hidden layer and the output layer after primary adjustment to obtain a corrected BP neural network model.
In this embodiment, to improve the fitting efficiency, secondary correction formulas for the weights and thresholds with an additional momentum factor are introduced. Performing the secondary correction on the weights and thresholds of the hidden layer and the output layer after the first adjustment includes:

Δw_ij(k+1) = (1 - mc)·η·δ_i·p_j + mc·Δw_ij(k)

Δθ_i(k+1) = (1 - mc)·η·δ_i + mc·Δθ_i(k)

where k is the number of training iterations; mc is the momentum factor, taken as 0.95; p_j is the output of node j of the previous layer; Δθ_i(k) is the first correction of the hidden-layer threshold; Δw_ij(k) is the first correction of the hidden-layer weight; Δw_ij(k+1) is the second correction of the hidden-layer weight; Δθ_i(k+1) is the second correction of the hidden-layer threshold; η is the learning rate; and δ_i is a coefficient computed as:

δ_i = y_i(1 - y_i)(d_i - y_i)

where d_i is the target output of neuron node i and y_i is the actual output of neuron node i.
Performing the secondary correction of the hidden-layer and output-layer weights and thresholds with the additional momentum factor propagates the influence of the previous weight (or threshold) change through the momentum factor, which helps the network jump out of local minima of the error surface; without the additional momentum method, the model parameters may, with some probability, remain stuck at a local minimum.
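A sketch of the secondary correction for a single hidden-layer connection (i, j): mc = 0.95 follows the text, while the learning rate eta is an assumed value.

```python
def delta_coefficient(y_i, d_i):
    """delta_i = y_i * (1 - y_i) * (d_i - y_i), matching the formula above."""
    return y_i * (1.0 - y_i) * (d_i - y_i)

def secondary_correction(dw_prev, dtheta_prev, delta_i, p_j, eta=0.1, mc=0.95):
    """Second correction with the additional momentum factor:
    dw(k+1)     = (1 - mc) * eta * delta_i * p_j + mc * dw(k)
    dtheta(k+1) = (1 - mc) * eta * delta_i       + mc * dtheta(k)
    Carrying over the previous change through mc helps the network jump out
    of local minima of the error surface."""
    dw_next = (1.0 - mc) * eta * delta_i * p_j + mc * dw_prev
    dtheta_next = (1.0 - mc) * eta * delta_i + mc * dtheta_prev
    return dw_next, dtheta_next
```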
And S104, inputting the test set data into the heat supplementing and heat storing strategy model, and outputting the energy cost. The energy cost includes the electricity cost and the heat cost.
In some embodiments of the invention, further comprising:
storing the heat supplementing and heat storing strategy model through a model graph data structure and a model variable value data structure; wherein the model graph data structure comprises:
the first part is used for storing layer information of the model, the information of each layer occupies a data block, and the data block sequentially records a layer sequence number, the number of neurons of the current layer and neuron description information.
A second part for storing the parameter propagation direction data of the model, which is described through the connection relations between the front-layer neurons and the target neurons; for example, an entry of the second part is represented as { front-layer neuron, target neuron, connected }, where the front-layer neuron and the target neuron are identified by their unique numbers and the connected flag is a logical value.
The model variable value data structure comprising:
for each neuron, it is described by a neuron number, bias, and weight array, e.g., { neuron number,bias, weight array }; the weight array is an object array, and the object array comprises the preposed neurons and the weights of all signal transmission directions. The data structure thereof is, for example: [ { 1 st leading neuron numbering, weight w1j{ 2 nd leading neuron numbering, weight w2j},……]。
Storing the heat supplementing and heat storing strategy model through the model graph data structure and the model variable value data structure keeps the model graph data and the model variable value data separate; this makes it convenient to fit different training data on the same model, to reuse the original training results flexibly, and to migrate the model.
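An illustrative serialization of the two structures; the patent fixes their content but not a concrete encoding, so every field name below is an assumption.

```python
import json

# First part of the model graph: one data block per layer with layer number,
# neuron count and neuron description information.
model_graph = {
    "layers": [
        {"layer_no": 0, "num_neurons": 3, "desc": "input"},
        {"layer_no": 1, "num_neurons": 4, "desc": "hidden, logistic"},
        {"layer_no": 2, "num_neurons": 1, "desc": "output, energy cost"},
    ],
    # Second part: {front neuron, target neuron, connected} entries with
    # unique neuron numbers and logical connection flags.
    "propagation": [
        {"front": "n0_0", "target": "n1_0", "connected": True},
        {"front": "n1_0", "target": "n2_0", "connected": True},
    ],
}

# Model variable values: one record per neuron with number, bias and the
# weight array of {front neuron, weight} objects.
model_values = [
    {"neuron": "n1_0", "bias": 0.12,
     "weights": [{"front": "n0_0", "w": 0.5}, {"front": "n0_1", "w": -0.3}]},
]

# Keeping graph and values in separate files is what allows retraining on the
# same graph while reusing or migrating the original variable values.
with open("model_graph.json", "w") as f:
    json.dump(model_graph, f)
with open("model_values.json", "w") as f:
    json.dump(model_values, f)
```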
According to the embodiment of the invention, the operation strategy and the energy-saving and consumption-reducing effects of the heat exchange station are fitted through the deep neural network algorithm, the optimized BP neural network is adopted, the fitting efficiency is obviously improved, the operation strategy model can be rapidly led out for the heat exchange station, and the energy utilization efficiency of the heat exchange station is improved.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules illustrated are not necessarily required to practice the invention.
The above is a description of method embodiments, and the embodiments of the present invention are further described below by way of apparatus embodiments.
As shown in fig. 7, the apparatus 700 includes:
the acquisition module 710 is configured to acquire weather forecast data, device operation data, and power consumption data, and preprocess the acquired data;
the dimensionality reduction module 720 is used for extracting cost characteristics and operating efficiency characteristics of the preprocessed data, performing dimensionality reduction on the extracted characteristics by using a PCA (principal component analysis) algorithm, and dividing the data subjected to dimensionality reduction into training set data and test set data;
the training module 730 is used for establishing a BP neural network model, inputting the training set data into the BP neural network model for training, and taking the trained neural network model as a heat supplementing and storing strategy model;
and the output module 740 is used for inputting the test set data into the heat supplementing and heat storing strategy model and outputting the energy cost.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
In the technical scheme of the invention, the acquisition, storage, application and the like of the personal information of the related user all accord with the regulations of related laws and regulations without violating the good customs of the public order.
According to an embodiment of the invention, the invention further provides an electronic device.
FIG. 8 shows a schematic block diagram of an electronic device 800 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
The apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM)802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The calculation unit 801 executes the respective methods and processes described above, such as the methods S101 to S104. For example, in some embodiments, methods S101-S104 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto device 800 via ROM 802 and/or communications unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the methods S101-S104 described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the methods S101-S104 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present invention may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The electric boiler heat supplementing and heat storing cost optimization method based on electric power big data is characterized by comprising the following steps of:
acquiring weather forecast data, equipment operation data and electric power acquisition data, and preprocessing the acquired data;
carrying out cost characteristic extraction and operation efficiency characteristic extraction on the preprocessed data, carrying out dimensionality reduction on the extracted characteristics by using a PCA (principal component analysis) algorithm, and dividing the data subjected to dimensionality reduction into training set data and test set data;
establishing a BP neural network model, inputting the training set data into the BP neural network model for training, and taking the trained neural network model as a heat supplementing and storing strategy model;
and inputting the test set data into the heat supplementing and heat storing strategy model, and outputting the energy cost.
2. The method according to claim 1, wherein the preprocessing the acquired data includes:
identifying missing data from the acquired data, and supplementing the missing data;
and carrying out normalization processing on the data subjected to the filling-up to obtain preprocessed data.
3. The method of claim 2, wherein the identifying missing data from the acquired data, and the complementing the missing data comprises:
identifying a missing data group object corresponding to the missing data to obtain a missing data group object set;
calculating a similarity class for each missing dataset object in the missing dataset object set from the complete dataset object set;
if the similarity class of the missing data group object does not exist in the complete data group object set, deleting the missing data group object from the missing data group object set;
if only one data group object exists in the similar class of the missing data group object, filling up the missing data of the missing data group object according to the numerical value in the similar class;
if the similarity class of the missing data group object has a plurality of data group objects, calculating the mode of the numerical values in the similarity class corresponding to the missing data, and supplementing the missing data of the missing data group object according to the mode.
4. The method of claim 1, wherein the using the PCA algorithm to perform the dimensionality reduction on the extracted features comprises:
converting the preprocessed data into a one-dimensional array according to the sequence of weather forecast data, equipment operation data and power consumption data;
calculating a variance matrix, and decomposing the eigenvalues of the cost characteristic and the operation efficiency characteristic; arranging the eigenvectors of the cost characteristic and the operation efficiency characteristic in a descending order according to the magnitude of the corresponding eigenvalue;
and forming a projection matrix by using the eigenvectors corresponding to the maximum eigenvalue, and projecting the one-dimensional array to the projection matrix to obtain the dimensionality reduction representation of the one-dimensional array.
5. The method of claim 1, wherein the BP neural network model employs an M-P neuron structure, the activation function is a Logistic function, and the optimizer is lbfgs.
6. The method of claim 1, wherein inputting the training set data into the BP neural network model for training comprises:
inputting the training set data into an input layer of the BP neural network model, respectively calculating the input and output of a hidden layer and an output layer through the forward propagation of signals, and outputting the energy consumption cost by the output layer;
calculating the output error of each layer of neuron of the BP neural network model in a reverse direction from the output layer, and correcting the weight and the threshold of the hidden layer and the output layer once by an error gradient descent method;
and carrying out secondary correction on the weight and the threshold of the hidden layer and the output layer after the primary adjustment to obtain a corrected BP neural network model.
7. The method according to claim 6, wherein the performing the second modification on the weights and thresholds of the hidden layer and the output layer after the first adjustment comprises:
Δw_ij(k+1) = (1 - mc)·η·δ_i·p_j + mc·Δw_ij(k)

Δθ_i(k+1) = (1 - mc)·η·δ_i + mc·Δθ_i(k)

where k is the number of training iterations; mc is the momentum factor, taken as 0.95; p_j is the output of node j of the previous layer; Δθ_i(k) is the first correction of the hidden-layer threshold; Δw_ij(k) is the first correction of the hidden-layer weight; Δw_ij(k+1) is the second correction of the hidden-layer weight; Δθ_i(k+1) is the second correction of the hidden-layer threshold; η is the learning rate; and δ_i is a coefficient computed as:

δ_i = y_i(1 - y_i)(d_i - y_i)

where d_i is the target output of neuron node i and y_i is the actual output of neuron node i.
8. The method of claim 1, further comprising:
storing the heat supplementing and heat storing strategy model through a model graph data structure and a model variable value data structure; wherein the model graph data structure comprises:
the first part is used for storing layer information of the model, wherein each layer of information occupies a data block, and a layer serial number, the number of neurons of the current layer and neuron description information are sequentially recorded in the data block;
a second part for storing the parameter propagation direction data of the model; the parameter propagation direction data is described through the connection relations between the front-layer neurons and the target neurons;
the model variable value data structure comprising:
describing each neuron by a neuron number, a bias and a weight array; the weight array is an object array in which each object records a front neuron and the weight of the corresponding signal transmission direction.
9. An electric boiler heat supplementing and heat storing cost optimization device based on electric power big data, characterized by comprising:
the acquisition module is used for acquiring weather forecast data, equipment operation data and electric power acquisition data and preprocessing the acquired data;
the dimensionality reduction module is used for extracting cost characteristics and operating efficiency characteristics of the preprocessed data, performing dimensionality reduction on the extracted characteristics by using a Principal Component Analysis (PCA) algorithm, and dividing the data subjected to dimensionality reduction into training set data and test set data;
the training module is used for establishing a BP neural network model, inputting the training set data into the BP neural network model for training, and taking the trained neural network model as a heat supplementing and storing strategy model;
and the output module is used for inputting the test set data into the heat supplementing and heat storing strategy model and outputting the energy cost.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
characterized in that the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
CN202111092993.6A 2021-09-17 2021-09-17 Electric boiler heat supplementing and heat storing cost optimization method and device based on electric power big data Pending CN113624998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111092993.6A CN113624998A (en) 2021-09-17 2021-09-17 Electric boiler heat supplementing and heat storing cost optimization method and device based on electric power big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111092993.6A CN113624998A (en) 2021-09-17 2021-09-17 Electric boiler heat supplementing and heat storing cost optimization method and device based on electric power big data

Publications (1)

Publication Number Publication Date
CN113624998A 2021-11-09

Family

ID=78390348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111092993.6A Pending CN113624998A (en) 2021-09-17 2021-09-17 Electric boiler heat supplementing and heat storing cost optimization method and device based on electric power big data

Country Status (1)

Country Link
CN (1) CN113624998A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115841201A (en) * 2022-09-16 2023-03-24 呼伦贝尔安泰热电有限责任公司海拉尔热电厂 Heat supply network loss prediction method and system considering heat supply network characteristics in alpine region
CN117237034A (en) * 2023-11-10 2023-12-15 宁德时代新能源科技股份有限公司 Method, device, computer equipment and storage medium for determining electricity cost

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102705303A (en) * 2012-05-16 2012-10-03 北京航空航天大学 Fault location method based on residual and double-stage Elman neural network for hydraulic servo system
CN109994179A (en) * 2019-04-02 2019-07-09 上海交通大学医学院附属新华医院 The medication and device of vancomycin
CN110297269A (en) * 2018-03-23 2019-10-01 中国石油化工股份有限公司 A kind of bi-directional predicted interpolation method of seismic data based on Speed Controlling Based on Improving BP Neural Network
CN112268312A (en) * 2020-10-23 2021-01-26 哈尔滨派立仪器仪表有限公司 Intelligent heat supply management system based on deep learning
CN112700324A (en) * 2021-01-08 2021-04-23 北京工业大学 User loan default prediction method based on combination of Catboost and restricted Boltzmann machine
CN112784967A (en) * 2021-01-29 2021-05-11 北京百度网讯科技有限公司 Information processing method and device and electronic equipment
CN112836370A (en) * 2021-02-03 2021-05-25 北京百度网讯科技有限公司 Heating system scheduling method, apparatus, device, storage medium, and program product

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102705303A (en) * 2012-05-16 2012-10-03 北京航空航天大学 Fault location method based on residual and double-stage Elman neural network for hydraulic servo system
CN110297269A (en) * 2018-03-23 2019-10-01 中国石油化工股份有限公司 A kind of bi-directional predicted interpolation method of seismic data based on Speed Controlling Based on Improving BP Neural Network
CN109994179A (en) * 2019-04-02 2019-07-09 上海交通大学医学院附属新华医院 The medication and device of vancomycin
CN112268312A (en) * 2020-10-23 2021-01-26 哈尔滨派立仪器仪表有限公司 Intelligent heat supply management system based on deep learning
CN112700324A (en) * 2021-01-08 2021-04-23 北京工业大学 User loan default prediction method based on combination of Catboost and restricted Boltzmann machine
CN112784967A (en) * 2021-01-29 2021-05-11 北京百度网讯科技有限公司 Information processing method and device and electronic equipment
CN112836370A (en) * 2021-02-03 2021-05-25 北京百度网讯科技有限公司 Heating system scheduling method, apparatus, device, storage medium, and program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AURELIEN GERON et al.: "Deep Learning: Elementary Tutorial for Engineer Certification" (《深度学习 工程师认证初级教程》), Beihang University Press, 31 October 2020, pages 278-279 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115841201A (en) * 2022-09-16 2023-03-24 呼伦贝尔安泰热电有限责任公司海拉尔热电厂 Heat supply network loss prediction method and system considering heat supply network characteristics in alpine region
CN115841201B (en) * 2022-09-16 2023-10-31 呼伦贝尔安泰热电有限责任公司海拉尔热电厂 Heat supply network loss prediction method and system considering heat supply network characteristics in alpine region
CN117237034A (en) * 2023-11-10 2023-12-15 宁德时代新能源科技股份有限公司 Method, device, computer equipment and storage medium for determining electricity cost
CN117237034B (en) * 2023-11-10 2024-02-09 宁德时代新能源科技股份有限公司 Method, device, computer equipment and storage medium for determining electricity cost

Similar Documents

Publication Publication Date Title
CN111537945B (en) Intelligent ammeter fault diagnosis method and equipment based on federal learning
US11409347B2 (en) Method, system and storage medium for predicting power load probability density based on deep learning
WO2023123941A1 (en) Data anomaly detection method and apparatus
CN113624998A (en) Electric boiler heat supplementing and heat storing cost optimization method and device based on electric power big data
CN113837308B (en) Knowledge distillation-based model training method and device and electronic equipment
Lv et al. Very short-term probabilistic wind power prediction using sparse machine learning and nonparametric density estimation algorithms
CN104239964A (en) Ultra-short-period wind speed prediction method based on spectral clustering type and genetic optimization extreme learning machine
CN111460001A (en) Theoretical line loss rate evaluation method and system for power distribution network
CN110781970A (en) Method, device and equipment for generating classifier and storage medium
Ibragimovich et al. Effective recognition of pollen grains based on parametric adaptation of the image identification model
Chen et al. PR-KELM: Icing level prediction for transmission lines in smart grid
CN110059938B (en) Power distribution network planning method based on association rule driving
CN116910573B (en) Training method and device for abnormality diagnosis model, electronic equipment and storage medium
Wei et al. Energy financial risk early warning model based on Bayesian network
CN114118370A (en) Model training method, electronic device, and computer-readable storage medium
CN114004530A (en) Enterprise power credit score modeling method and system based on sequencing support vector machine
WO2021243930A1 (en) Method for identifying composition of bus load, and machine-readable storage medium
CN113935413A (en) Distribution network wave recording file waveform identification method based on convolutional neural network
CN116776209A (en) Method, system, equipment and medium for identifying operation state of gateway metering device
Zhai et al. Combining PSO-SVR and Random Forest Based Feature Selection for Day-ahead Peak Load Forecasting.
CN116702839A (en) Model training method and application system based on convolutional neural network
CN109784748B (en) User electricity consumption behavior identification method and device under market competition mechanism
CN116777646A (en) Artificial intelligence-based risk identification method, apparatus, device and storage medium
CN116404212A (en) Capacity equalization control method and system for zinc-iron flow battery system
CN116010875A (en) Method and device for classifying ammeter faults, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220505

Address after: 110006 No. 18, Ningbo Road, Heping District, Liaoning, Shenyang

Applicant after: STATE GRID LIAONING ELECTRIC POWER SUPPLY Co.,Ltd.

Applicant after: Liaoning xuneng Technology Co., Ltd

Address before: 110000 room 315, No. 7-4, Jinke street, Hunnan New District, Shenyang, Liaoning

Applicant before: Liaoning xuneng Technology Co.,Ltd.

TA01 Transfer of patent application right