CN111178630A - Load prediction method and device - Google Patents

Load prediction method and device

Info

Publication number
CN111178630A
CN111178630A (application CN201911401100.4A)
Authority
CN
China
Prior art keywords
data
load
predicted value
vector
encoder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911401100.4A
Other languages
Chinese (zh)
Inventor
赵蕾 (Zhao Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinao Shuneng Technology Co Ltd
Original Assignee
Xinao Shuneng Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinao Shuneng Technology Co Ltd filed Critical Xinao Shuneng Technology Co Ltd
Priority to CN201911401100.4A
Publication of CN111178630A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Water Supply & Treatment (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of energy and provides a load prediction method and device. The method comprises the following steps: preprocessing the acquired original data of the comprehensive energy system to obtain preprocessed influence factor data; performing feature extraction on the influence factor data to obtain the feature vectors corresponding to the influence factor data; and decoding the feature vectors to obtain the load predicted values corresponding to them. The method reduces the complexity of the data and improves the generalization capability of the model; at the same time, self-sampling (pooling) processing is adopted, which avoids overfitting, increases the robustness of data processing and improves the accuracy of the overall data processing. The overall operation is simple, ensuring accurate, convenient and fast prediction while saving processing resources.

Description

Load prediction method and device
Technical Field
The invention belongs to the technical field of energy, and particularly relates to a load prediction method and a load prediction device.
Background
With the continuous development of society, the combined cooling, heating and power system, as a novel comprehensive energy system, can not only meet the cooling, heating and power load demands within a region, but also effectively improve the utilization rate of energy and reduce the emission of gases such as carbon dioxide and sulfur dioxide. Mastering the variation relationships among different loads according to the operating characteristics of the comprehensive energy system is of great significance. In order to grasp the relationships among the electric load, the cold load, the heat load and natural gas, a load prediction method that predicts the correlations among them is needed to realize short-term load prediction in the energy system.
Disclosure of Invention
In view of this, embodiments of the present invention provide a load prediction method, a load prediction apparatus, a terminal device, and a computer-readable storage medium, so as to solve the technical problem that correlations between electrical loads, cold loads, heat loads, and natural gas cannot be accurately predicted in the prior art.
In a first aspect of the embodiments of the present invention, a load prediction method is provided, including:
preprocessing acquired original data of the comprehensive energy system to acquire preprocessed influence factor data, wherein the original data at least comprises historical electric load data and historical natural gas load data;
extracting the characteristics of the influence factor data to obtain characteristic vectors corresponding to the influence factor data;
and decoding the characteristic vector to obtain a load predicted value corresponding to the characteristic vector, wherein the load predicted value at least comprises an electric load predicted value, a cold load predicted value and a heat load predicted value.
In a second aspect of the embodiments of the present invention, there is provided a load prediction apparatus, including:
the influence factor data acquisition module is used for preprocessing the acquired raw data of the comprehensive energy system and acquiring the preprocessed influence factor data, wherein the raw data at least comprises historical electric load data and historical natural gas load data;
the characteristic vector acquisition module is used for extracting characteristics of the influence factor data to acquire a characteristic vector corresponding to the influence factor data;
and the load predicted value acquisition module is used for decoding the characteristic vector to acquire a load predicted value corresponding to the characteristic vector, wherein the load predicted value at least comprises an electric load predicted value, a cold load predicted value and a heat load predicted value.
In a third aspect of the embodiments of the present invention, there is provided a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the load prediction method when executing the computer program.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the load prediction method.
The load prediction method provided by the embodiment of the invention has at least the following beneficial effects:
(1) according to the embodiment of the invention, firstly, the acquired original data of the comprehensive energy system is preprocessed to obtain influence factor data, and secondly, the characteristic extraction is carried out on the influence factor data to acquire the characteristic vector corresponding to the influence factor data; and finally, decoding the characteristic vector to obtain a load predicted value corresponding to the characteristic vector.
(2) The embodiment of the invention reduces the complexity of the data and improves the generalization capability of the model; at the same time, self-sampling (pooling) processing is adopted, which avoids overfitting, increases the robustness of data processing and improves the accuracy of the overall data processing. The overall operation is simple, ensuring accurate, convenient and fast prediction while saving processing resources.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a first schematic flow chart illustrating an implementation process of a load prediction method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating an implementation process of obtaining a feature vector corresponding to the influence factor data in the load prediction method according to the embodiment of the present invention;
fig. 3 is a schematic flow chart illustrating an implementation process of obtaining a load prediction value corresponding to the feature vector in the load prediction method according to the embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating an implementation process of the load prediction method according to the embodiment of the present invention;
fig. 5 is a schematic flow chart illustrating an implementation process of acquiring an encoder and a decoder meeting preset requirements for load prediction in the load prediction method according to the embodiment of the present invention;
fig. 6 is a schematic diagram of an implementation flow of determining whether an updated parameter meets a preset requirement in the load prediction method according to the embodiment of the present invention;
FIG. 7 is a first schematic diagram of a load prediction apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a feature vector obtaining module in the load prediction apparatus according to the embodiment of the present invention;
fig. 9 is a schematic diagram of a load prediction value obtaining module in the load prediction apparatus according to the embodiment of the present invention;
FIG. 10 is a second schematic diagram of a load forecasting apparatus according to an embodiment of the present invention;
FIG. 11 is a diagram illustrating a training module of a load prediction apparatus according to an embodiment of the present invention;
fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention, are within the scope of the invention. Unless otherwise specified, the technical means used in the examples are conventional means well known to those skilled in the art.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Referring to fig. 1, a first implementation flow diagram of a load prediction method according to an embodiment of the present invention is shown, where the method may include:
step S10: preprocessing the acquired raw data of the comprehensive energy system to acquire preprocessed influence factor data, wherein the raw data at least comprises historical electric load data and historical natural gas load data.
In the step of preprocessing the acquired raw data of the comprehensive energy system, the raw data is mapped using the min-max normalization method to obtain the preprocessed influence factor data.
In order to eliminate the influence of the different orders of magnitude, units and so on of each influence factor on the prediction, the input and output data of the load prediction are mapped into the range [0, 1] using the min-max normalization method, yielding the preprocessed influence factor data and facilitating the subsequent steps. Of course, in other embodiments the mapping range may differ and is not limited to the above case.
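As an illustration, the min-max mapping described above (and the reverse normalization used later in step S403) can be sketched in plain Python. This is a minimal sketch: the function names and the default [0, 1] range are our assumptions, not the patent's.

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    """Map a sequence of raw readings into [lo, hi] (min-max normalization).

    Returns the scaled values plus the (min, max) bounds needed to invert
    the mapping later, e.g. on the predicted load values.
    """
    v_min, v_max = min(values), max(values)
    span = v_max - v_min
    if span == 0:  # constant series: map everything to the lower bound
        return [lo for _ in values], (v_min, v_max)
    scaled = [lo + (hi - lo) * (v - v_min) / span for v in values]
    return scaled, (v_min, v_max)

def min_max_invert(scaled, bounds, lo=0.0, hi=1.0):
    """Undo min_max_scale (the 'reverse normalization' of step S403)."""
    v_min, v_max = bounds
    return [v_min + (s - lo) * (v_max - v_min) / (hi - lo) for s in scaled]
```

For example, the loads `[120.0, 180.0, 150.0]` map to `[0.0, 1.0, 0.5]`, and inverting with the stored bounds recovers the original readings.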
The correlation between natural gas and load: the comprehensive energy system generally consists of three parts, namely an energy input layer, an energy consumption output layer and an intermediate energy conversion layer. In a typical comprehensive energy system framework, the input layer is mainly supplied by natural gas and electric energy; the intermediate layer generates, converts and stores energy through energy conversion equipment; and the output layer meets the region's demand for electric, cooling and heating loads. The load is supplied from two sources: one part comes from the external grid, and the other part is provided by the trigeneration (combined cooling, heating and power) unit inside the comprehensive energy system. Since the input energy of the system is natural gas, the load and the natural gas consumption can be considered positively correlated.
Referring to fig. 1, after acquiring the preprocessed influencing factor data, the following steps may be performed:
step S30: and performing feature extraction on the influence factor data to obtain a feature vector corresponding to the influence factor data.
Short-term load prediction is based on the autoencoder framework. An autoencoder uses its encoder to extract features from complex data, and a decoder placed after it handles tasks such as data classification, regression and generation. Here, an encoder built from several convolutional layers extracts the main features affecting the load, and the extracted features serve as the input of a decoder composed of long short-term memory layers to realize short-term load prediction.
Further, in order to obtain the feature vector corresponding to the influence factor data, feature extraction needs to be performed on the preprocessed influence factor data. Fig. 2 is a schematic diagram of an implementation flow of obtaining the feature vector corresponding to the influence factor data in the load prediction method according to the embodiment of the present invention. In this embodiment, feature extraction is performed on the preprocessed influence factor data to obtain neurons, and self-sampling (pooling) is applied to the neurons to obtain the feature vectors. One way to obtain the feature vector corresponding to the influence factor data may include the following steps:
step S301: and (4) carrying out feature extraction on the preprocessed influence factor data to obtain the neuron.
The neuron is obtained as follows:

$a_p = f\left(\sum_{i=1}^{m} w_i\, x_{p+i-1}\right)$

where $a_p$ denotes the neuron processed by the activation function $f$; $w = (w_1, \ldots, w_m)$ denotes the weight vector; and $x = (x_1, \ldots, x_n)$ denotes the input vector.
The encoder is constructed by a one-dimensional convolution network with strong feature extraction capability, and input influence factors (time, temperature, humidity, wind speed, visibility, pressure and historical electric and gas loads) are projected into a group of feature vectors through the encoder.
Convolutional layer based encoders. The convolutional layer network has the advantages of reducing the complexity of data and improving the generalization capability of a prediction model, and is selected as the structure of an encoder. The data input by the encoder is a discrete 1-dimensional element sequence, so that the 1-dimensional convolutional layer is selected to extract the main characteristics influencing short-term load prediction. Since the input of the 1-dimensional convolution layer is a 1-dimensional vector, the feature map and convolution kernel inside the convolution network are also 1-dimensional.
The convolutional layer can be regarded as an operation between a weight vector $w_p$, $p = 1, 2, \ldots, m$, and an input vector $x_p$, $p = 1, 2, \ldots, n$, where $m$ is the size of the weight vector (the convolution kernel) and $n$ is the length of the input vector; $n$ is required to be much larger than $m$ so as to ensure that every input influence factor is processed by the convolution operation. The convolution output is:

$\hat{a}_p = \sum_{i=1}^{m} w_i\, x_{p+i-1}$

and $\hat{a}_p$ is processed by an activation function to obtain $a_p$. In the convolutional layer, each neuron of layer $l$ is connected to a local window of neurons in the preceding layer $l-1$, forming a locally connected network; the input of the $i$-th neuron of layer $l$ is

$z_i^{(l)} = \mathrm{ReLU}\!\left(\sum_{p=1}^{m} w_p\, a_{i+p-1}^{(l-1)} + b\right)$

where $b$ denotes a bias term and $\mathrm{ReLU}$ is the activation function.
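The sliding-window convolution with ReLU activation described above can be sketched in plain Python. This is an illustrative sketch only (the patent specifies no implementation, and the function name is ours):

```python
def conv1d_relu(x, w, b=0.0):
    """Valid 1-D convolution (sliding dot product) followed by ReLU,
    i.e. a_p = ReLU(sum_i w_i * x_{p+i-1} + b).

    x: input sequence of length n; w: kernel of length m (n >= m).
    Returns n - m + 1 activations.
    """
    m, n = len(w), len(x)
    assert n >= m, "input must be at least as long as the kernel"
    out = []
    for p in range(n - m + 1):
        z = sum(w[i] * x[p + i] for i in range(m)) + b  # window dot product
        out.append(max(0.0, z))                          # ReLU activation
    return out
```

For instance, an averaging kernel `[0.5, 0.5]` applied to `[1.0, 3.0, 5.0]` yields `[2.0, 4.0]`.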
After obtaining the neurons, the following steps may be performed:
step S302: and carrying out self-sampling processing on the neurons to obtain the characteristic vectors.
The feature vector is obtained as follows:

$\mathrm{pool}_{\max}(R_k) = \max_{i \in R_k} a_i$

where $R_k$ denotes the regions into which the convolutional layer output is divided.
The convolutional layer is followed by a self-sampling (pooling) layer to avoid overfitting and to increase the robustness of the network. The method adds one max-pooling layer after the convolutional layer, defined as:

$\mathrm{pool}_{\max}(R_k) = \max_{i \in R_k} a_i$

where $R_k$ denotes the regions into which the convolutional layer output is divided.
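The pooling definition above can be sketched as follows (illustrative; the region width `k` and the non-overlapping split are our assumptions — the patent only says the output is divided into regions):

```python
def max_pool(a, k):
    """Split the convolution output a into consecutive regions R_k of
    width k and keep the maximum activation of each region (pool_max).
    A trailing remainder shorter than k is dropped."""
    return [max(a[i:i + k]) for i in range(0, len(a) - k + 1, k)]
```

For example, pooling `[1, 5, 2, 4, 3, 0]` with width 2 keeps `[5, 4, 3]`.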
Referring to fig. 1, further, after obtaining the feature vector corresponding to the influencing factor data, the following steps may be performed:
step S40: and decoding the characteristic vector to obtain a load predicted value corresponding to the characteristic vector, wherein the load predicted value at least comprises an electric load predicted value, a cold load predicted value and a heat load predicted value.
A decoder based on a long short-term memory network. The long short-term memory (LSTM) network can learn long-distance dependencies and capture the temporal correlations within a time series, and is widely used for 1-dimensional time-series prediction. The invention uses long short-term memory layers as the decoder to establish a mapping between features and loads.
The decoder is constructed by using a long-short term memory network, and the nonlinear relation between the characteristic vector and the load is established by the decoder, so that the function of automatically encoding the sequence to the sequence is realized.
Further, in order to obtain the load prediction value corresponding to the feature vector, the feature vector needs to be decoded. Please refer to fig. 3, which is a schematic diagram illustrating an implementation flow of obtaining a load prediction value corresponding to the feature vector in the load prediction method according to the embodiment of the present invention, in this embodiment, the feature vector is decoded to obtain load data; acquiring an initial load predicted value corresponding to the characteristic vector according to the load data; and performing reverse normalization processing on the initial load predicted value corresponding to the characteristic vector to obtain the load predicted value corresponding to the characteristic vector. One way to obtain the load prediction value corresponding to the feature vector may include the following steps:
step S401: and decoding the characteristic vector to obtain load data.
The load data is obtained as follows:

$f_t = \sigma_g(W_f x_t + U_f h_{t-1} + b_f)$

$i_t = \sigma_g(W_i x_t + U_i h_{t-1} + b_i)$

$o_t = \sigma_g(W_o x_t + U_o h_{t-1} + b_o)$

$c_t = f_t * c_{t-1} + i_t * \sigma_c(W_c x_t + U_c h_{t-1} + b_c)$

$h_t = o_t * \sigma_c(c_t)$

where $b$ denotes a bias vector; $W$ and $U$ denote weight matrices; $\sigma_g$ denotes the sigmoid (S-shaped growth curve) function; $\sigma_c$ denotes the hyperbolic tangent function; $f_t$ denotes the activation vector of the forget gate; $i_t$ denotes the activation vector of the input gate; $o_t$ denotes the activation vector of the output gate; $t$ denotes the time step; and $h_t$ denotes the load data.
The long short-term memory network comprises three main gate structures: an input gate, a forget gate and an output gate. As each data feature enters the long short-term memory network, rules judge whether it is useful; information that meets the algorithm's requirements is retained, otherwise it is forgotten. The input parameters of the long short-term memory network at time $t$ include the output $h_{t-1}$ of the network at the previous moment, the neuron state vector $c_{t-1}$ of the previous moment, and the input vector $x_t$ of the network; the input-to-output relationship is realized by the three gates.
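The five gate equations above can be sketched as a single LSTM cell step in NumPy. This is an illustrative sketch: the dict-based weight layout keyed by gate name is our own convention, not the patent's.

```python
import numpy as np

def sigmoid(z):
    """S-shaped growth curve sigma_g."""
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One step of the LSTM gate equations.

    W, U, b are dicts keyed by gate name ('f', 'i', 'o', 'c'), holding the
    input weights, recurrent weights and biases respectively.
    Returns the new hidden state h_t (the load data) and cell state c_t.
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])  # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])  # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])  # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])
    h_t = o_t * np.tanh(c_t)                                 # new hidden state
    return h_t, c_t
```

Because $h_t$ is a sigmoid-gated hyperbolic tangent, each of its components lies strictly inside $(-1, 1)$.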
After acquiring the load data, the following steps may be performed:
step S402: and acquiring an initial load predicted value corresponding to the characteristic vector according to the load data.
After the initial load prediction value corresponding to the feature vector is obtained, the following steps can be performed:
step S403: and performing reverse normalization processing on the initial load predicted value corresponding to the characteristic vector to obtain the load predicted value corresponding to the characteristic vector.
Short-term load prediction flow based on the autoencoder. The autoencoder consists of two main parts, the encoder and the decoder. Several convolutional layers and a max-pooling layer constitute the encoder, with one Dropout layer inserted between the max-pooling layer and the next convolutional layer to prevent overfitting. Several long short-term memory layers form the decoder, and finally a fully connected layer is attached at the output.
Dropout means that during the training of a deep-learning network, a portion of the neural network units are temporarily dropped from the network with a certain probability, which is equivalent to finding a thinner sub-network within the original network.
Large-scale neural networks have two main disadvantages: they are time-consuming to train and prone to overfitting.
For a neural network with $N$ nodes, applying Dropout lets the network be regarded as an ensemble of $2^N$ sub-models, while the number of parameters to be trained stays unchanged, which alleviates the time-consumption problem.
By analogy: asexual reproduction preserves large segments of excellent genes, while sexual reproduction randomly splits and recombines genes, destroying the joint adaptability of large gene segments; yet natural selection favored sexual reproduction ("survival of the fittest"), which shows the power of sexual reproduction. Dropout achieves a similar effect: by forcing each neural unit to work together with randomly selected other units, it eliminates the joint adaptability between neural nodes and enhances generalization.
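The Dropout behaviour described above can be sketched as follows. This is an "inverted dropout" sketch, in which surviving activations are rescaled by $1/(1-p)$ so the expected activation is unchanged; that rescaling convention is our assumption, not stated in the patent.

```python
import random

def dropout(activations, p_drop, training=True):
    """Inverted dropout: zero each unit with probability p_drop during
    training and rescale survivors by 1/(1 - p_drop) so the expected
    sum of activations is unchanged. At inference time it is a no-op."""
    if not training or p_drop == 0.0:
        return list(activations)
    keep = 1.0 - p_drop
    return [a / keep if random.random() >= p_drop else 0.0
            for a in activations]
```

With `p_drop=0.5`, roughly half the units are zeroed on each pass and the survivors are doubled.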
Referring to fig. 4, a schematic flow chart of an implementation process of the load prediction method according to the embodiment of the present invention is shown, and further, before the step of performing feature extraction on the influence factor data to obtain the feature vector corresponding to the influence factor data, the method may further include: training the initial encoder and the initial decoder to obtain an encoder and a decoder satisfying preset requirements for the load prediction step, which may include the steps of:
step S20: the initial encoder and the initial decoder are trained to obtain encoders and decoders meeting preset requirements for load prediction.
Further, in order to obtain an encoder and a decoder satisfying preset requirements for load prediction, an initial encoder and an initial decoder need to be trained. Please refer to fig. 5, which is a schematic flow chart illustrating an implementation process of acquiring an encoder and a decoder meeting a preset requirement for load prediction in the load prediction method according to the embodiment of the present invention. In the embodiment, the influence factor data is segmented to obtain training data and test data; training an initial encoder by adopting training data to obtain a trained encoder and a feature vector of the training data; training an initial decoder by adopting a trained encoder and a feature vector of training data to obtain a trained decoder and a training predicted value; testing the trained encoder and the trained decoder with test data to determine encoders and decoders meeting preset requirements for load prediction. One way to obtain encoders and decoders meeting preset requirements for load prediction may include the steps of:
step S201: and segmenting the influence factor data to obtain training data and test data.
After the training data and the test data are acquired, the following steps may be performed:
step S202: training the initial encoder by using the training data to obtain the trained encoder and the feature vector of the training data.
After obtaining the feature vectors of the trained encoder and training data, the following steps may be performed:
step S203: and training an initial decoder by adopting the trained encoder and the feature vector of the training data to obtain the trained decoder and a training prediction value.
After obtaining the trained decoder and the training predictors, the following steps may be performed:
step S204: and testing the trained encoder and the trained decoder by using the test data to determine that the encoder and the decoder meeting preset requirements are used for load prediction.
The loss function is calculated by comparing the load data output by the decoder with the real load; the weights of the encoder and decoder networks are then updated in reverse (backpropagated) according to the error, and training is repeated until the iterations are complete.
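The compare-loss-then-update-in-reverse loop above can be sketched with a toy linear "encoder" and "decoder" trained by gradient descent on the MSE loss. This is purely illustrative: the patent's actual encoder and decoder are convolutional and LSTM networks, and every name, dimension and hyperparameter here is an assumption of ours.

```python
import numpy as np

def train_autoencoder_predictor(X, y, d_feat=2, epochs=200, lr=0.05, seed=0):
    """Toy stand-in for step S20: a linear 'encoder' E maps each row of
    influence factors in X to a feature vector, a linear 'decoder' d maps
    the feature vector to a load value; both are updated in reverse from
    the MSE loss against the real load y until the iterations complete."""
    rng = np.random.default_rng(seed)
    n, d_in = X.shape
    E = 0.1 * rng.standard_normal((d_feat, d_in))  # encoder weights
    d = 0.1 * rng.standard_normal(d_feat)          # decoder weights
    losses = []
    for _ in range(epochs):
        Z = X @ E.T                 # feature vectors (encoder output)
        pred = Z @ d                # load prediction (decoder output)
        err = pred - y
        losses.append(float(np.mean(err ** 2)))  # loss vs. the real load
        # update in reverse: gradients of the MSE w.r.t. decoder and encoder
        grad_d = (2.0 / n) * (Z.T @ err)
        grad_E = (2.0 / n) * np.outer(d, X.T @ err)
        d -= lr * grad_d
        E -= lr * grad_E
    return E, d, losses
```

On synthetic linear data the recorded loss shrinks over the iterations, mirroring the repeat-until-done training described above.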
Further, in order to determine that the encoder and the decoder satisfying the preset requirements are used for the load prediction, the initial encoder and the initial decoder need to be trained. Fig. 6 is a schematic flow chart illustrating an implementation process for determining whether the updated parameter meets the preset requirement in the load prediction method according to the embodiment of the present invention. In this embodiment, the test data is input into the trained encoder to obtain a feature vector of the test data; inputting the feature vector of the test data into the trained decoder to obtain a test predicted value; acquiring updated parameters according to the training predicted value and the test predicted value; judging whether the updated parameters meet preset requirements or not; if the updated parameters meet preset requirements, determining that the trained encoder and the trained decoder are respectively used for an encoder and a decoder for load prediction; and if the updated parameters do not meet the preset requirements, returning to the step of training the initial encoder by adopting the training data. One way of determining which encoders and decoders meet preset requirements for load prediction may include the steps of:
step S2041: inputting the test data into the trained encoder to obtain a feature vector of the test data.
After obtaining the feature vectors of the test data, the following steps may be performed:
step S2042: and inputting the feature vector of the test data into the trained decoder to obtain a test predicted value.
After obtaining the test prediction value, the following steps may be performed:
step S2043: and acquiring updated parameters according to the training predicted value and the test predicted value.
After obtaining the updated parameters, the following steps may be performed:
step S2044: and judging whether the updated parameters meet preset requirements or not.
If the updated parameters meet the preset requirement, step S2045: determine that the trained encoder and the trained decoder are to be used as the encoder and the decoder for load prediction, respectively.
If the updated parameter does not meet the preset requirement, step S2046: and returning to the step of training the initial encoder by adopting the training data.
It should be understood that the present method is not limited to a cold-hot cogeneration apparatus and/or system, but may be applied to any other apparatus and/or system that is compatible with the present method, and is not limited thereto.
It should be understood that the above-mentioned letters and/or symbols are only used for the purpose of clearly explaining the meaning of specific parameters of the device or method, and other letters or symbols can be used for representation, and are not limited herein.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
The load prediction method provided by the embodiment of the invention has at least the following beneficial effects. First, the acquired raw data of the integrated energy system is preprocessed to obtain influence factor data; second, feature extraction is performed on the influence factor data to obtain the feature vector corresponding to the influence factor data; finally, the feature vector is decoded to obtain the load predicted value corresponding to the feature vector. The method reduces the complexity of the data and improves the generalization capability of the model. Meanwhile, self-sampling processing is adopted, which avoids overfitting, increases the robustness of the data processing, and improves the accuracy of the overall data processing. The overall operation is simple, ensuring accurate, convenient and rapid prediction while saving processing resources.
Fig. 7 is a first schematic diagram of the load prediction apparatus according to the embodiment of the present invention, and for convenience of explanation, only the portions related to the embodiment of the present application are shown.
Referring to fig. 7, a load prediction apparatus includes an influence factor data obtaining module 51, a feature vector obtaining module 53, and a load predicted value obtaining module 54. The influence factor data acquisition module 51 is configured to preprocess acquired raw data of the integrated energy system, and acquire preprocessed influence factor data, where the raw data at least includes historical electric load data and historical natural gas load data; the feature vector obtaining module 53 is configured to perform feature extraction on the influence factor data to obtain a feature vector corresponding to the influence factor data; the load predicted value obtaining module 54 is configured to decode the feature vector, and obtain a load predicted value corresponding to the feature vector, where the load predicted value at least includes an electric load predicted value, a cold load predicted value, and a heat load predicted value.
Referring to fig. 8, the feature vector obtaining module 53 further includes a neuron obtaining unit 531 and a feature vector obtaining unit 532. The neuron acquisition unit 531 is configured to perform feature extraction on the preprocessed influence factor data to acquire neurons; the feature vector obtaining unit 532 is configured to perform self-sampling processing on the neurons to obtain the feature vectors.
Referring to fig. 9, the load predicted value obtaining module 54 further includes a load data determining unit 541, an initial load predicted value determining unit 542, and a load predicted value determining unit 543. The load data determining unit 541 is configured to decode the feature vector to obtain load data; the initial load predicted value determining unit 542 is configured to obtain an initial load predicted value corresponding to the feature vector according to the load data; the load predicted value determining unit 543 is configured to perform inverse normalization processing on the initial load predicted value corresponding to the feature vector, and obtain the load predicted value corresponding to the feature vector.
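The inverse normalization performed by the load predicted value determining unit 543 can be illustrated with a minimal sketch, assuming min-max normalization was used in preprocessing (per claim 2). The function name and the scalar interface are illustrative, not part of the embodiment:

```python
def denormalize_prediction(initial_pred, load_min, load_max):
    """Inverse of min-max normalization (unit 543): maps an initial predicted
    value in [0, 1] back to the original load scale, using the minimum and
    maximum observed in the historical load data."""
    return initial_pred * (load_max - load_min) + load_min
```

For example, with historical loads spanning 100 kW to 300 kW, a decoded value of 0.5 maps back to 200 kW.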
Further, please refer to fig. 10, which is a schematic diagram of a load prediction apparatus according to a second embodiment of the present invention, wherein the load prediction apparatus further includes a training module 52, configured to train an initial encoder and an initial decoder to obtain an encoder and a decoder that meet preset requirements for load prediction.
Referring to fig. 11, the training module 52 further includes a data determining unit 521, a first obtaining unit 522, a second obtaining unit 523, and a testing unit 524. The data determination unit 521 is configured to segment the influencing factor data to obtain training data and test data; the first obtaining unit 522 is configured to train the initial encoder by using training data, and obtain a feature vector of the trained encoder and the training data; the second obtaining unit 523 is configured to train the initial decoder by using the trained encoder and the feature vector of the training data, and obtain a trained decoder and a training prediction value; the testing unit 524 is configured to test the trained encoder and the trained decoder using the test data to determine that the encoder and the decoder satisfying the predetermined requirements are used for load prediction.
Fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in Fig. 12, the terminal device 6 includes a memory 61, a processor 60, and a computer program 62 stored in the memory 61 and operable on the processor 60; the processor 60 implements the steps of the load prediction method, such as steps S10 to S40 shown in Figs. 1 to 6, when executing the computer program 62.
The terminal device 6 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that Fig. 12 is merely an example of the terminal device 6 and does not constitute a limitation thereof; the terminal device may include more or fewer components than shown, combine some components, or use different components. For example, it may also include input/output devices, network access devices, buses, etc.
The Processor 60 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) and the like provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer programs and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
Specifically, the present application further provides a computer-readable storage medium, which may be a computer-readable storage medium contained in the memory of the foregoing embodiments, or a separate computer-readable storage medium not incorporated into the terminal device. The computer-readable storage medium stores one or more computer programs:
a computer program which, when executed by a processor, carries out the steps of the load prediction method.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method of load prediction, comprising:
preprocessing acquired raw data of the integrated energy system to acquire preprocessed influence factor data, wherein the raw data at least comprises historical electric load data and historical natural gas load data;
extracting the characteristics of the influence factor data to obtain characteristic vectors corresponding to the influence factor data;
and decoding the characteristic vector to obtain a load predicted value corresponding to the characteristic vector, wherein the load predicted value at least comprises an electric load predicted value, a cold load predicted value and a heat load predicted value.
2. The load prediction method of claim 1, wherein in the step of preprocessing the raw data of the integrated energy system, the raw data is mapped by a min-max normalization method to obtain the preprocessed data of the influencing factors.
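The min-max mapping named in claim 2 can be sketched as follows; this is a hypothetical Python illustration, since the claim does not specify the implementation:

```python
def min_max_normalize(values):
    """Min-max normalization: x' = (x - min) / (max - min),
    mapping raw load data onto the interval [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Degenerate case: a constant series carries no variation to scale.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```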
3. The load prediction method of claim 2, wherein the performing feature extraction on the influence factor data to obtain a feature vector corresponding to the influence factor data comprises:
performing feature extraction on the preprocessed influence factor data to obtain neurons, wherein the neuron obtaining mode is as follows:
Figure FDA0002347467740000011
wherein a_p (p = 1, 2, ..., m) characterizes a neuron processed by the activation function, w characterizes the weight vector, and x characterizes the input vector with components indexed 1, 2, ..., n;
carrying out self-sampling processing on the neurons to obtain the feature vectors, wherein the feature vector obtaining mode is as follows:
pool_max(R_k) = max_{i ∈ R_k} a_i

wherein R_k characterizes the k-th region into which the convolutional layer output is divided.
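The self-sampling operation pool_max(R_k) = max over a_i for i in R_k can be sketched as follows, assuming the convolutional layer output is split into consecutive equal-size regions (the region scheme is an assumption; the claim only states that the output is divided into regions R_k):

```python
def pool_max(activations, region_size):
    """Max pooling: split the activation sequence into consecutive regions
    R_k of length region_size and keep each region's maximum a_i."""
    return [max(activations[i:i + region_size])
            for i in range(0, len(activations), region_size)]
```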
4. The load prediction method according to claim 1, wherein the decoding the feature vector to obtain the load prediction value corresponding to the feature vector comprises:
decoding the characteristic vector to obtain load data;
acquiring an initial load predicted value corresponding to the characteristic vector according to the load data;
and performing reverse normalization processing on the initial load predicted value corresponding to the characteristic vector to obtain the load predicted value corresponding to the characteristic vector.
5. The load prediction method according to claim 4, wherein in the step of decoding the feature vector to obtain the load data, the load data is obtained by:
f_t = σ_g(W_f x_t + U_f h_{t-1} + b_f)

i_t = σ_g(W_i x_t + U_i h_{t-1} + b_i)

o_t = σ_g(W_o x_t + U_o h_{t-1} + b_o)

c_t = f_t ∗ c_{t-1} + i_t ∗ σ_c(W_c x_t + U_c h_{t-1} + b_c)

h_t = o_t ∗ σ_c(c_t)
wherein b characterizes a bias vector;
U characterizes a recurrent weight matrix applied to the previous hidden state;
σ_g characterizes the sigmoid function (S-shaped growth curve);
σ_c characterizes the hyperbolic tangent function;
f_t characterizes the activation vector of the forget gate;
i_t characterizes the activation vector of the input gate;
o_t characterizes the activation vector of the output gate;
c_t characterizes the cell state vector;
t characterizes the time step t;
h_t characterizes the load data.
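The gate equations of claim 5 can be sketched as a single decoding step; scalar weights are used purely for illustration (in the embodiment, W and U are weight matrices and the gate variables are vectors):

```python
import math

def sigmoid(z):
    """σ_g: the S-shaped growth curve."""
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One decoding step per claim 5, with scalar parameters for illustration.
    p holds the weights W_*, U_* and biases b_* for each gate."""
    f_t = sigmoid(p["W_f"] * x_t + p["U_f"] * h_prev + p["b_f"])  # forget gate
    i_t = sigmoid(p["W_i"] * x_t + p["U_i"] * h_prev + p["b_i"])  # input gate
    o_t = sigmoid(p["W_o"] * x_t + p["U_o"] * h_prev + p["b_o"])  # output gate
    c_t = f_t * c_prev + i_t * math.tanh(
        p["W_c"] * x_t + p["U_c"] * h_prev + p["b_c"])            # cell state
    h_t = o_t * math.tanh(c_t)                                    # load data
    return h_t, c_t
```

Because h_t is the product of a sigmoid output and a hyperbolic tangent, it is always bounded in (-1, 1), which is why the inverse normalization of claim 4 is needed to recover the load scale.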
6. The load prediction method according to claim 1, wherein before the step of extracting the features of the influencing factor data to obtain the feature vector corresponding to the influencing factor data, the method further comprises:
training an initial encoder and an initial decoder to obtain an encoder and a decoder meeting preset requirements for load prediction, comprising:
segmenting the influence factor data to obtain training data and test data;
training an initial encoder by adopting training data to obtain a trained encoder and a feature vector of the training data;
training an initial decoder by using the trained encoder and the feature vector of the training data to obtain a trained decoder and a training predicted value;
and testing the trained encoder and the trained decoder by using the test data to determine that the encoder and the decoder meeting preset requirements are used for load prediction.
7. The load prediction method of claim 6, wherein said testing the trained encoder and the trained decoder with the test data to determine that the encoder and decoder that meet preset requirements are for load prediction comprises:
inputting the test data into the trained encoder to obtain a feature vector of the test data;
inputting the feature vector of the test data into the trained decoder to obtain a test predicted value;
acquiring updated parameters according to the training predicted value and the test predicted value;
judging whether the updated parameters meet preset requirements or not;
if the updated parameters meet preset requirements, determining that the trained encoder and the trained decoder are respectively used for an encoder and a decoder for load prediction;
and if the updated parameters do not meet the preset requirements, returning to the step of training the initial encoder by adopting the training data.
8. A load prediction apparatus, comprising:
the system comprises an influence factor data acquisition module, a data processing module and a data processing module, wherein the influence factor data acquisition module is used for preprocessing acquired raw data of the comprehensive energy system and acquiring preprocessed influence factor data, and the raw data at least comprises historical electric load data and historical natural gas load data;
the characteristic vector acquisition module is used for extracting characteristics of the influence factor data to acquire a characteristic vector corresponding to the influence factor data;
and the load predicted value acquisition module is used for decoding the characteristic vector to acquire a load predicted value corresponding to the characteristic vector, wherein the load predicted value at least comprises an electric load predicted value, a cold load predicted value and a heat load predicted value.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911401100.4A 2019-12-31 2019-12-31 Load prediction method and device Pending CN111178630A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911401100.4A CN111178630A (en) 2019-12-31 2019-12-31 Load prediction method and device


Publications (1)

Publication Number Publication Date
CN111178630A true CN111178630A (en) 2020-05-19

Family

ID=70646495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911401100.4A Pending CN111178630A (en) 2019-12-31 2019-12-31 Load prediction method and device

Country Status (1)

Country Link
CN (1) CN111178630A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100074331A1 (en) * 2008-09-25 2010-03-25 Oki Electtric Industry Co., Ltd. Image encoder, image decoder, and image encoding system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhu Ruijin et al., "Short-term electric load forecasting considering the correlation between natural gas and electric loads", Proceedings of the CSU-EPSA *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861229A (en) * 2020-07-24 2020-10-30 上海交通大学 Method for quickly estimating end state of energy supply of natural gas pipeline based on LSTM
CN111861229B (en) * 2020-07-24 2021-08-06 上海交通大学 Method for quickly estimating end state of energy supply of natural gas pipeline based on LSTM

Similar Documents

Publication Publication Date Title
CN113159147B (en) Image recognition method and device based on neural network and electronic equipment
CN112069319B (en) Text extraction method, text extraction device, computer equipment and readable storage medium
CN110175641B (en) Image recognition method, device, equipment and storage medium
CN111950596A (en) Training method for neural network and related equipment
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
CN114780727A (en) Text classification method and device based on reinforcement learning, computer equipment and medium
CN113239702A (en) Intention recognition method and device and electronic equipment
CN115063875A (en) Model training method, image processing method, device and electronic equipment
CN115687934A (en) Intention recognition method and device, computer equipment and storage medium
CN111091420A (en) Method and device for predicting power price
CN113239883A (en) Method and device for training classification model, electronic equipment and storage medium
CN111178630A (en) Load prediction method and device
CN112906398A (en) Sentence semantic matching method, system, storage medium and electronic equipment
CN117391466A (en) Novel early warning method and system for contradictory dispute cases
CN116092101A (en) Training method, image recognition method apparatus, device, and readable storage medium
CN113989569B (en) Image processing method, device, electronic equipment and storage medium
CN114819140A (en) Model pruning method and device and computer equipment
CN115062769A (en) Knowledge distillation-based model training method, device, equipment and storage medium
CN116562952A (en) False transaction order detection method and device
CN114822562A (en) Training method of voiceprint recognition model, voiceprint recognition method and related equipment
CN113723712A (en) Wind power prediction method, system, device and medium
CN113469237A (en) User intention identification method and device, electronic equipment and storage medium
CN114238583B (en) Natural language processing method, device, computer equipment and storage medium
CN111709346B (en) Historical building identification and detection method based on deep learning and high-resolution images
CN117473332A (en) Data processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200519