CN114444817A - Output prediction method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114444817A
CN114444817A (application CN202210236289.1A)
Authority
CN
China
Prior art keywords
data
prediction
weather
historical
output
Prior art date
Legal status
Pending
Application number
CN202210236289.1A
Other languages
Chinese (zh)
Inventor
余佩佩
王臻懿
惠红勋
Current Assignee
Um Zhuhai Research Institute
University of Macau
Original Assignee
Um Zhuhai Research Institute
University of Macau
Priority date
Filing date
Publication date
Application filed by Um Zhuhai Research Institute and University of Macau
Priority: CN202210236289.1A
Publication: CN114444817A
Legal status: Pending

Classifications

    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N20/20 Ensemble learning
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06Q50/06 Energy or water supply
    • H02J3/004 Generation forecast, e.g. methods or systems for forecasting future energy generation
    • H02J2300/24 The renewable source being solar energy of photovoltaic origin
    • H02J2300/28 The renewable source being wind energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Quality & Reliability (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Power Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an output prediction method and device, electronic equipment and a storage medium. The output prediction method comprises the following steps: acquiring weather prediction data; inputting the weather prediction data into a weather correction learning model to obtain weather prediction correction data, wherein the weather prediction correction data comprise a plurality of weather features; processing at least one of the weather features in the weather prediction correction data to construct additional features; combining the weather prediction correction data and the additional features into prediction input data; and inputting the prediction input data into an output prediction learning model. Because correction and additional-feature construction raise the quality of the prediction input data, the output prediction learning model can obtain a more accurate output prediction result.

Description

Output prediction method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of new energy, in particular to an output prediction method and device, electronic equipment and a storage medium.
Background
As new energy sources are deployed in power systems, a "new" power system dominated by new energy generation is gradually taking shape. However, new energy output depends strongly on the weather (for example, solar irradiation determines photovoltaic output and wind speed determines wind power output), so it is volatile, random and intermittent. Because a power system must balance generation and consumption in real time, accurate prediction of new energy output greatly eases the real-time balancing of grid supply and demand and thereby safeguards the safe and stable operation of the grid.
Traditional prediction methods process the raw data and weather features with fuzzy clustering, describe the relation between weather and new energy output with algorithms such as support vector machines and random forests, and finally fit suitable feature coefficients and model parameters by least squares. In day-ahead output prediction, weather prediction data are of great importance; however, traditional methods do not guarantee the quality of the weather prediction data, which greatly reduces the accuracy of the day-ahead output prediction.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art. To this end, the invention provides an output prediction method and device, electronic equipment and a storage medium, which can improve the accuracy of day-ahead output prediction.
In a first aspect, the present invention provides a method for predicting output, including:
acquiring weather prediction data;
inputting the weather prediction data into a weather correction learning model to obtain weather prediction correction data, wherein the weather prediction correction data comprises a plurality of weather features;
processing at least one of the plurality of weather features in the weather prediction correction data to construct additional features;
combining the weather prediction correction data and the additional features into prediction input data;
and inputting the prediction input data into an output prediction learning model to predict output data and obtain an output prediction result.
The output prediction method provided by the embodiments of the invention has at least the following beneficial effects. Weather prediction data are first acquired and input into a weather correction learning model, which corrects them into more accurate weather prediction correction data; additional features are then constructed from the corrected data. The correction and additional-feature steps improve the quality of the prediction input data, so the output prediction model can obtain a more accurate day-ahead output prediction result from the processed, higher-quality prediction input data.
According to some embodiments of the invention, the weather correction learning model is obtained by training based on a long short-term memory (LSTM) network algorithm on historical weather prediction data and historical weather actual data.
According to some embodiments of the present invention, the output prediction learning model is obtained by training according to historical data, wherein the historical data includes historical weather prediction data and historical actual output data, and the training of the output prediction learning model according to the historical data includes:
acquiring historical data;
inputting the historical weather prediction data into the weather correction learning model to obtain historical weather prediction correction data, wherein the historical weather prediction correction data comprises a plurality of weather features;
processing at least one of the plurality of weather features in the historical weather prediction correction data to construct additional features;
combining the historical weather prediction correction data and the additional features into training input data;
and training a preset output learning model according to the training input data and the historical actual output data to obtain the output prediction learning model.
According to some embodiments of the invention, the preset output learning model is constructed based on the XGBoost algorithm.
According to some embodiments of the invention, the obtaining historical data comprises:
acquiring an original data set, wherein the original data set comprises a plurality of pieces of original data, each piece of original data comprises a plurality of subdata, each subdata corresponds to different characteristics, and the plurality of pieces of original data are arranged according to a time sequence;
processing the data abnormality condition and/or the data missing condition in the original data set to obtain the historical data, wherein the data abnormality condition corresponds to abnormal data and the data missing condition comprises data vacancies. A data vacancy means that, over at least one time interval in the original data set, the sub-data corresponding to the same characteristic are missing from a plurality of pieces of original data, or sub-data are missing from a single piece of original data.
According to some embodiments of the invention, the processing for the data abnormal condition in the original data set to obtain the historical data comprises:
acquiring actual maintenance and power-curtailment records;
and screening out and deleting the abnormal data according to the actual maintenance and power-curtailment records to obtain the historical data.
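The screening step above can be sketched as a simple interval filter (a minimal illustration; the tuple-based data layout is an assumption, not taken from the patent):

```python
def drop_abnormal(records, outage_windows):
    """records: list of (timestamp, output) pairs; outage_windows:
    list of (start, end) intervals taken from maintenance and
    power-curtailment logs. Rows falling inside any window are
    treated as abnormal data and deleted."""
    def in_outage(t):
        return any(start <= t <= end for start, end in outage_windows)
    return [(t, p) for t, p in records if not in_outage(t)]
```

For example, with an outage logged at timestamp 2, the zero-output row at that time is removed while normal rows are kept.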
According to some embodiments of the invention, the processing for the data missing situation in the original data set to obtain the historical data comprises:
detecting the time interval corresponding to each data vacancy in the original data set; if the time interval is smaller than a preset time interval threshold, the data vacancy is determined to be a type I vacancy, and if it is larger than the preset threshold, a type II vacancy;
for a type I vacancy, obtaining the sub-data values in the original data at the two endpoints of the time interval, calculating the average of the two values, and filling the missing sub-data in the original data with this average to obtain the historical data;
and for a type II vacancy, deleting every piece of original data within the time interval from the original data set to obtain the historical data.
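The two-way gap-handling rule above can be sketched as follows (a minimal single-feature illustration; the list-of-values layout and the threshold value are assumptions for illustration):

```python
def fill_gaps(values, max_gap=2):
    """values: one feature's sub-data per time step, with None marking
    a vacancy. Gaps no longer than max_gap steps (type I) are filled
    with the mean of the two endpoint values; longer gaps (type II)
    are deleted from the series entirely."""
    out, i, n = [], 0, len(values)
    while i < n:
        if values[i] is not None:
            out.append(values[i])
            i += 1
            continue
        j = i
        while j < n and values[j] is None:  # measure the gap length
            j += 1
        gap = j - i
        if gap <= max_gap and i > 0 and j < n:
            # type I: fill with the average of the two endpoint values
            mean = (values[i - 1] + values[j]) / 2
            out.extend([mean] * gap)
        # type II (or a gap touching the boundary): drop the rows
        i = j
    return out
```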
According to some embodiments of the invention, the obtaining historical data comprises:
acquiring the time ordering relation among the historical weather prediction data in the historical data;
processing the time ordering relation to obtain a time feature corresponding to each piece of historical weather prediction data;
and eliminating the time ordering relation among the historical weather prediction data, and adding each time feature to its corresponding historical weather prediction data.
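The idea of replacing the implicit ordering with explicit time features can be sketched like this (the dict layout and the choice of month/hour features are illustrative assumptions):

```python
from datetime import datetime

def add_time_features(samples):
    """samples: list of dicts, each with a 'time' ISO timestamp plus
    weather fields. Derives explicit time features (month, hour) from
    the timestamp, so the list no longer needs to stay sorted for the
    model to see time-of-day and seasonal information."""
    for s in samples:
        t = datetime.fromisoformat(s["time"])
        s["month"] = t.month
        s["hour"] = t.hour
    return samples
```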
According to some embodiments of the present invention, the training a preset contribution learning model according to the training input data and the historical actual contribution data to obtain the contribution prediction learning model includes:
acquiring the current training input data;
calculating the similarity between the current training input data and other training input data by adopting a similarity calculation algorithm;
correspondingly adjusting the weights of other training input data according to the similarity to obtain optimized training input data;
and training the preset output learning model according to the optimized training input data and the historical actual output data to obtain the output prediction learning model.
According to some embodiments of the invention, the similarity calculation algorithm is a Mahalanobis-distance similarity calculation algorithm.
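A minimal two-feature Mahalanobis-distance sketch is shown below (pure Python for illustration; real feature sets would use numpy/scipy and an estimated covariance matrix):

```python
def mahalanobis_2d(x, y, cov):
    """Mahalanobis distance between two 2-D feature vectors given a
    2x2 covariance matrix cov = [[a, b], [c, d]]. Smaller distance
    means higher similarity between training samples."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # inverse covariance
    dx = [x[0] - y[0], x[1] - y[1]]
    # d^2 = dx^T * inv(cov) * dx
    tmp = [inv[0][0] * dx[0] + inv[0][1] * dx[1],
           inv[1][0] * dx[0] + inv[1][1] * dx[1]]
    return (dx[0] * tmp[0] + dx[1] * tmp[1]) ** 0.5
```

With the identity covariance this reduces to the Euclidean distance; one could then weight other training samples by, e.g., 1 / (1 + distance), though the exact weighting scheme is not specified in the text.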
According to some embodiments of the present invention, the training a preset contribution learning model according to the training input data and the historical actual contribution data to obtain the contribution prediction learning model includes:
acquiring a parameter adjusting signal;
the output prediction learning model comprises a plurality of hyper-parameters, and at least one hyper-parameter comprises a plurality of preset values;
randomly selecting one preset value as a use value corresponding to the hyper-parameter according to the parameter adjusting signal;
inputting the historical weather prediction data into the output prediction learning model to generate predicted output data, wherein the preset values of the hyper-parameter are replaced one after another during prediction, so that each preset value of the at least one hyper-parameter generates its own predicted output data;
traversing the predicted output data corresponding to each preset value together with the historical actual output data to perform error analysis;
and selecting the preset value with the smallest corresponding error as the use value of the hyper-parameter.
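The traversal above amounts to trying every preset value of a hyper-parameter and keeping the one with the smallest error, which can be sketched as (the `train_eval` callback is a stand-in assumption for training the XGBoost model and measuring its error against historical actual output):

```python
def tune_hyperparameter(train_eval, presets):
    """presets: the candidate preset values for one hyper-parameter;
    train_eval(value) returns the prediction error obtained with that
    value against the historical actual output data. Every preset
    value is tried in turn and the one with the smallest error is
    kept as the use value."""
    errors = {v: train_eval(v) for v in presets}
    best = min(errors, key=errors.get)
    return best, errors[best]
```

Usage: `tune_hyperparameter(lambda depth: abs(depth - 4), [2, 4, 6])` selects 4 with error 0.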
In a second aspect, the present invention further provides an output prediction apparatus, including:
the acquisition module is used for acquiring weather prediction data;
the processing module is used for inputting weather prediction data into a weather correction learning model to obtain weather prediction correction data, processing at least one of a plurality of weather features in the weather prediction correction data to construct additional features, and combining the weather prediction correction data and the additional features into prediction input data;
and the prediction output module is used for inputting the prediction input data into an output prediction learning model so as to predict output data and obtain an output prediction result.
In a third aspect, the present invention further provides an electronic device, which includes a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection communication between the processor and the memory, wherein the program, when executed by the processor, implements the steps of the output prediction method according to any one of the embodiments of the first aspect.
In a fourth aspect, the present invention provides a storage medium having stored thereon computer-executable instructions for causing a computer to perform the output prediction method according to any one of the embodiments of the first aspect above.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a flowchart of an output prediction method according to an embodiment of the present invention;
FIG. 2 is a first flowchart of an output prediction method provided by some embodiments of the present invention;
FIG. 3 is a second flowchart of an output prediction method provided by some embodiments of the present invention;
FIG. 4 is a third flowchart of an output prediction method provided by some embodiments of the present invention;
FIG. 5 is a fourth flowchart of an output prediction method provided by some embodiments of the present invention;
FIG. 6 is a fifth flowchart of an output prediction method provided by some embodiments of the present invention;
FIG. 7 is a sixth flowchart of an output prediction method provided by some embodiments of the present invention;
FIG. 8 is a seventh flowchart of an output prediction method provided by some embodiments of the present invention;
FIG. 9 is a block diagram of an output prediction device according to some embodiments of the present invention;
fig. 10 is a block diagram of an electronic device according to some embodiments of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; terms such as greater than, less than and exceeding are understood to exclude the stated number, while above, below and within are understood to include it. Where "first" and "second" are used only to distinguish technical features, this should not be understood as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention provides an output prediction method applied to an electronic device; referring to fig. 10, fig. 10 is a block diagram of the electronic device according to some embodiments of the invention.
In some embodiments of the present invention, the electronic device may be a server, a smartphone, a tablet computer, a portable computer, a desktop computer, or a like terminal device having an arithmetic function. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
The electronic device includes: memory 11, processor 12, network interface 13, and data bus 14.
The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card-type memory, and the like. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device, such as a hard disk of the electronic device. In other embodiments, the readable storage medium may be an external memory of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the electronic device.
In some embodiments of the present invention, the readable storage medium of the memory 11 is generally used for storing the output prediction program 10, historical data, prediction models, and the like installed in the electronic device. The memory 11 may also be used to temporarily store data that has been output or is to be output.
The processor 12, which in some embodiments may be a Central Processing Unit (CPU), microprocessor or other data processing chip, executes program code or processes data stored in the memory 11, for example running the output prediction program.
The network interface 13 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), and is typically used to establish a communication link between the electronic device and other electronic devices.
The data bus 14 is used to enable connection communication between these components.
Fig. 10 only shows an electronic device having components 11-14, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
Referring to fig. 1, fig. 1 is a flowchart of an output prediction method according to an embodiment of the present invention. Based on the electronic device shown in fig. 10, the processor 12, when executing the output prediction program 10 stored in the memory 11, implements the following steps:
step S100: weather prediction data is obtained.
In some embodiments of the present invention, the weather prediction data includes characteristic data corresponding to a plurality of weather characteristics, such as temperature, wind speed, irradiation, humidity, and the like, and the weather data also corresponds to time data, that is, each time point corresponds to its own weather characteristic data.
The short-term weather forecast issued by the meteorological station can be used as the weather prediction data for the next day; each time the next day's output needs to be predicted, the latest short-term forecast is fetched again as new weather prediction data. Using the latest forecast keeps the weather prediction data as accurate as possible, and the more accurate the weather prediction data, the more accurate the corresponding output prediction result.
Step S200: inputting the weather prediction data into the weather correction learning model to obtain weather prediction correction data.
In some embodiments of the present invention, after the weather prediction data are obtained, they are input into the weather correction learning model for residual analysis and error compensation, yielding the weather prediction correction data. In mathematical statistics, a residual is the difference between an actual observed value and an estimated (fitted) value; residual analysis examines the reliability and periodicity of the weather prediction data and generally covers outlier testing, homogeneity-of-variance testing and error-normality testing, while error compensation constructs a new, opposite error to offset the original one. Both residual analysis and error compensation are carried out by the weather correction learning model.
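As a toy illustration of residuals and error compensation (the constant-bias estimate below is an illustrative assumption, not the trained correction model, which learns a much richer mapping):

```python
def mean_residual(forecasts, actuals):
    """Residual = actual observation minus forecast; the average
    residual over history is the simplest possible bias estimate."""
    res = [a - f for f, a in zip(forecasts, actuals)]
    return sum(res) / len(res)

def compensate(forecast, bias):
    """Error compensation: add an offsetting term so the constructed
    error cancels the systematic error of the raw forecast."""
    return forecast + bias
```

If past forecasts of 10 and 20 corresponded to actual values of 12 and 22, the estimated bias is +2 and a new forecast of 30 is compensated to 32.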
It should be noted that the weather correction learning model embodies a correspondence between weather prediction data and weather prediction correction data: once weather prediction data are input into the weather correction learning model, the model outputs the weather prediction correction data according to this correspondence.
In some embodiments of the present invention, the weather correction learning model is trained based on a long short-term memory (LSTM) network algorithm using historical weather prediction data and historical weather actual data, both obtained from the memory 11.
It should be noted that training the weather correction learning model means first building a preset weather correction learning model on the LSTM algorithm and then training it by back-propagation: historical weather prediction data are input into the preset model to obtain a correction result, the result is compared with the historical weather actual data to construct a loss function, the network weight parameters are updated following the gradient of the loss function, and these steps are repeated until the network error falls below a given value, at which point the trained weather correction model is obtained.
The above LSTM algorithm is as follows:

$$
\begin{aligned}
f_t &= \sigma\!\left(W_f \cdot [h_{t-1}, x_t] + b_f\right)\\
i_t &= \sigma\!\left(W_i \cdot [h_{t-1}, x_t] + b_i\right)\\
g_t &= \tanh\!\left(W_g \cdot [h_{t-1}, x_t] + b_g\right)\\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t\\
o_t &= \sigma\!\left(W_o \cdot [h_{t-1}, x_t] + b_o\right)\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$

where $x_t$ is the input of the LSTM, $f_t$ is the forget gate, $i_t$ is the input gate, $o_t$ is the output gate, $h_t$ is the hidden state, $g_t$ is the unit gate, $c_t$ is the cell state, and $W_f, W_i, W_g, W_o$ (with biases $b_f, b_i, b_g, b_o$) are the network parameter matrices.
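The gate equations can be sanity-checked with a minimal scalar (one-dimensional) LSTM step; real models use vector states and weight matrices, and the weight layout here is an illustrative assumption:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One scalar LSTM cell step. W maps each gate name ('f', 'i',
    'g', 'o') to a (w_x, w_h, b) triple; a toy 1-D version of the
    matrix equations, for illustration only."""
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # unit gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    c = f * c_prev + i * g        # new cell state
    h = o * math.tanh(c)          # new hidden state
    return h, c
```

With all weights zero, every sigmoid gate opens halfway, so the cell state is simply halved at each step, which matches the equations above.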
It should be noted that the long short-term memory network mitigates the vanishing-gradient and exploding-gradient problems that arise in traditional recurrent neural networks as the number of unrolled layers grows over time.
It should be noted that the weather correction learning model may also be constructed based on a gated recurrent unit (GRU) network.
Step S300: at least one of the weather features in the weather forecast correction data is processed to construct additional features.
The corrected weather prediction correction data contain a plurality of weather features, including but not limited to original features such as temperature, wind speed, irradiation, humidity and wind direction. At least one of these features is processed to construct a graded feature, i.e. a temperature level, wind speed level, irradiation level or humidity level. A wind-direction-type feature can also be constructed from the original wind direction: wind direction is generally expressed in 16 compass points, and the wind-direction-type feature folds these 16 points into 8 direction types, for example classifying both north-northeast and east-northeast as the northeast type, which coarsens the fine-grained wind direction to a certain extent.
It should be noted that the characteristics of temperature level, wind speed level, irradiation level, wind direction type, etc. may be used singly or in any combination as additional characteristics.
It should be noted that the additional features are constructed on the corrected weather prediction correction data by grading or grouping the specific values of certain features, so that many specific values map to a single level or type. When the weather changes slightly, the specific feature values change, but within a certain range the data corresponding to the additional features do not, so the output prediction result is barely affected; this effectively reduces the randomness introduced by weather fluctuation.
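The grading and grouping described above can be sketched as follows (the fold of odd compass points into the nearest intercardinal follows the north-northeast/east-northeast example in the text; the wind-speed bin boundaries are hypothetical values, not taken from the patent):

```python
def wind_direction_type(d16):
    """Map a 16-point compass direction (0=N, 1=NNE, 2=NE, ..., 15=NNW)
    onto 8 coarse types (0=N, 1=NE, 2=E, ..., 7=NW). Points adjacent to
    an intercardinal, e.g. NNE and ENE, are folded into that
    intercardinal (NE), as in the example above."""
    if d16 % 2 == 0:
        return d16 // 2            # N, NE, E, ... keep their own type
    return 2 * (d16 // 4) + 1      # NNE/ENE -> NE, ESE/SSE -> SE, ...

def wind_speed_level(v, bounds=(3.0, 8.0, 13.7)):
    """Grade wind speed (m/s) into levels 0-3; the bin boundaries are
    illustrative assumptions."""
    return sum(v >= b for b in bounds)
```

Many nearby wind speeds share one level, so small weather fluctuations leave the additional feature unchanged.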
Step S400: the weather prediction correction data and the additional features are combined into prediction input data.
In some embodiments of the invention, after the additional features are constructed, the feature data corresponding to the additional features are spliced onto the weather prediction correction data to obtain the prediction input data. The resulting prediction input data serve as the final input of the whole prediction process for predicting the output data.
Step S500: and inputting the predicted input data into the output prediction learning model to predict the output data to obtain an output prediction result.
It should be noted that the output prediction learning model has a corresponding relationship between the prediction input data and the output data, and after the prediction input data is input into the output prediction learning model, the output prediction model outputs the output data according to the corresponding relationship as the output prediction result.
In summary, the output prediction method provided by the invention improves the quality of the prediction input data through the steps of correcting the weather prediction data and constructing additional features for it, and the output prediction model can obtain a more accurate day-ahead output prediction result from the processed, higher-quality prediction input data.
In some embodiments of the invention, the output prediction learning model is trained from historical data, wherein the historical data comprises historical weather prediction data and historical actual output data.
Referring to FIG. 2, in some embodiments of the invention, training the output prediction learning model based on historical data comprises:
step S510: historical data is acquired.
The historical data are collected by the new energy station, the historical data comprise historical weather prediction data and historical actual output data, the historical weather prediction data are weather prediction data used for predicting output data in the past, and the historical actual output data are actual output conditions of the new energy station in the past.
It should be noted that the historical weather prediction data and the historical actual output data correspond to each other: one day of historical weather prediction data corresponds to one day of historical actual output data.
Step S520: and inputting the historical weather prediction data into a weather modification learning model to obtain historical weather prediction modification data.
In some embodiments of the present invention, the historical weather prediction data is input into a weather modification learning model for modification, so as to obtain more accurate historical weather prediction modification data, wherein the weather modification learning model in step S200 and the weather modification learning model in step S520 are the same model, and are obtained by training the historical weather prediction data and the historical weather actual data.
Step S530: at least one of the weather features in the historical weather forecast correction data is processed to construct additional features.
Step S540: the historical weather forecast correction data and the additional features are combined into training input data.
It should be noted that the processing method in step S530 is the same as that in step S300, the processing method in step S540 is the same as that in step S400, the current weather forecast data is processed in step S300 and step S400, and the historical weather forecast data is processed in step S530 and step S540.
Step S550: and training the preset output learning model according to the training input data and the historical actual output data to obtain an output prediction learning model.
In some embodiments of the invention, a preset output learning model is first constructed according to experience and historical conditions. Training input data is then continuously input into the preset output learning model to generate output prediction data, with the historical actual output data serving as the expected value of the model output. The preset output learning model is trained in this way to obtain the output prediction learning model, and the trained model can obtain more accurate predicted output data from the weather prediction correction data.
In some embodiments of the invention, the predetermined output learning model is constructed based on the XGBoost algorithm.
It should be noted that the XGBoost (eXtreme Gradient Boosting) algorithm has many advantages, including: a regularization term to prevent overfitting; the use of both the first and second derivatives, which makes the loss more accurate and allows a custom loss function; parallel optimization at the feature granularity; handling of sparse training data, where a default branch direction can be specified for missing or specified values, greatly improving the efficiency of the algorithm; and support for column sampling, which reduces both overfitting and computation.
It should be noted that the high-quality prediction input data and the output prediction learning model constructed with the XGBoost algorithm together give the whole output prediction method high performance as well as transferability and scalability: the high-quality prediction input data ensures the high performance of the prediction, while XGBoost ensures the transferability and scalability.
It should be noted that the preset output learning model constructed with the XGBoost algorithm satisfies the requirements of rapid deployment, simplicity and ease of use in real scenarios, while ensuring the generalization capability of the model in different environments and enhancing the environmental adaptability of the output prediction learning model.
It should be noted that the preset output learning model may also be constructed using the LightGBM (Light Gradient Boosting Machine) algorithm.
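A rough sketch of the gradient-boosting training step on synthetic weather-like data follows. The patent specifies XGBoost (or LightGBM); since those packages may not be installed, scikit-learn's GradientBoostingRegressor is used here purely as a stand-in from the same algorithm family, and the feature/target construction is invented for illustration.

```python
# Stand-in sketch: train a gradient-boosted regressor as the "preset output
# learning model" on synthetic (weather features -> output) data. In the
# patent this role is played by XGBoost; GradientBoostingRegressor is used
# only because it is widely available and from the same family.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))   # e.g. temperature, wind speed, irradiation
y = 2.0 * X[:, 2] + 0.5 * X[:, 1] + rng.normal(0.0, 0.05, 200)  # synthetic output

model = GradientBoostingRegressor(max_depth=3, n_estimators=100, random_state=0)
model.fit(X[:150], y[:150])                # "training input data" + expected output
score = model.score(X[150:], y[150:])      # R^2 on held-out rows
```

With real XGBoost the call pattern is analogous (construct the regressor, fit on training input data against historical actual output, then predict from weather prediction correction data).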
Referring to fig. 3, in some embodiments of the present invention, step S510 includes the following steps:
step S511: an original data set is acquired.
It should be noted that the original data set includes a plurality of pieces of original data, each piece of original data includes a plurality of pieces of sub-data, each piece of sub-data has different characteristics, and the plurality of pieces of original data are arranged in a time sequence, where the original data is historical actual output data.
Step S512: and processing the abnormal data condition and/or the missing data condition in the original data set to obtain historical data.
The data abnormal condition corresponds to abnormal data, and the data missing condition comprises data gaps. A data gap represents either the absence of sub-data corresponding to the same feature across a plurality of pieces of original data within at least one time interval in the original data set, or missing sub-data in a single piece of original data within at least one time interval.
It should be noted that processing the data abnormal conditions and/or data missing conditions in the original data set yields historical data of higher quality, and higher-quality historical data makes the output prediction learning model trained on it more complete, so that the output prediction result is more accurate.
Referring to fig. 4, in some embodiments of the present invention, the processing for the data abnormal condition in the original data set in step S512 to obtain the historical data includes:
s513: and acquiring actual maintenance and electricity limiting records.
It should be noted that the actual service and electricity limit records are stored in the memory 11.
It should be noted that both actual maintenance records and actual power-limiting records are generated because of data abnormal conditions, so the maintenance and power-limiting records correspond to the abnormal data. Abnormal data comprises out-of-limit data and repeated data: out-of-limit data is data in which the actual value of a sub-datum exceeds the preset maximum data value, and repeated data is a piece of original data identical to the previous piece of original data.
S514: and screening and deleting abnormal data according to the actual overhaul and power limit records to obtain historical data.
It should be noted that, when deleting, abnormal data needs to be screened out according to actual maintenance and electricity limit records, and then the abnormal data is deleted from the original data set. It should be noted that, when the abnormal data is the out-of-limit data, the whole original data corresponding to the out-of-limit sub-data needs to be deleted.
Based on the contents of the above steps S513 and S514, it can be understood that deleting the abnormal data can make the quality of the history data higher.
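Steps S513 and S514 can be sketched as a single filtering pass. This is an assumption-laden illustration: records are taken as (timestamp, value) pairs, the maintenance/power-limit log is given as a set of flagged timestamps, and the duplicate test compares values, which is one plausible reading of "identical to the previous original data".

```python
def screen_abnormal(records, max_value, flagged_times):
    """Drop abnormal records from a time-ordered list of (timestamp, value):
    records inside maintenance/power-limit periods, out-of-limit values
    exceeding the preset maximum, and repeats of the previous record.
    All names and the record layout are hypothetical."""
    cleaned, prev = [], None
    for rec in records:
        ts, val = rec
        is_abnormal = (
            ts in flagged_times                        # maintenance / power limit
            or val > max_value                         # out-of-limit sub-datum
            or (prev is not None and val == prev[1])   # repeat of previous record
        )
        if not is_abnormal:
            cleaned.append(rec)
        prev = rec
    return cleaned
```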
Referring to fig. 5, in some embodiments of the present invention, the processing for the data missing condition in the original data set in step S512 to obtain the historical data includes:
step S515: detecting the time interval corresponding to a data gap in the original data set; if the time interval is smaller than a preset time interval threshold, the data gap is determined to be a type I missing, and if the time interval is larger than the preset time interval threshold, the data gap is determined to be a type II missing.
It is understood that the original data in the original data set are arranged in time sequence, and when there is a data vacancy in the original data, there is a time interval.
The preset time interval threshold is determined according to the actual application scenario of the method, and may be 5 minutes, 10 minutes, 15 minutes, and the like.
Step S516a: if the data gap is a type I missing, obtain the sub-data values in the original data corresponding to the end points at the two ends of the time interval, calculate the average of the two sub-data values, and fill the missing sub-data in the original data with this average to obtain the historical data.
It should be noted that in the original data corresponding to the end points at the two ends of the time interval, the sub-data with the same feature as the missing sub-data are present; that is, the gap is filled with the average of the two sub-data that share the missing sub-datum's feature and lie at the two ends of the time interval.
Step S516b: if the data gap is a type II missing, where part of the sub-data is missing in each of a plurality of pieces of original data corresponding to the time interval, delete each piece of original data in that time interval from the original data set to obtain the historical data.
It should be noted that if the time interval corresponding to the data gap is greater than the preset time interval threshold, filling the gap with an average value would make the data inaccurate because the interval is too long. Therefore, when there is a type II missing, each piece of original data in the time interval is deleted from the original data set, yielding historical data that no longer contains the incomplete original data.
Based on the contents of the above steps S515, S516a, and S516b, it can be understood that filling or deleting data gaps in the original data can make the historical data have higher quality.
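The type I / type II handling of steps S515 to S516b can be sketched as follows. For simplicity the gap length is measured in missing records rather than minutes, and a gap touching the boundary of the series (no end point on one side) is dropped; both simplifications are assumptions.

```python
def fill_or_drop_gaps(series, threshold):
    """series: time-ordered list of (timestamp, value), with value=None
    marking a gap. Runs of gaps shorter than `threshold` are type I and
    filled with the mean of the two bounding values; longer runs are
    type II and deleted entirely."""
    out, i, n = [], 0, len(series)
    while i < n:
        ts, val = series[i]
        if val is not None:
            out.append((ts, val))
            i += 1
            continue
        j = i
        while j < n and series[j][1] is None:   # extent of the gap run
            j += 1
        gap_len = j - i
        if gap_len < threshold and out and j < n:
            fill = (out[-1][1] + series[j][1]) / 2.0  # mean of both end points
            out.extend((series[k][0], fill) for k in range(i, j))
        # type II (or boundary gap): the run is simply dropped
        i = j
    return out
```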
Referring to fig. 6, in some embodiments of the present invention, step S510 includes the following steps:
step S517: and acquiring a time ordering relation between historical weather prediction data in the historical data.
It should be noted that the historical weather prediction data are sequentially arranged according to a time sequence and have a time sequence relationship, and then the time sequence relationship among the historical weather prediction data, that is, the time information carried by the historical weather prediction data, is obtained.
Step S518: and processing according to the time sequencing relation to obtain time characteristics corresponding to each historical weather prediction data.
Specifically, the acquired time ordering relationship is processed and the time information is constructed into time features, including features such as the minute, the hour, the day of the week and the lunar calendar date.
Step S519: and eliminating the time sequencing relation among the historical weather prediction data, and correspondingly adding each time characteristic to each historical weather prediction data.
It should be noted that after the time ordering relationship between the historical weather prediction data is eliminated, each piece of weather prediction data becomes an independent sample. When the historical weather prediction data is subsequently used to train the LSTM-based weather modification learning model and some historical weather prediction data is missing, the absence of one piece of data does not interfere with the training on the remaining data, so the trained weather modification learning model is more accurate. Meanwhile, adding the time features to each piece of historical weather prediction data preserves the time information originally carried by the time ordering relationship.
Based on the contents of the above steps S517, S518, and S519, it can be understood that after the time-series relationship is eliminated from the historical weather prediction data, all the historical weather prediction data can be changed into independent data, so that the influence of the time-series relationship on the training of the weather modification learning model is effectively avoided, the trained weather modification learning model is more accurate, and the output prediction result is more accurate.
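One way the time features of step S518 might be constructed from a record's timestamp is sketched below. The exact feature set is an assumption, and the lunar-calendar feature mentioned above is omitted because it needs a conversion table not described in the text.

```python
from datetime import datetime

def add_time_features(record_ts: datetime) -> dict:
    """Construct per-record time features so that records can be treated
    as independent samples without losing the time information that the
    ordering relationship carried. Feature names are hypothetical."""
    return {
        "minute": record_ts.minute,
        "hour": record_ts.hour,
        "weekday": record_ts.isoweekday(),           # 1 = Monday .. 7 = Sunday
        "day_of_year": record_ts.timetuple().tm_yday,
    }
```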
Referring to fig. 7, in some embodiments of the present invention, step S550 includes the following steps:
step S551: current training input data is obtained.
It should be noted that, in the training process of the output prediction model, the training input data is input one by one, and then the current training input data is obtained.
Step S552: and calculating the similarity of the current training input data and other training input data by adopting a similarity calculation algorithm.
Specifically, other training input data are traversed, similarity calculation is performed between the current training input data and the other training input data by adopting a similarity calculation algorithm during traversal, and the similarity between each of the other training input data and the current training data is recorded.
It should be noted that, in some embodiments of the present invention, the similarity calculation algorithm is the Mahalanobis distance similarity calculation algorithm. The Mahalanobis distance has the advantages of being unaffected by dimensional units and of eliminating the interference of correlation between variables. Its formula is:

D(X, Y) = sqrt( (X − Y)^T · S^(−1) · (X − Y) )

where X and Y are two samples and S is the covariance matrix of the two.
It should be noted that the similarity calculation algorithm may also adopt algorithms such as the Euclidean distance algorithm, the Manhattan distance algorithm, the cosine similarity method and the Pearson correlation coefficient algorithm.
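A direct implementation of the Mahalanobis distance D(X, Y) = sqrt((X − Y)^T S^(−1) (X − Y)), with the covariance matrix S estimated from a reference data set; the argument names are hypothetical.

```python
import numpy as np

def mahalanobis(x, y, data):
    """Mahalanobis distance between samples x and y. The covariance matrix
    is estimated from `data` (rows = samples); the result is scale-invariant
    and accounts for correlation between features."""
    cov = np.cov(np.asarray(data, dtype=float), rowvar=False)
    cov_inv = np.linalg.inv(cov)
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(diff @ cov_inv @ diff))
```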
Step S553: and correspondingly adjusting the weights of other training input data according to the similarity to obtain optimized training input data.
Specifically, the weights of other training input data with high similarity to the current training input data are improved, and the weights of other training input data with low similarity to the current training input data are reduced to obtain the optimized training input data.
Step S554: and training the preset output learning model according to the optimized training input data and the historical actual output data to obtain an output prediction learning model.
It should be noted that, in the training of the preset output learning model, the model may determine the weight of the training input data, and the training input data with high weight has a greater influence on the training of the model.
Based on the above contents of steps S551, S552, S553 and S554, it can be understood that after weight optimization is performed on the training input data, the trained output prediction learning model can be more accurate, and further the output prediction result can be more accurate.
Referring to fig. 8, in some embodiments of the present invention, step S550 includes the following steps:
step S555: and acquiring a parameter adjusting signal.
It should be noted that the parameter adjusting signal is sent by the user using the electronic device.
It should be noted that the output prediction learning model includes a plurality of hyper-parameters, and at least one hyper-parameter has a plurality of preset values. The hyper-parameters include the minimum number of samples required to build each model, the maximum depth of the tree, the L2 regularization penalty coefficient, and so on.
Step S556: and randomly selecting a preset value as a use value of the corresponding hyper-parameter according to the parameter adjusting signal.
After the parameter adjusting signal is obtained, one preset value is selected for each hyper-parameter that has a plurality of preset values, to serve as that hyper-parameter's use value.
Step S557: inputting the historical weather prediction data into the output prediction learning model to generate predicted output data, wherein during prediction the output prediction learning model continuously switches the preset values of the hyper-parameters so that every preset value of at least one hyper-parameter generates predicted output data in one-to-one correspondence.
Specifically, after a use value is selected for each hyper-parameter with multiple preset values, the historical weather prediction data is input into the output prediction learning model to generate an output prediction result. The historical weather prediction data is then kept fixed while the use value of the hyper-parameter is replaced and another output prediction result is generated. These steps are repeated until every preset value of at least one hyper-parameter has been used to generate predicted output data in one-to-one correspondence, and the predicted output data generated for each preset value is recorded.
It can be understood that a plurality of hyper-parameters may have a plurality of preset values, a preset value is selected for each hyper-parameter having a plurality of preset values, and then all the preset values are continuously replaced while the model predicts until all combinations of the preset values generate output prediction data correspondingly.
Step S558: and traversing the predicted output data corresponding to each preset value and the historical actual output data to perform error analysis.
Specifically, all the recorded predicted output data are compared with the historical actual output data, errors are analyzed, and the errors are sequenced.
Step S559: and selecting the preset value with the minimum corresponding error as the use value of the hyperparameter.
Specifically, each preset value, or each combination of preset values, corresponds to one set of predicted output data, and each set of predicted output data corresponds to one error. The preset value producing the smallest error is selected as the use value of the hyper-parameter, or the combination of preset values with the smallest error is selected as the use values of the corresponding multiple hyper-parameters.
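The preset-value search of steps S556 to S559 is essentially an exhaustive grid search over the hyper-parameter combinations. The sketch below abstracts "generate predicted output data and compare it with historical actual output" into an `evaluate` callback; that callback and all names are hypothetical.

```python
from itertools import product

def grid_search(grid, evaluate):
    """Try every combination of preset hyper-parameter values and return
    (best_params, best_error). `grid` maps hyper-parameter names to lists
    of preset values; `evaluate` returns the prediction error obtained
    with one combination."""
    best_params, best_err = None, float("inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        err = evaluate(params)
        if err < best_err:                 # keep the smallest-error combination
            best_params, best_err = params, err
    return best_params, best_err
```

In the patent's setting, `evaluate` would run the output prediction learning model on the historical weather prediction data with the given use values and measure the error against the historical actual output data.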
Referring to fig. 9, some embodiments of the present invention further provide an output prediction apparatus 600, including:
an obtaining module 610 is configured to obtain weather forecast data.
The processing module 620 is used for inputting the weather prediction data into the weather correction learning model to obtain weather prediction correction data, processing at least one of a plurality of weather features in the weather prediction correction data to construct additional features, and combining the weather prediction correction data and the additional features into prediction input data.
The prediction output module 630 is configured to input the prediction input data into the output prediction learning model to predict the output data and obtain an output prediction result.
One of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, and the scope of the invention is not limited thereby. Any modifications, equivalents and improvements which may occur to those skilled in the art without departing from the scope and spirit of the present invention are intended to be within the scope of the claims.

Claims (14)

1. An output prediction method, comprising:
acquiring weather prediction data;
inputting the weather prediction data into a weather correction learning model to obtain weather prediction correction data, wherein the weather prediction correction data comprises a plurality of weather features;
processing at least one of a number of said weather features in said weather forecast correction data to construct additional features;
combining the weather prediction correction data and the additional features into prediction input data;
and inputting the prediction input data into an output prediction learning model to predict output data so as to obtain an output prediction result.
2. The output prediction method of claim 1, wherein the weather modification learning model is trained from historical weather prediction data and historical weather actual data based on a long short-term memory (LSTM) network algorithm.
3. The output prediction method of claim 1, wherein the output prediction learning model is trained based on historical data, wherein the historical data comprises historical weather prediction data and historical actual output data, and wherein the training of the output prediction learning model based on the historical data comprises:
acquiring historical data;
inputting the historical weather prediction data into the weather correction learning model to obtain historical weather prediction correction data, wherein the historical weather prediction correction data comprises a plurality of weather features;
processing at least one of a number of said weather features in said historical weather forecast correction data to construct additional features;
combining the historical weather forecast correction data and the additional features into training input data;
and training a preset output learning model according to the training input data and the historical actual output data to obtain the output prediction learning model.
4. The output prediction method of claim 3, wherein the preset output learning model is constructed based on the XGBoost algorithm.
5. The output prediction method according to claim 3, wherein said obtaining historical data comprises:
acquiring an original data set, wherein the original data set comprises a plurality of pieces of original data, each piece of original data comprises a plurality of sub-data, each sub-data corresponds to different characteristics, and the plurality of pieces of original data are arranged according to a time sequence;
processing the abnormal data condition and/or the missing data condition in the original data set to obtain the historical data, wherein the abnormal data condition corresponds to abnormal data, the missing data condition comprises data vacancy, and the data vacancy is used for representing the sub-data vacancy corresponding to the same characteristic in a plurality of pieces of original data corresponding to at least one period of time interval in the original data set or the sub-data vacancy in one piece of original data corresponding to at least one period of time interval.
6. The output prediction method of claim 5, wherein the processing for the data anomaly in the raw data set to obtain the historical data comprises:
acquiring actual maintenance and power limit records;
and screening and deleting the abnormal data according to actual overhaul and power limit records to obtain the historical data.
7. The output prediction method of claim 5, wherein the processing for the missing data in the raw data set to obtain the historical data comprises:
detecting the time interval corresponding to the data gap in the original data set; if the time interval is smaller than a preset time interval threshold, determining the data gap to be a type I missing, and if the time interval is larger than the preset time interval threshold, determining the data gap to be a type II missing;
in the case of a type I missing, obtaining the sub-data values in the original data corresponding to the end points at the two ends of the time interval, calculating the average of the two sub-data values, and filling the missing sub-data in the original data with the average to obtain the historical data;
and in the case of a type II missing, deleting each piece of original data in the time interval from the original data set to obtain the historical data.
8. The output prediction method according to claim 3, wherein said obtaining historical data comprises:
acquiring a time sequencing relation among the historical weather prediction data in the historical data;
processing according to the time sequencing relation to obtain time characteristics corresponding to the historical weather prediction data;
and eliminating the time sequencing relation among the historical weather prediction data, and correspondingly adding each time characteristic to each historical weather prediction data.
9. The output prediction method of claim 3, wherein training a preset output learning model based on the training input data and the historical actual output data to obtain the output prediction learning model comprises:
acquiring the current training input data;
calculating the similarity between the current training input data and other training input data by adopting a similarity calculation algorithm;
correspondingly adjusting the weights of other training input data according to the similarity to obtain optimized training input data;
and training the preset output learning model according to the optimized training input data and the historical actual output data to obtain the output prediction learning model.
10. The output prediction method according to claim 9, wherein said similarity calculation algorithm is the Mahalanobis distance similarity calculation algorithm.
11. The output prediction method of claim 3, wherein training a preset output learning model based on the training input data and the historical actual output data to obtain the output prediction learning model comprises:
acquiring a parameter adjusting signal;
the output prediction learning model comprises a plurality of hyper-parameters, and at least one hyper-parameter comprises a plurality of preset values;
randomly selecting one preset value as a use value corresponding to the hyper-parameter according to the parameter adjusting signal;
inputting the historical weather prediction data into the output prediction learning model to generate predicted output data, wherein the preset values of the hyper-parameters are continuously replaced during prediction so that every preset value of at least one hyper-parameter generates predicted output data in one-to-one correspondence;
traversing the predicted output data corresponding to each preset value and the historical actual output data to perform error analysis;
and selecting the preset value with the minimum corresponding error as the use value of the hyper-parameter.
12. An output prediction device, comprising:
the acquisition module is used for acquiring weather prediction data;
the processing module is used for inputting weather prediction data into a weather correction learning model to obtain weather prediction correction data, processing at least one of a plurality of weather features in the weather prediction correction data to construct additional features, and combining the weather prediction correction data and the additional features into prediction input data;
and the prediction output module is used for inputting the prediction input data into an output prediction learning model so as to predict output data and obtain an output prediction result.
13. An electronic device comprising a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling connection communication between the processor and the memory, wherein the program, when executed by the processor, implements the steps of the output prediction method of any of claims 1 to 11.
14. A storage medium for computer readable storage, wherein the storage medium stores one or more programs executable by one or more processors to implement the steps of the output prediction method of any of claims 1 to 11.
CN202210236289.1A 2022-03-11 2022-03-11 Output prediction method and device, electronic equipment and storage medium Pending CN114444817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210236289.1A CN114444817A (en) 2022-03-11 2022-03-11 Output prediction method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114444817A true CN114444817A (en) 2022-05-06

Family

ID=81359442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210236289.1A Pending CN114444817A (en) 2022-03-11 2022-03-11 Output prediction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114444817A (en)

Similar Documents

Publication Publication Date Title
Rahman et al. Predicting electricity consumption for commercial and residential buildings using deep recurrent neural networks
Jiang et al. Day‐ahead renewable scenario forecasts based on generative adversarial networks
CN115225520B (en) Multi-mode network flow prediction method and device based on meta-learning framework
CN112906757A (en) Model training method, optical power prediction method, device, equipment and storage medium
CN114330934A (en) Model parameter self-adaptive GRU new energy short-term power generation power prediction method
CN115545331A (en) Control strategy prediction method and device, equipment and storage medium
CN107358059A (en) Short-term photovoltaic energy Forecasting Methodology and device
Constantinescu et al. Unit commitment with wind power generation: integrating wind forecast uncertainty and stochastic programming.
Ghassemi et al. Optimal surrogate and neural network modeling for day-ahead forecasting of the hourly energy consumption of university buildings
CN116739172B (en) Method and device for ultra-short-term prediction of offshore wind power based on climbing identification
CN116845868A (en) Distributed photovoltaic power prediction method and device, electronic equipment and storage medium
CN114444817A (en) Output prediction method and device, electronic equipment and storage medium
CN116646917A (en) Campus multi-step probability power load prediction method and system
CN111697560A (en) Method and system for predicting load of power system based on LSTM
Lai et al. Short‐term passenger flow prediction for rail transit based on improved particle swarm optimization algorithm
CN113660176B (en) Traffic prediction method and device for communication network, electronic device and storage medium
Peng et al. Short‐term wind power prediction based on stacked denoised auto‐encoder deep learning and multi‐level transfer learning
CN115796338A (en) Photovoltaic power generation power prediction model construction and photovoltaic power generation power prediction method
CN114970855A (en) Method, device, equipment, medium and prediction method for constructing wind field prediction model
CN114566048A (en) Traffic control method based on multi-view self-adaptive space-time diagram network
CN113011674A (en) Photovoltaic power generation prediction method and device, electronic equipment and storage medium
CN114066079A (en) Multi-tenant-based online appointment vehicle supply and demand difference prediction method and device
CN113723712A (en) Wind power prediction method, system, device and medium
CN112787882A (en) Internet of things edge traffic prediction method, device and equipment
KR102642404B1 (en) Method for prediction power generation using meta-learning, device and system using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination