CN117318055A - Power load prediction model processing method and device, electronic equipment and storage medium - Google Patents

Power load prediction model processing method and device, electronic equipment and storage medium

Info

Publication number
CN117318055A
CN117318055A (application CN202311630715.0A)
Authority
CN
China
Prior art keywords
prediction model
data
objective function
power load
predictive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311630715.0A
Other languages
Chinese (zh)
Other versions
CN117318055B (en)
Inventor
高琰
屈道宽
明玲
高玉欣
付龙
郑子龙
吴琼华
高吉荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Ligong Haoming New Energy Co ltd
Original Assignee
Shandong Ligong Haoming New Energy Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Ligong Haoming New Energy Co ltd filed Critical Shandong Ligong Haoming New Energy Co ltd
Priority to CN202311630715.0A priority Critical patent/CN117318055B/en
Publication of CN117318055A publication Critical patent/CN117318055A/en
Application granted granted Critical
Publication of CN117318055B publication Critical patent/CN117318055B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00Circuit arrangements for ac mains or ac distribution networks
    • H02J3/003Load forecast, e.g. methods or systems for forecasting future load demand
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/12Computing arrangements based on biological models using genetic models
    • G06N3/126Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02JCIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J2203/00Indexing scheme relating to details of circuit arrangements for AC mains or AC distribution networks
    • H02J2203/20Simulating, e g planning, reliability check, modelling or computer assisted design [CAD]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention provides a power load prediction model processing method and device, electronic equipment, and a storage medium, relating to the technical field of data processing. The method comprises: acquiring original data, the original data comprising load data and target labels corresponding to the load data; training a basic prediction model based on the original data, wherein the basic prediction model comprises a first objective function, a second objective function, and a third objective function; the first objective function characterizes the difference between the prediction labels obtained by inputting the load data into the basic prediction model and the target labels; the second objective function characterizes the difference between adjacent prediction labels; the third objective function characterizes the difference between the prediction labels and the average of the prediction labels; and taking the trained basic prediction model as the power load prediction model. By designing the power load prediction model in this way, the accuracy, smoothness, and stability of the prediction results are optimized, so that the prediction results approach the true values and user experience is improved.

Description

Power load prediction model processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and apparatus for processing a power load prediction model, an electronic device, and a storage medium.
Background
With the continued development of society and accelerating urbanization, power demand keeps growing, and the stable operation and efficient utilization of power systems are becoming ever more important. Power load prediction has therefore become a key element of power system scheduling and economic operation.
Power load prediction refers to predicting the power demand over a future period based on historical load data, weather data, and other relevant factors. The accuracy of the prediction directly affects the scheduling and operating efficiency of the power system, and the smoothness of the prediction affects the stability of the power system. However, the electrical load is affected by a variety of factors, such as weather changes, user behavior, and socioeconomic activity, whose uncertainty makes prediction complex and difficult. At the same time, electrical load data usually exhibits strong time-series characteristics, which requires the prediction model to capture and exploit them effectively.
However, many conventional prediction methods use a single prediction model, such as a time-series model or a support vector machine, which cannot cope well with power load prediction as a complex problem requiring multi-objective optimization, resulting in a poor user experience.
Disclosure of Invention
Accordingly, the present invention is directed to a method, an apparatus, an electronic device, and a storage medium for processing a power load prediction model, which optimize the accuracy, smoothness, and stability of a prediction result by designing the power load prediction model, so that the prediction result approaches to a true value, and user experience is improved.
In a first aspect, the present invention provides a method for processing a power load prediction model, where the method includes: acquiring original data; the original data comprises load data and a target label corresponding to the load data; training a basic prediction model based on the raw data; wherein the base prediction model comprises: a first objective function, a second objective function, and a third objective function; the first objective function is used for representing the difference between the predictive label obtained by inputting the load data into the basic predictive model and the target label; the second objective function is used for representing the difference of adjacent prediction labels; the third objective function is used for representing the difference between the predictive label and the average value of the predictive labels; and taking the trained basic prediction model as a power load prediction model.
In some preferred embodiments of the present invention, after the step of acquiring the raw data, the method further comprises: preprocessing the original data; wherein the pretreatment comprises at least one of the following: a cleaning process and a missing value process.
In some preferred embodiments of the present invention, after the step of acquiring the raw data, the method further comprises: and carrying out dimension standardization processing on the original data.
In some preferred embodiments of the invention, the raw data comprises a raw training set; a step of training a base predictive model based on raw data, comprising: determining a migration training set based on a pre-established migration function and an original training set; combining the migration training set and the original training set to serve as a target training set; and training a basic prediction model based on the target training set.
In some preferred embodiments of the present invention, the step of training the base prediction model based on the raw data comprises: determining a first power load characteristic based on the raw data; determining a second power load characteristic based on the first power load characteristic; and determining a third power load characteristic based on the second power load characteristic; wherein the first power load characteristic is used to characterize a feature subset that meets a preset size; the second power load characteristic is used to characterize the mapping of the first power load characteristic to the hidden layer output by the encoder; the third power load characteristic is used to characterize a coefficient matrix determined based on a preset base and the second power load characteristic; and the prediction label is calculated by the gated recurrent unit based on the first power load characteristic, the second power load characteristic, and the third power load characteristic.
In some preferred embodiments of the present invention, the first objective function is $f_1=\frac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-y_t\right)^2$, wherein $f_1$ is the first objective function, $\hat{y}_t$ is the prediction label, $y_t$ is the target label, $\nabla_{\theta}L_k$ is the power-load-type objective gradient function used in pre-training, and $N$ is the number of predicted time steps. The second objective function is $f_2=\frac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-\bar{\hat{y}}\right)^2$, wherein $f_2$ is the second objective function, $\hat{y}_t$ is the prediction label, and $\bar{\hat{y}}$ is the average of all prediction results. The third objective function is $f_3=\sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-\bar{\hat{y}}\right)^2}\,/\,\bar{\hat{y}}$, wherein $f_3$ is the third objective function, $\hat{y}_t$ is the prediction label, $\bar{\hat{y}}$ is the average of all prediction results, and $N$ is the number of predicted time steps.
In some preferred embodiments of the present invention, after the step of using the trained base prediction model as the power load prediction model, the method further comprises: acquiring load data to be predicted; and inputting the load data to be predicted into the electric load prediction model, and outputting a prediction label of the load data to be predicted.
In a second aspect, the present invention provides a power load prediction model processing apparatus, including: the original data acquisition module is used for acquiring original data; the original data comprises load data and a target label corresponding to the load data; the model training module is used for training a basic prediction model based on the original data; wherein the base prediction model comprises: a first objective function, a second objective function, and a third objective function; the first objective function is used for representing the difference between the predictive label obtained by inputting the load data into the basic predictive model and the target label; the second objective function is used for representing the difference of adjacent prediction labels; the third objective function is used for representing the difference between the predictive label and the average value of the predictive labels; and the model determining module is used for taking the trained basic prediction model as a power load prediction model.
In a third aspect, the present invention provides an electronic device, including a processor and a memory, the memory storing computer executable instructions executable by the processor, the processor executing the computer executable instructions to implement the power load prediction model processing method of any one of the above.
In a fourth aspect, the present invention provides a storage medium storing computer executable instructions that, when invoked and executed by a processor, cause the processor to implement the power load prediction model processing method of any one of the above.
The embodiment of the invention has the following beneficial effects:
the invention provides a power load prediction model processing method, a device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring original data; the original data comprises load data and a target label corresponding to the load data; training a basic prediction model based on the raw data; wherein the base prediction model comprises: a first objective function, a second objective function, and a third objective function; the first objective function is used for representing the difference between the predictive label obtained by inputting the load data into the basic predictive model and the target label; the second objective function is used for representing the difference of adjacent prediction labels; the third objective function is used for representing the difference between the predictive label and the average value of the predictive labels; taking the trained basic prediction model as a power load prediction model; through designing the power load prediction model, the accuracy, smoothness and stability of the prediction result are optimized, so that the prediction result is close to a true value, and the user experience is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a power load prediction model processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another power load prediction model processing method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a power load prediction model processing device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Icon: 310-an original data acquisition module; 320-a model training module; 330-a model determination module; 400-memory; 401-a processor; 402-bus; 403-communication interface.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that, directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., are directions or positional relationships based on those shown in the drawings, or are directions or positional relationships conventionally put in use of the inventive product, are merely for convenience of describing the present invention and simplifying the description, and are not indicative or implying that the apparatus or element to be referred to must have a specific direction, be constructed and operated in a specific direction, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Furthermore, the terms "horizontal," "vertical," "overhang," and the like do not denote a requirement that the component be absolutely horizontal or overhang, but rather may be slightly inclined. As "horizontal" merely means that its direction is more horizontal than "vertical", and does not mean that the structure must be perfectly horizontal, but may be slightly inclined.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
With the continued development of society and accelerating urbanization, power demand keeps growing, and the stable operation and efficient utilization of power systems are becoming ever more important. Power load prediction has therefore become a key element of power system scheduling and economic operation.
Power load prediction refers to predicting the power demand over a future period based on historical load data, weather data, and other relevant factors. The accuracy of the prediction directly affects the scheduling and operating efficiency of the power system, and the smoothness of the prediction affects the stability of the power system. However, the electrical load is affected by a variety of factors, such as weather changes, user behavior, and socioeconomic activity, whose uncertainty makes prediction complex and difficult. At the same time, electrical load data usually exhibits strong time-series characteristics, which requires the prediction model to capture and exploit them effectively. In addition, actual power system data often suffers from problems such as incomplete data and abnormal values, and therefore requires effective preprocessing.
The invention patent CN202211460794.0 provides a power load prediction method, a device, and terminal equipment, wherein the method comprises: acquiring encrypted electricity consumption data of a user client as encrypted training data; training a prediction model with the encrypted training data to obtain an initial prediction model and the encrypted model parameters of the initial prediction model; feeding the encrypted model parameters back to the user client so that the user client decrypts them; obtaining the updated model parameters output by the user client after decrypting the encrypted model parameters; adjusting the initial prediction model according to the updated model parameters to obtain a power load prediction model; and predicting the power load condition of the user client with the power load prediction model. That invention can overcome the problem that the data used in the deep learning and machine learning stages of power load prediction methods lack privacy protection measures and risk leaking user privacy.
The above patent, while protecting the private data of the user, still has the following problems to be further solved:
1. Accuracy and smoothness cannot both be taken into account: conventional power load prediction methods mainly focus on prediction accuracy and often neglect the smoothness of the prediction results. However, in actual power system operation, the smoothness of the prediction results is also very important, because the power system needs to remain stable and avoid the problems caused by sudden load changes.
2. The data preprocessing is insufficient: the actual data of the power system may have problems such as abnormal values, missing values, and the like. These problems may affect the performance of the predictive model if no effective data preprocessing is performed.
3. The number of training samples is insufficient: in practice, the amount of load data available for training is often insufficient for various reasons, which may result in either over-fitting or under-fitting of the predictive model.
4. The feature selection and extraction are insufficient: the power load data includes a number of characteristics, such as historical load data, weather data, and the like. How to select and extract the effective features is a key factor affecting the predictive effect. However, conventional prediction methods may not fully utilize these features or have certain limitations in feature selection and extraction.
5. Selection and optimization of prediction model: many conventional prediction methods use a single prediction model, such as a time series model, a support vector machine, and the like. These models may not handle the complex problem of power load prediction well. In addition, how to optimize the parameters of the model to adapt it to the power load prediction task is also a challenge.
In view of the above, the invention provides a power load prediction model processing method, a device, an electronic device and a storage medium, which optimize the whole power load flow, enrich and accurately process training data, optimize a training model, enable a prediction result to be more accurate, smooth and stable, and promote user experience.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Example 1
The embodiment of the invention provides a power load prediction model processing method, which is described in detail with reference to a flowchart of the power load prediction model processing method provided by the embodiment of the invention shown in fig. 1, and comprises the following steps:
Step S102, obtaining original data; the original data comprises load data and a target label corresponding to the load data;
specifically, the raw data of the embodiment of the invention mainly originate from load data of an actual power system, and the load data of the power system generally comprise power consumption, weather conditions and user behaviors, and the data are collected from a plurality of power system devices, weather forecast systems and user feedback systems.
The load data is in the form of time-series data, and each row represents the power consumption data for a specific period of time. Specifically, each row of data includes multiple fields such as a timestamp, the power consumption, the air temperature, the humidity, and the user behavior. The timestamp is the point in time at which the data was collected, and the user behavior refers to the pattern or habit of electricity use, for example: household electricity — when household members use high-power appliances (such as washing machines or air conditioners), when they are at home, and when they leave; commercial electricity — when commercial buildings open, when they close, and their peak usage periods; industrial electricity — production cycles, shift changes, peak production periods, and so on. The user behavior is processed with label encoding, for example "at home", "at work", and "at school" are encoded as "1", "2", and "3" in numerical format, respectively.
Let the original data set be $D=\left\{\left(t_i,p_i,T_i,h_i,u_i\right)\right\}_{i=1}^{n}$, wherein $t_i$ is the timestamp, $p_i$ is the power consumption, $T_i$ is the air temperature, $h_i$ is the humidity, and $u_i$ is the user behavior. For the power load prediction problem, the power consumption in a future period is taken as the label; for example, the power consumption of the next hour is taken as the label of the current time point.
Let $y_i$ denote the label of the $i$-th sample; it can be obtained by the following formula (1):
$y_i=p_{i+1}$ (1)
wherein $p_{i+1}$ is the power consumption at the next time.
For example, let the current time be $t$, the power consumption be $p_t$, the air temperature be $T_t$, the humidity be $h_t$, and the user behavior be $u_t$; the corresponding data instance is $x_t=\left(t,p_t,T_t,h_t,u_t\right)$ and its label is $y_t=p_{t+1}$. The pair $\left(x_t,y_t\right)$ is then used as a training sample of the model.
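To make the sample construction concrete, the following minimal Python sketch (not part of the patent; the function name build_samples and the toy values are illustrative assumptions) turns an aligned time series into (x_t, y_t) pairs in which the label is the power consumption of the next time step, as in formula (1):

    import numpy as np

    def build_samples(timestamps, power, temperature, humidity, behavior):
        """Turn aligned time series into (x_t, y_t) pairs, where the label y_t
        is the power consumption at the next time step (formula (1))."""
        X, y = [], []
        for i in range(len(power) - 1):          # the last point has no "next hour" label
            X.append([timestamps[i], power[i], temperature[i],
                      humidity[i], behavior[i]])
            y.append(power[i + 1])               # y_i = p_{i+1}
        return np.asarray(X, dtype=float), np.asarray(y, dtype=float)

    # toy example: 5 hourly records; behavior already label-encoded (1 = at home, ...)
    ts   = [0, 1, 2, 3, 4]
    p    = [3.2, 3.5, 4.1, 3.9, 4.4]
    temp = [21.0, 21.5, 22.0, 22.3, 22.1]
    hum  = [0.55, 0.56, 0.58, 0.57, 0.60]
    beh  = [1, 1, 2, 2, 1]
    X, y = build_samples(ts, p, temp, hum, beh)
    print(X.shape, y)   # (4, 5) [3.5 4.1 3.9 4.4]

In a real deployment the fields would come from the power system devices, weather forecast systems, and user feedback systems described above.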
Step S104, training a basic prediction model based on the original data; wherein the base prediction model comprises: a first objective function, a second objective function, and a third objective function; the first objective function is used for representing the difference between the predictive label obtained by inputting the load data into the basic predictive model and the target label; the second objective function is used for representing the difference of adjacent prediction labels; the third objective function is used for representing the difference between the predictive label and the average value of the predictive labels;
Further, the first objective function is $f_1=\frac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-y_t\right)^2$, wherein $f_1$ is the first objective function, $\hat{y}_t$ is the prediction label, $y_t$ is the target label, $\nabla_{\theta}L_k$ is the power-load-type objective gradient function (the gradient of the per-load-type pre-training objective $L_k$ of formula (9) with respect to the model parameters $\theta$), and $N$ is the number of predicted time steps. The second objective function is $f_2=\frac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-\bar{\hat{y}}\right)^2$, wherein $f_2$ is the second objective function, $\hat{y}_t$ is the prediction label, and $\bar{\hat{y}}$ is the average of all prediction results. The third objective function is $f_3=\sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-\bar{\hat{y}}\right)^2}\,/\,\bar{\hat{y}}$, wherein $f_3$ is the third objective function, $\hat{y}_t$ is the prediction label, $\bar{\hat{y}}$ is the average of all prediction results, and $N$ is the number of predicted time steps.
Specifically, the embodiment of the invention adopts an improved gated recurrent unit (GRU) model as the basic model for processing the data.
Let the input vector be $x_t$ and the hidden state at the previous moment be $h_{t-1}$; the update procedure of the GRU can then be expressed as the following steps:
Update gate: the update gate controls how much information of the previous hidden state will be used. The calculation formula (2) is as follows:
$z_t=\sigma\left(W_z x_t+U_z h_{t-1}+b_z\right)$ (2)
wherein $W_z$ is the first update parameter, $U_z$ is the second update parameter, $b_z$ is the third update parameter, and $\sigma$ is the sigmoid activation function.
Reset gate: the reset gate determines how much of the previous hidden state information will be used in calculating the new hidden state. The calculation formula (3) is as follows:
$r_t=\sigma\left(W_r x_t+U_r h_{t-1}+b_r\right)$ (3)
wherein $W_r$ is the first reset parameter, $U_r$ is the second reset parameter, $b_r$ is the third reset parameter, and $\sigma$ is the sigmoid activation function.
New memory content: a new memory is calculated based on the input and the reset gate. The calculation formula (4) is as follows:
$\tilde{h}_t=\tanh\left(W_h x_t+U_h\left(r_t\odot h_{t-1}\right)+b_h\right)$ (4)
wherein $W_h$ is the first memory parameter, $U_h$ is the second memory parameter, $b_h$ is the third memory parameter, and $\tanh$ is the hyperbolic tangent activation function.
Hidden state: the hidden state is updated based on the update gate and the new memory content. The calculation formula (5) is as follows:
$h_t=a_t\odot\left(\left(1-z_t\right)\odot h_{t-1}+z_t\odot\tilde{h}_t\right)$ (5)
wherein $a_t$ is the attention weight, whose purpose is to enhance the ability of the GRU model to handle dependencies in long sequences; its calculation formula (6) is:
$a_t=\mathrm{softmax}\left(e_t\right)$ (6)
wherein
$e_t=v_a^{\top}\tanh\left(W_a x_t+U_a h_{t-1}\right)$ (7)
wherein $W_a$ is the first attention parameter, $U_a$ is the second attention parameter, and $v_a$ is the third attention parameter; $e_t$ represents the importance of the current input $x_t$ in the sequence.
Further, $h_t$ is used as the hidden state at the next moment and is used to calculate the predicted power load.
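For illustration only, the following Python sketch performs one step of the attention-weighted GRU update of formulas (2)-(7). The weight names (Wz, Uz, ...), the toy dimensions, and the replacement of the sequence-level softmax of formula (6) by a per-step sigmoid are assumptions made to keep the example self-contained; it is a sketch of the idea, not the patent's exact construction.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_attention_step(x_t, h_prev, P):
        """One step of the attention-weighted GRU update (formulas (2)-(5)).
        P holds the weights; the key names are illustrative assumptions."""
        z = sigmoid(P["Wz"] @ x_t + P["Uz"] @ h_prev + P["bz"])              # update gate (2)
        r = sigmoid(P["Wr"] @ x_t + P["Ur"] @ h_prev + P["br"])              # reset gate (3)
        h_tilde = np.tanh(P["Wh"] @ x_t + P["Uh"] @ (r * h_prev) + P["bh"])  # new memory (4)
        e_t = P["va"] @ np.tanh(P["Wa"] @ x_t + P["Ua"] @ h_prev)            # attention score (7)
        a_t = sigmoid(e_t)                       # per-step stand-in for the softmax of (6)
        return a_t * ((1.0 - z) * h_prev + z * h_tilde)                      # hidden state (5)

    # toy dimensions: 5 input features, 8 hidden units
    rng = np.random.default_rng(0)
    d, m = 5, 8
    P = {k: rng.normal(scale=0.1, size=(m, d)) for k in ("Wz", "Wr", "Wh", "Wa")}
    P.update({k: rng.normal(scale=0.1, size=(m, m)) for k in ("Uz", "Ur", "Uh", "Ua")})
    P.update({k: np.zeros(m) for k in ("bz", "br", "bh")})
    P["va"] = rng.normal(scale=0.1, size=m)
    h = np.zeros(m)
    for x_t in rng.normal(size=(4, d)):          # a short sequence of 4 time steps
        h = gru_attention_step(x_t, h, P)
    print(h.shape)   # (8,)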
In the power load prediction task, the following two objectives are mainly focused on: accuracy of prediction and smoothness of prediction results. Thus, two objective functions are defined:
Accuracy of prediction: the mean square error is used to measure the accuracy of the prediction. Let the predicted load be $\hat{y}_t$ and the actual load be $y_t$; the first objective function is then:
$f_1=\dfrac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-y_t\right)^2$ (8)
wherein $N$ is the number of predicted time steps, and $\nabla_{\theta}L_k$ denotes the power-load-type objective gradient function, i.e. the gradient of the power-load-type objective function $L_k$ with respect to the model parameters $\theta$.
Specifically, for the $k$-th load type, the pre-training objective function is defined as:
$L_k=\dfrac{1}{n_k}\sum_{i=1}^{n_k}\left(f_{\mathrm{GRU}}\left(x_i\right)-y_i\right)^2$ (9)
wherein $n_k$ is the number of samples of type $k$, $f_{\mathrm{GRU}}\left(x_i\right)$ is the prediction of the pre-trained GRU model on sample $x_i$, and $y_i$ is the actual load of sample $x_i$.
Smoothness of the prediction results: the prediction results should be as smooth as possible, i.e. the prediction results of adjacent time steps should differ little. The variance of the predictions is used as the second objective function:
$f_2=\dfrac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-\bar{\hat{y}}\right)^2$ (10)
wherein $\bar{\hat{y}}$ is the average of all predictions.
Further, the non-dominated sorting genetic algorithm II (NSGA-II, Nondominated Sorting Genetic Algorithm II) is selected to solve this multi-objective optimization problem.
In the traditional NSGA-II algorithm, the population is first ranked using non-dominated sorting. A crowding comparison operator is then used to select among individuals of the same rank. In particular, the crowding comparison operator tends to prefer individuals that are less crowded (i.e. have fewer neighbours around them) in the objective-function space.
However, in the power load prediction task, the stability of the prediction results is also very important. Therefore, the embodiment of the invention introduces a new objective function $f_3$ to measure the stability of the prediction results. The embodiment of the invention expects the stability to be as high as possible; the improved NSGA-II algorithm therefore searches for solutions $\theta$ that make $f_1$ and $f_2$ as small as possible while keeping the stability as high as possible, i.e. keeping $f_3$ as small as possible.
$f_3$ is defined as the standard deviation of the prediction results divided by the mean of the prediction results, i.e.:
$f_3=\sqrt{\dfrac{1}{N}\sum_{t=1}^{N}\left(\hat{y}_t-\bar{\hat{y}}\right)^2}\,\Big/\,\bar{\hat{y}}$ (11)
wherein $\hat{y}_t$ is the prediction result, $\bar{\hat{y}}$ is the mean of the prediction results, and $N$ is the number of predicted time steps.
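For reference, the three objective functions (8), (10), and (11) can be evaluated for a candidate solution as in the Python sketch below (illustrative only; the function name objectives is an assumption):

    import numpy as np

    def objectives(y_pred, y_true):
        """Return (f1, f2, f3) for one candidate solution:
        f1 = mean squared error (8), f2 = variance of the predictions (10),
        f3 = coefficient of variation, std / mean (11)."""
        y_pred = np.asarray(y_pred, dtype=float)
        y_true = np.asarray(y_true, dtype=float)
        f1 = np.mean((y_pred - y_true) ** 2)
        f2 = np.var(y_pred)
        f3 = np.std(y_pred) / np.mean(y_pred)
        return f1, f2, f3

    print(objectives([3.4, 4.0, 3.8, 4.3], [3.5, 4.1, 3.9, 4.4]))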
Specifically, the detailed steps of the improved NSGA-II algorithm are as follows (A1 to A6):
Step A1, initialization: randomly initialize a population containing a number of individual solutions; each solution contains the parameters of the GRU model.
Step A2, sorting: the population is ranked using non-dominated sorting.
Step A3, selection: solutions are selected using an improved crowding comparison operator. The improved operator considers not only the crowding degree of a solution but also its $f_3$ value (a sketch of this operator is given after step A6).
Step A4, crossover and mutation: crossover and mutation operations are used to generate new solutions.
Step A5, updating: the population is updated with the newly generated solutions.
Step A6, repetition: the above steps are repeated until a termination condition is met; the termination condition is that the number of iterations reaches a preset threshold, which can generally be set to 500.
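The patent does not spell out the improved crowding comparison operator in code; the following Python sketch is one possible reading, in which individuals are compared by non-domination rank, then by crowding distance, with ties broken by the stability objective $f_3$ of formula (11). The function name and dictionary fields are assumptions.

    def crowded_compare(a, b):
        """Return True if individual a should be preferred over b.
        Each individual is a dict with keys 'rank' (non-domination rank),
        'distance' (crowding distance) and 'f3' (stability objective (11)).
        Lower rank wins; within a rank, larger crowding distance wins; ties are
        broken by the smaller f3 (more stable predictions) -- an illustrative
        interpretation of the improved operator, not the patent's exact rule."""
        if a["rank"] != b["rank"]:
            return a["rank"] < b["rank"]
        if a["distance"] != b["distance"]:
            return a["distance"] > b["distance"]
        return a["f3"] < b["f3"]

    # usage: pick the better of two candidates during tournament selection
    cand1 = {"rank": 1, "distance": 0.8, "f3": 0.12}
    cand2 = {"rank": 1, "distance": 0.8, "f3": 0.07}
    best = cand1 if crowded_compare(cand1, cand2) else cand2
    print(best["f3"])   # 0.07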
Further, the model training in the embodiment of the present invention adopts an early-stopping strategy: when the change of all three objective functions remains smaller than a preset change threshold (which can be set to 0.01%) for a preset number of consecutive iterations (which can be set to 50 or 100), the training is terminated early.
Step S106, taking the trained basic prediction model as a power load prediction model;
specifically, after the iteration is completed, the optimal solution is selected as a parameter of the GRU model, namely, the GRU model training is completed.
The embodiment of the invention provides a power load prediction model processing method, the method comprising: acquiring original data, the original data comprising load data and target labels corresponding to the load data; training a basic prediction model based on the original data, wherein the basic prediction model comprises a first objective function, a second objective function, and a third objective function; the first objective function characterizes the difference between the prediction labels obtained by inputting the load data into the basic prediction model and the target labels; the second objective function characterizes the difference between adjacent prediction labels; the third objective function characterizes the difference between the prediction labels and their average value; and taking the trained basic prediction model as the power load prediction model. By solving the two objectives of prediction accuracy and prediction smoothness with the non-dominated sorting genetic algorithm II (NSGA-II), accurate power load prediction is achieved; the improved GRU model strengthens the model's ability to process time-series data; and the additional objective function in the NSGA-II algorithm that accounts for the stability of the prediction results makes the predictions more stable and the model more robust, so that the prediction results approach the true values and user experience is improved.
Example two
On the basis of the above embodiment, the embodiment of the present invention provides another power load prediction model processing method, focusing on describing a processing procedure of training data, referring to a flowchart of another power load prediction model processing method provided by the embodiment of the present invention shown in fig. 2, the method includes:
step S202, obtaining original data; the original data comprises load data and a target label corresponding to the load data; the step S102 is the same as the step S of the embodiment, and will not be described here again.
Step S204, preprocessing the original data; wherein the pretreatment comprises at least one of the following: cleaning treatment and missing value treatment;
specifically, according to the data obtained in step S202, the data format isThe following pretreatment steps are performed.
First, data cleaning is performed. It will be appreciated that there may be some outliers in the load data of the power system, such as sudden power consumption decreases due to equipment failure. For such outliers, cleaning is performed by way of setting a threshold. If the power consumption at a certain time is too large, and the power consumption at the previous and subsequent times exceeds a preset threshold, the power consumption is regarded as an abnormal value and replaced by an average value of adjacent time points.
Specifically, the threshold is set asThe formula for data cleansing is expressed as follows:
(12)
and secondly, performing missing value processing. It can be appreciated that the load data may have missing values during the process of collection, and the data is processed by using a data linear interpolation method.
Specifically, let $\tilde{p}$ denote the power consumption after cleaning. Any missing value $\tilde{p}_i$ is obtained by:
$\tilde{p}_i=\dfrac{\tilde{p}_{i-1}+\tilde{p}_{i+1}}{2}$ (13)
wherein $\tilde{p}_{i-1}$ and $\tilde{p}_{i+1}$ respectively denote the power consumption at the time before and the time after the missing value.
Step S206, carrying out dimension standardization processing on the original data;
specifically, to eliminate the effects of different feature amounts and ranges, the model is made more stable and is processed using a standard Z-score (standardized variable, standard score) normalization approach.
Specifically, it is provided withFor the average of all power consumption, +.>The standard deviation of all the power consumption amounts is expressed as follows:
wherein,for the power consumption before normalization, +.>Is the standardized power consumption.
Furthermore, the air temperature, the humidity, and the user behavior are normalized in the same way as the power consumption; before normalization, the user behavior is first vectorized with the word2vec algorithm.
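The cleaning, interpolation, and normalization steps above can be summarized in the short Python sketch below (illustrative only; the threshold value, the use of NaN to mark missing values, and the function name preprocess_power are assumptions):

    import numpy as np

    def preprocess_power(p, tau=2.0):
        """Outlier cleaning (12), missing-value interpolation (13) and
        Z-score normalization of the power-consumption series p.
        NaN marks a missing value; tau is an assumed cleaning threshold."""
        p = np.asarray(p, dtype=float).copy()
        # formula (12): replace outliers by the mean of their neighbours
        for i in range(1, len(p) - 1):
            if abs(p[i] - p[i - 1]) > tau and abs(p[i] - p[i + 1]) > tau:
                p[i] = (p[i - 1] + p[i + 1]) / 2.0
        # formula (13): fill missing values from the neighbouring points
        for i in range(1, len(p) - 1):
            if np.isnan(p[i]):
                p[i] = (p[i - 1] + p[i + 1]) / 2.0
        # Z-score normalization
        return (p - np.nanmean(p)) / np.nanstd(p)

    print(preprocess_power([3.2, 3.5, 9.9, 3.9, np.nan, 4.1, 4.0]))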
Step S208, determining a migration training set based on a pre-established migration function and an original training set;
step S210, combining the migration training set and the original training set to serve as a target training set;
step S212, training a basic prediction model based on a target training set;
specifically, in the power load prediction task, the number of training samples is often insufficient, so the invention provides an Easy Ensemble (an integrated learning algorithm) algorithm based on domain adaptive learning to expand training data to obtain a target training set.
Specifically, let the preprocessed training data set be $D=\left\{\left(x_i,y_i\right)\right\}_{i=1}^{n}$, wherein $x_i$ represents a sample, $y_i$ represents the label of the sample, and $n$ represents the total number of samples.
The traditional Easy Ensemble algorithm first divides the training data set $D$ into a positive sample set $P$ and a negative sample set $N$. It then performs $T$ rounds of random down-sampling on the negative sample set $N$ to generate $T$ subsets $N_1,\dots,N_T$, the number of samples in each subset being equal to the number of samples in the positive sample set $P$. Further, $P$ together with each $N_i$ is used to train $T$ base classifiers. In this way, even if the original training data are imbalanced, the numbers of positive and negative samples are balanced when each base classifier is trained, which prevents the model from being biased toward the majority class.
The Easy Ensemble algorithm based on domain adaptive learning provided by the invention no longer down-samples the negative sample set randomly; instead, it introduces the idea of domain adaptive learning: each subset $N_i$ is regarded as a source domain and the positive sample set $P$ as a target domain, and sample selection and migration are performed on the source domain so that it becomes closer to the target domain.
Specifically, a migration function $g$ is first defined, whose role is to map samples of the source domain (i.e. a subset of the negative sample set) to the target domain (i.e. the positive sample set). Then every sample $x$ in each $N_i$ can be converted by the migration function $g$ into a new sample $g\left(x\right)$.
Ideally, $g$ makes the converted negative sample set $g\left(N_i\right)$ and the positive sample set $P$ as close as possible in data distribution. The invention introduces the Maximum Mean Discrepancy (MMD) to measure the distribution difference between the source domain and the target domain, and selects the optimal migration function by minimizing the MMD.
Specifically, assume that $\phi\left(x\right)$ is the feature map of a sample $x$; the MMD between $g\left(N_i\right)$ and $P$ is defined as follows:
$\mathrm{MMD}\left(g\left(N_i\right),P\right)=\left\|\dfrac{1}{n_s}\sum_{x\in N_i}\phi\left(g\left(x\right)\right)-\dfrac{1}{n_t}\sum_{x^{\prime}\in P}\phi\left(x^{\prime}\right)\right\|$
wherein $x$ denotes a sample of $N_i$, $x^{\prime}$ denotes a sample of $P$, and $n_s$ and $n_t$ are respectively the numbers of samples of $N_i$ and $P$. By minimizing the MMD, the optimal migration function $g^{*}$ can be found; a new training data set is then generated as the migration training set and combined with the original training data set to form the expanded target training data set.
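For illustration, the MMD between a migrated source subset and the target set can be estimated as in the following Python sketch; the explicit feature map phi (identity by default) and the function name mmd are assumptions, and in practice a kernel embedding may be used instead:

    import numpy as np

    def mmd(source, target, phi=lambda x: x):
        """Empirical Maximum Mean Discrepancy between two sample sets,
        using an explicit feature map phi (identity by default)."""
        s = np.mean([phi(x) for x in source], axis=0)   # mean embedding of g(N_i)
        t = np.mean([phi(x) for x in target], axis=0)   # mean embedding of P
        return np.linalg.norm(s - t)

    rng = np.random.default_rng(0)
    migrated_neg = rng.normal(loc=0.5, size=(30, 4))    # g(N_i): migrated source samples
    positives    = rng.normal(loc=0.0, size=(20, 4))    # P: target-domain samples
    print(mmd(migrated_neg, positives))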
Step S214, determining a first power load characteristic based on the raw data; determining a second power load characteristic based on the first power load characteristic; and determining a third power load characteristic based on the second power load characteristic; wherein the first power load characteristic is used to characterize a feature subset that meets a preset size; the second power load characteristic is used to characterize the mapping of the first power load characteristic to the hidden layer output by the encoder; the third power load characteristic is used to characterize a coefficient matrix determined based on a preset base and the second power load characteristic; and the prediction label is calculated by the gated recurrent unit based on the first power load characteristic, the second power load characteristic, and the third power load characteristic;
specifically, the target of feature extraction may be an original training set or an expanded target training data set, and key information that is helpful for the task is acquired from the target training set is described below.
Let the expanded target training data set be $X\in\mathbb{R}^{n\times d}$, wherein $n$ is the number of samples and $d$ is the number of features. The goal of feature selection is to select a feature subset $S$ such that training on $S$ gives the best learning effect.
Specifically, the invention adopts a feature selection method based on maximum relevance and minimum redundancy (mRMR). The mRMR algorithm is a method for evaluating the importance of features that considers both the relevance of a feature to the target and the redundancy between features. The objective function of the mRMR algorithm is:
$\max_{S}\;\dfrac{1}{|S|}\sum_{x_j\in S} I\left(x_j;y\right)-\dfrac{1}{|S|^{2}}\sum_{x_j\in S}\sum_{x_k\in S} I\left(x_j;x_k\right)$ (14)
wherein $I\left(x_j;y\right)$ is the mutual information between feature $x_j$ and the target $y$, which measures their relevance; $I\left(x_j;x_k\right)$ is the mutual information between feature $x_j$ and another feature $x_k$ in the feature subset $S$, whose average measures the redundancy; and $|S|$ is the size of the feature subset $S$.
Further, the mRMR objective is used as the optimization target and solved by a greedy algorithm. Specifically, the method comprises the following steps B1 to B3:
Step B1, initialize the feature subset $S=\varnothing$.
Step B2, for each feature $x_j$ in the remaining feature set, calculate the increase $\Delta_j$ of the objective function after adding it to $S$:
$\Delta_j=I\left(x_j;y\right)-\dfrac{1}{|S|}\sum_{x_k\in S} I\left(x_j;x_k\right)$ (15)
wherein $x_j$ is the candidate feature and $x_k$ is the $k$-th feature already in the feature subset $S$.
Step B3, select the feature with the largest increment $\Delta_j$ and add it to the feature subset $S$, i.e. $S\leftarrow S\cup\left\{x_j\right\}$.
Steps B2 and B3 are repeated until the size of the feature subset $S$ reaches a preset value, at which point the iteration stops; the feature subset obtained at the end of the iteration is taken as the first power load characteristic $X_1$.
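A compact Python sketch of this greedy selection is given below (illustrative; it uses scikit-learn's mutual-information estimator, which is an implementation assumption rather than something stated in the patent):

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def mrmr_select(X, y, k):
        """Greedy max-relevance min-redundancy selection of k feature indices
        (steps B1-B3, increment of formula (15))."""
        n_features = X.shape[1]
        relevance = mutual_info_regression(X, y, random_state=0)     # I(x_j; y)
        selected, remaining = [], list(range(n_features))
        while len(selected) < k and remaining:
            best_j, best_gain = None, -np.inf
            for j in remaining:
                if selected:
                    redundancy = np.mean([
                        mutual_info_regression(X[:, [c]], X[:, j], random_state=0)[0]
                        for c in selected])
                else:
                    redundancy = 0.0
                gain = relevance[j] - redundancy                      # formula (15)
                if gain > best_gain:
                    best_j, best_gain = j, gain
            selected.append(best_j)
            remaining.remove(best_j)
        return selected

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    y = X[:, 0] * 2.0 + X[:, 3] + rng.normal(scale=0.1, size=200)
    print(mrmr_select(X, y, k=3))          # indices of the selected features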
Further, a Riemann self-encoding network is used for feature extraction. A Riemann self-coding network is a neural network capable of learning the structure of a data manifold. A typical self-encoding network maps the input data to a hidden layer and then from the hidden layer back to the original space. The Riemann self-coding network is characterized in that it maps the input data onto a Riemann manifold and then from the manifold back to the original space. This mapping approach can better capture the inherent structure of the data and thus can better perform feature extraction.
Specifically, the first power load characteristic $X_1$ is mapped to the hidden layer by the encoder $f\left(\cdot\right)$ and then reconstructed back into the original space by the decoder $g\left(\cdot\right)$ to form the reconstructed data $\hat{X}_1$, which can be expressed as:
$\hat{X}_1=g\left(f\left(X_1\right)\right)$ (15)
further, a loss function is definedTo measure the error between the raw data and the reconstructed data. The loss function is the average of the reconstruction errors for all sample points:
(16)
wherein,is the number of samples, +.>And->The +.f. respectively representing the original data and the reconstructed data>Samples.
Further, the loss function is optimized through iterative training. During the iterative training, the learning rate is adjusted adaptively. Specifically, let $\eta_t$ be the learning rate used in the $t$-th training round and $L_t$ be the loss function value after the $t$-th round; the new learning rate $\eta_{t+1}$ is adjusted as follows:
$\eta_{t+1}=\eta_t\left(1+\beta\cdot\dfrac{L_t-L_{t-1}}{L_{t-1}}\right)$ (17)
wherein $\beta$ is a predefined constant used to control the speed of the learning-rate adjustment. If the loss function value after the $t$-th round is larger than that after the $(t-1)$-th round, the learning rate is increased; otherwise, it is decreased.
Further, by optimizing the loss function, the parameters of the encoder and the decoder can be obtained, and the output of the encoder is taken as the second power load characteristic $X_2$.
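As an illustration (assuming PyTorch is available), the reconstruction loss (16), the adaptive learning-rate rule (17), and the use of the encoder output as the second power load characteristic can be sketched as below; the layer sizes, the constant beta, and the class name RiemannAE are assumptions, and the manifold-specific mapping of the Riemann self-coding network is not reproduced here:

    import torch
    import torch.nn as nn

    class RiemannAE(nn.Module):
        # plain encoder/decoder stand-in for the Riemann self-coding network
        def __init__(self, d_in, d_hidden):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Tanh())
            self.decoder = nn.Linear(d_hidden, d_in)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    X1 = torch.randn(256, 10)                 # first power load characteristic (toy data)
    model = RiemannAE(10, 4)
    lr, beta, prev_loss = 1e-2, 0.1, None
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for epoch in range(50):
        opt.zero_grad()
        loss = ((X1 - model(X1)) ** 2).sum(dim=1).mean()   # reconstruction loss (16)
        loss.backward()
        opt.step()
        if prev_loss is not None:                          # learning-rate rule (17)
            lr = lr * (1 + beta * (loss.item() - prev_loss) / prev_loss)
            for group in opt.param_groups:
                group["lr"] = lr
        prev_loss = loss.item()

    X2 = model.encoder(X1).detach()           # second power load characteristic
    print(X2.shape)                           # torch.Size([256, 4])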
Further, feature extraction is performed on the second power load characteristic by dictionary-based learning. Given a group of bases $B=\left\{b_1,\dots,b_m\right\}$, the input data $X_2$ can be expressed as a linear combination of the bases:
$X_2=BC$ (18)
wherein $C$ is the coefficient matrix. A loss function $L_{dict}$ is then defined so that the sum of the reconstruction error and the norm of the coefficient matrix $C$ is minimized:
$L_{dict}=\left\|X_2-BC\right\|_{F}^{2}+\lambda\left\|C\right\|_{1}$ (19)
wherein $\left\|\cdot\right\|_{F}$ denotes the Frobenius norm, and $\lambda$ is a hyper-parameter that controls the weight of the sparsity term.
Further, by optimizing the loss function, the base $B$ and the coefficient matrix $C$ can be obtained. The coefficient matrix $C$ is then taken as the third power load characteristic $X_3$.
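For illustration, the dictionary-learning step can be approximated with scikit-learn as below (an implementation assumption; the patent does not name a library, and scikit-learn's factorization writes the data as code times dictionary, which corresponds to formula (18) up to the ordering of the factors):

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    X2 = np.random.default_rng(0).normal(size=(256, 4))   # second power load characteristic

    # minimize a reconstruction error plus an L1 penalty on the codes, as in formula (19)
    dict_learner = DictionaryLearning(n_components=8, alpha=0.1,
                                      transform_algorithm="lasso_lars",
                                      random_state=0)
    C = dict_learner.fit_transform(X2)    # coefficient matrix -> third characteristic X3
    B = dict_learner.components_          # the learned bases
    print(C.shape, B.shape)               # (256, 8) (8, 4)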
Further, let $w_2$ and $w_3$ respectively denote the weights of the second power load characteristic $X_2$ and the third power load characteristic $X_3$, and let $L_2$ and $L_3$ respectively denote their loss functions; the overall loss function $L$ can then be expressed as:
$L=w_2 L_2+w_3 L_3$ (20)
wherein the initial values of $w_2$ and $w_3$ are both 0.5, satisfying $w_2+w_3=1$.
To adaptively adjust the two weights, a weight adjustment function is defined that adjusts the weights according to the values of the two loss functions:
$w_2=\dfrac{L_2}{L_2+L_3}$ (21)
$w_3=\dfrac{L_3}{L_2+L_3}$ (22)
On this basis, if the loss function value of one of the two methods is larger, its weight is correspondingly increased so that subsequent feature extraction pays more attention to it; otherwise, its weight is reduced.
The subsequent training steps are the same as those in the first embodiment, and will not be described here again.
The embodiment of the invention provides another power load prediction model processing method, which uses preprocessing steps such as data cleaning, missing-value processing, and normalization, together with a feature selection method based on maximum relevance and minimum redundancy and a Riemann self-coding network for feature extraction, so that the features are more representative and discriminative; for training data expansion, the idea of domain adaptive learning is introduced, and the Easy Ensemble algorithm based on domain adaptive learning makes the samples selected and migrated from the source domain closer to the target domain.
Example III
On the basis of the above embodiments, the embodiment of the present invention provides a further power load prediction model processing method, specifically a method of using the trained model. Its main objective is to predict new power load data with the trained, multi-objective-optimized prediction model based on the selected feature subset $S$, and the method comprises: acquiring load data to be predicted; inputting the load data to be predicted into the power load prediction model; and outputting the prediction labels of the load data to be predicted.
Specifically, let $X_{new}=\left\{x_1,x_2,\dots,x_m\right\}$ be the newly collected power load data, wherein $m$ is the number of data instances and each $x_i$ is a $d$-dimensional feature vector whose $j$-th component $x_{i,j}$ is the $j$-th feature.
For prediction, each $x_i$ first needs to go through the feature selection process described above to obtain the corresponding feature sub-vector $x_i^{\prime}$:
$x_i^{\prime}=\left(x_{i,j}\right)_{j\in S}$ (23)
wherein $x_{i,j}$ is the $j$-th feature of $x_i$, and $j\in S$ indicates that the $j$-th feature belongs to the selected feature subset $S$.
Then, each $x_i^{\prime}$ is input into the trained model to obtain the corresponding predicted value $\hat{y}_i$:
$\hat{y}_i=f\left(x_i^{\prime};\theta\right)$ (24)
wherein $f$ is the trained model and $\theta$ denotes the parameters of the model.
Finally, all $\hat{y}_i$ are combined to obtain the prediction for the new power load data:
$\hat{Y}=\left\{\hat{y}_1,\hat{y}_2,\dots,\hat{y}_m\right\}$ (25)
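An end-to-end inference sketch corresponding to formulas (23)-(25) is given below (illustrative Python; selected_idx and trained_model are assumed to come from the training stage described in the earlier embodiments, and the linear stand-in model is purely for demonstration):

    import numpy as np

    def predict_new_load(X_new, selected_idx, trained_model):
        """Apply feature selection (23) and the trained model (24) to new data,
        returning the combined predictions (25)."""
        X_sel = np.asarray(X_new)[:, selected_idx]        # x'_i = (x_{i,j}), j in S
        return np.array([trained_model(x) for x in X_sel])

    # toy stand-in for the trained model: a fixed linear map (assumption)
    weights = np.array([0.6, 0.3, 0.1])
    trained_model = lambda x: float(weights @ x)

    X_new = np.random.default_rng(0).normal(size=(5, 6))  # 5 new samples, 6 raw features
    selected_idx = [0, 2, 5]                              # indices of the selected subset S
    print(predict_new_load(X_new, selected_idx, trained_model))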
according to the power load prediction model processing method, the power load can be accurately predicted by adopting the multi-objective optimization method, so that errors are reduced, the operation efficiency of a power system is improved, an objective function considering the stability of a prediction result is added, the prediction result is more stable, the stable operation of the power system is guaranteed, the adopted data is derived from an actual power system, in a model reasoning stage, a trained model can be directly applied to an actual energy power load prediction task, the practicability is high, and meanwhile, the proposed Easy model algorithm based on domain adaptive learning can enable selected and migrated samples to be closer to a target domain under the condition of unbalanced data, and the self-adaptability of the model is enhanced.
Example IV
On the basis of the foregoing embodiments, an embodiment of the present invention provides a power load prediction model processing apparatus, and referring to a schematic structural diagram of the power load prediction model processing apparatus provided in the embodiment of the present invention shown in fig. 3, the apparatus includes:
an original data acquisition module 310, configured to acquire original data; the original data comprises load data and a target label corresponding to the load data;
a model training module 320 for training a base prediction model based on the raw data; wherein the base prediction model comprises: a first objective function, a second objective function, and a third objective function; the first objective function is used for representing the difference between the predictive label obtained by inputting the load data into the basic predictive model and the target label; the second objective function is used for representing the difference of adjacent prediction labels; the third objective function is used for representing the difference between the predictive label and the average value of the predictive labels;
the model determining module 330 is configured to take the trained basic prediction model as the power load prediction model.
In the following preferred embodiments of the present invention, the apparatus further comprises: the preprocessing module is used for preprocessing the original data; wherein the pretreatment comprises at least one of the following: a cleaning process and a missing value process.
In the following preferred embodiments of the present invention, the apparatus further comprises: and the standard processing module is used for carrying out dimension standardization processing on the original data.
In the following preferred embodiment of the present invention, the raw data comprises a raw training set; a model training module 320 for determining a migration training set based on a pre-established migration function and an original training set; combining the migration training set and the original training set to serve as a target training set; and training a basic prediction model based on the target training set.
In the following preferred embodiment of the present invention, the model training module 320 is configured to determine a first power load characteristic based on the raw data; determine a second power load characteristic based on the first power load characteristic; and determine a third power load characteristic based on the second power load characteristic; wherein the first power load characteristic is used to characterize a feature subset that meets a preset size; the second power load characteristic is used to characterize the mapping of the first power load characteristic to the hidden layer output by the encoder; the third power load characteristic is used to characterize a coefficient matrix determined based on a preset base and the second power load characteristic; and the prediction label is calculated by the gated recurrent unit based on the first power load characteristic, the second power load characteristic, and the third power load characteristic.
In the following preferred embodiments of the present invention, the apparatus further comprises: the prediction module is used for obtaining load data to be predicted; and inputting the load data to be predicted into the electric load prediction model, and outputting a prediction label of the load data to be predicted.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described power load prediction model processing device may refer to the corresponding process in the foregoing embodiment of the power load prediction model processing method, which is not described herein again.
Example five
The embodiment of the invention also provides electronic equipment, which is used for running the power load prediction model processing method; referring to fig. 4, an electronic device according to an embodiment of the present invention includes a memory 400 and a processor 401, where the memory 400 is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor 401 to implement the above-mentioned power load prediction model processing method.
Further, the electronic device shown in fig. 4 further comprises a bus 402 and a communication interface 403, and the processor 401, the communication interface 403 and the memory 400 are connected by the bus 402.
The memory 400 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 403 (which may be wired or wireless), and may use the internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 402 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bi-directional arrow is shown in fig. 4, but this does not mean that there is only one bus or only one type of bus.
The processor 401 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP for short), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or other storage media well known in the art. The storage medium is located in the memory 400, and the processor 401 reads the information in the memory 400 and, in combination with its hardware, performs the steps of the method of the foregoing embodiments.
The embodiment of the invention also provides a storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above-mentioned power load prediction model processing method; for specific implementation, reference may be made to the method embodiments, which are not repeated here.
The computer program product of the power load prediction model processing method, apparatus, electronic device and storage medium provided by the embodiments of the present invention comprises a computer-readable storage medium storing program code; the instructions included in the program code may be used to perform the methods described in the foregoing method embodiments, and specific implementation may refer to the method embodiments, which are not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and/or apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may mean, for example, fixedly connected, detachably connected, or integrally connected; mechanically connected or electrically connected; directly connected, indirectly connected through an intermediate medium, or in internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (10)

1. A method of power load prediction model processing, the method comprising:
acquiring original data; wherein the original data comprises load data and a target tag corresponding to the load data;
training a base prediction model based on the raw data; wherein the base prediction model comprises: a first objective function, a second objective function, and a third objective function; the first objective function is used for representing the difference between the predictive label obtained by inputting the load data into the basic predictive model and the target label; the second objective function is used for representing the difference of adjacent predictive labels; the third objective function is used for representing the difference between the predictive label and the average value of a plurality of predictive labels;
and taking the trained basic prediction model as a power load prediction model.
2. The method of power load prediction model processing according to claim 1, wherein after the step of acquiring the raw data, the method further comprises:
preprocessing the original data; wherein the pretreatment comprises at least one of the following: a cleaning process and a missing value process.
3. The method of power load prediction model processing according to claim 1, wherein after the step of acquiring the raw data, the method further comprises:
and carrying out dimension standardization processing on the original data.
4. The method of claim 1, wherein the raw data comprises a raw training set;
a step of training a base predictive model based on the raw data, comprising:
determining a migration training set based on a pre-established migration function and an original training set;
combining the migration training set and the original training set to serve as a target training set;
and training the basic prediction model based on the target training set.
5. The method of claim 1, wherein training a base predictive model based on the raw data comprises:
determining a first power load characteristic based on the raw data; determining a second power load characteristic based on the first power load characteristic; determining a third power load characteristic based on the second power load characteristic; wherein the first power load characteristic is used to characterize a feature subset meeting a preset size; the second power load characteristic is used to characterize the mapping of the first power load characteristic to a hidden layer output by an encoder; the third power load characteristic is used for characterizing a coefficient matrix determined based on a preset basis and the second power load characteristic;
the predictive label is calculated by a gated recurrent unit based on the third power load characteristic.
6. The power load prediction model processing method according to claim 1, wherein the first objective function is:

L_1 = \frac{1}{N}\sum_{i=1}^{N} w_i\left(\hat{y}_i - y_i\right)^2

wherein L_1 is the first objective function, \hat{y}_i is the predictive label, y_i is the target label, w_i is the power load type target gradient function, and N is the number of predicted time steps;

the second objective function is:

L_2 = \frac{1}{N-1}\sum_{i=2}^{N}\left(\hat{y}_i - \hat{y}_{i-1}\right)^2

wherein L_2 is the second objective function and \hat{y}_i is the predictive label;

the third objective function is:

L_3 = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - \bar{y}\right)^2

wherein L_3 is the third objective function, \hat{y}_i is the predictive label, \bar{y} is the average value of all prediction results, and N is the number of predicted time steps.
7. The power load prediction model processing method according to claim 1, wherein after the step of taking the trained basic prediction model as a power load prediction model, the method further comprises:
acquiring load data to be predicted;
and inputting the load data to be predicted into the power load prediction model, and outputting a prediction label of the load data to be predicted.
8. An electrical load prediction model processing apparatus, comprising:
the original data acquisition module is used for acquiring original data; wherein the original data comprises load data and a target tag corresponding to the load data;
the model training module is used for training a basic prediction model based on the original data; wherein the base prediction model comprises: a first objective function, a second objective function, and a third objective function; the first objective function is used for representing the difference between the predictive label obtained by inputting the load data into the basic predictive model and the target label; the second objective function is used for representing the difference of adjacent predictive labels; the third objective function is used for representing the difference between the predictive label and the average value of a plurality of predictive labels;
and the model determining module is used for taking the trained basic prediction model as a power load prediction model.
9. An electronic device comprising a processor and a memory, the memory storing computer executable instructions executable by the processor, the processor executing the computer executable instructions to implement the power load prediction model processing method of any one of claims 1 to 7.
10. A storage medium storing computer executable instructions which, when invoked and executed by a processor, cause the processor to implement the power load prediction model processing method of any one of claims 1 to 7.
CN202311630715.0A 2023-12-01 2023-12-01 Power load prediction model processing method and device, electronic equipment and storage medium Active CN117318055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311630715.0A CN117318055B (en) 2023-12-01 2023-12-01 Power load prediction model processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117318055A true CN117318055A (en) 2023-12-29
CN117318055B CN117318055B (en) 2024-03-01

Family

ID=89287065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311630715.0A Active CN117318055B (en) 2023-12-01 2023-12-01 Power load prediction model processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117318055B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020035413A (en) * 2018-08-28 2020-03-05 日鉄エンジニアリング株式会社 Electric power demand prediction system, construction method of electric power demand prediction model, program, business support system
US20230096258A1 (en) * 2019-06-21 2023-03-30 Siemens Aktiengesellschaft Power load data prediction method and device, and storage medium
CN112365098A (en) * 2020-12-07 2021-02-12 国网冀北电力有限公司承德供电公司 Power load prediction method, device, equipment and storage medium
CN113255973A (en) * 2021-05-10 2021-08-13 曙光信息产业(北京)有限公司 Power load prediction method, power load prediction device, computer equipment and storage medium
CN113570105A (en) * 2021-05-10 2021-10-29 国网河北省电力有限公司营销服务中心 Power load prediction method and device and terminal
CN113516291A (en) * 2021-05-24 2021-10-19 国网河北省电力有限公司经济技术研究院 Power load prediction method, device and equipment
CN114065653A (en) * 2022-01-17 2022-02-18 南方电网数字电网研究院有限公司 Construction method of power load prediction model and power load prediction method
CN114757441A (en) * 2022-05-10 2022-07-15 广东电网有限责任公司 Load prediction method and related device
CN115293326A (en) * 2022-07-05 2022-11-04 深圳市国电科技通信有限公司 Training method and device of power load prediction model and power load prediction method
CN115860190A (en) * 2022-11-16 2023-03-28 广东电网有限责任公司 Training of load detection model, method for detecting power load and related device
CN116663709A (en) * 2023-05-05 2023-08-29 华中科技大学 Power load multi-step prediction method and device based on enhanced decoder
CN116845874A (en) * 2023-07-06 2023-10-03 固德威技术股份有限公司 Short-term prediction method and device for power load
CN116596044A (en) * 2023-07-18 2023-08-15 华能山东发电有限公司众泰电厂 Power generation load prediction model training method and device based on multi-source data
CN117096875A (en) * 2023-10-19 2023-11-21 国网江西省电力有限公司经济技术研究院 Short-term load prediction method and system based on ST-transducer model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
黄元生;邓佳佳;苑珍珍;: "SVM short-term load forecasting based on ARMA error correction and adaptive particle swarm optimization", Power System Protection and Control, no. 14 *
齐彩娟;田星;张坤;卢博;彭正伟;: "Rolling power load forecasting method based on PQ-coupled power flow model", Automation & Instrumentation, no. 09 *

Also Published As

Publication number Publication date
CN117318055B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
Wang et al. A hybrid wind speed forecasting model based on phase space reconstruction theory and Markov model: A case study of wind farms in northwest China
CN109871860B (en) Daily load curve dimension reduction clustering method based on kernel principal component analysis
Abo-Hammour et al. A genetic algorithm approach for prediction of linear dynamical systems
CN104657744B (en) A kind of multi-categorizer training method and sorting technique based on non-determined Active Learning
CN111553543B (en) TPA-Seq2 Seq-based power load prediction method and related components
Yu et al. Mining stock market tendency using GA-based support vector machines
Hassan et al. A hybrid of multiobjective Evolutionary Algorithm and HMM-Fuzzy model for time series prediction
CN111724278A (en) Fine classification method and system for power multi-load users
Amri et al. Analysis clustering of electricity usage profile using k-means algorithm
CN111028100A (en) Refined short-term load prediction method, device and medium considering meteorological factors
Yang et al. A pattern fusion model for multi-step-ahead CPU load prediction
Lyu et al. Dynamic feature selection for solar irradiance forecasting based on deep reinforcement learning
Bayati et al. Gaussian process regression ensemble model for network traffic prediction
CN113268929B (en) Short-term load interval prediction method and device
CN117318055B (en) Power load prediction model processing method and device, electronic equipment and storage medium
Nahid et al. Home occupancy classification using machine learning techniques along with feature selection
Bezerra et al. A self-adaptive multikernel machine based on recursive least-squares applied to very short-term wind power forecasting
Zhang et al. Multi-objective PSO algorithm for feature selection problems with unreliable data
CN113762591A (en) Short-term electric quantity prediction method and system based on GRU and multi-core SVM counterstudy
Has Consensual Aggregation on Random Projected High-dimensional Features for Regression
CN112949908A (en) Electricity price probability prediction method and device
Kuvayskova et al. Forecasting the Technical State of an Object Based on the Composition of Machine Learning Methods
Silva et al. Fuzzy time series applications and extensions: analysis of a short term load forecasting challenge
CN111695739B (en) Load prediction method, system and equipment
CN117614323B (en) Brushless motor rotation speed control method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant