CN116544931B - Power load distribution prediction method based on integrated fragment transformation and time convolution network - Google Patents
Power load distribution prediction method based on integrated fragment transformation and time convolution network
- Publication number
- CN116544931B CN116544931B CN202310762859.5A CN202310762859A CN116544931B CN 116544931 B CN116544931 B CN 116544931B CN 202310762859 A CN202310762859 A CN 202310762859A CN 116544931 B CN116544931 B CN 116544931B
- Authority
- CN
- China
- Prior art keywords
- model
- prediction
- time
- load
- power load
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 42
- 230000009466 transformation Effects 0.000 title claims abstract description 9
- 239000012634 fragment Substances 0.000 title claims abstract description 6
- 230000000737 periodic effect Effects 0.000 claims abstract description 14
- 238000005457 optimization Methods 0.000 claims description 22
- 238000012549 training Methods 0.000 claims description 10
- 238000003062 neural network model Methods 0.000 claims description 9
- 238000005070 sampling Methods 0.000 claims description 8
- 238000000354 decomposition reaction Methods 0.000 claims description 6
- 230000005855 radiation Effects 0.000 claims description 6
- 238000012360 testing method Methods 0.000 claims description 6
- 230000000694 effects Effects 0.000 claims description 5
- 238000009795 derivation Methods 0.000 claims description 3
- 238000013507 mapping Methods 0.000 claims description 3
- 238000013528 artificial neural network Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 230000007774 longterm Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000001364 causal effect Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000013136 deep learning model Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000005065 mining Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000013179 statistical model Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
- H02J3/003—Load forecast, e.g. methods or systems for forecasting future load demand
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Biophysics (AREA)
- Business, Economics & Management (AREA)
- Economics (AREA)
- Power Engineering (AREA)
- Public Health (AREA)
- Water Supply & Treatment (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
A power load distribution prediction method based on an integrated fragment transformation (ensemble patch transformation, EPT) and a time convolution network. The method decomposes historical power load data and weather data through EPT to obtain the trend-term and periodic-term components of each time series; constructs a multi-step power load prediction model in which a time convolution network adaptively learns the pattern characteristics of the historical data; introduces quantile loss targets to obtain prediction models under different quantiles; and finally applies the trained models to actual power load prediction, taking the median model's prediction as the real-time load prediction result and obtaining the real-time load prediction distribution with kernel density estimation. The method provides more comprehensive and accurate information for power load prediction and important support for the dispatching and management of a power system.
Description
Technical Field
The application relates to the technical field of power load prediction and power grid optimization control, and in particular to a power load distribution prediction method based on EPT (Ensemble Patch Transformation, integrated fragment transformation) and a time convolution network.
Background
Power systems are one of the important infrastructures of modern society, and power load prediction is an important issue in power system operation and planning. Accurately predicting the power load distribution may help the power system to plan and optimize the grid equipment, ensuring stable operation of the power system.
Existing research predicts the power load with statistical models, artificial neural networks and the like. These approaches have certain limitations, such as limited capability for modeling nonlinear relationships and inability to handle periodic changes in time-series data. The power load distribution prediction method based on EPT and a time convolution network, by decomposing and learning from historical power load data and weather data, can better capture the periodic and trend changes of the data and improve prediction accuracy and generalization capability. Compared with traditional recurrent neural networks such as RNN and LSTM, the adopted time convolution network has better long-term dependence modeling capability and a simpler model structure, and performs better in prediction accuracy and computational efficiency.
Existing power load prediction usually adopts point estimation to make a deterministic prediction of the load at future moments, which reflects the overall trend of the future power load. However, such methods cannot account for the uncertainty of the prediction result. Power load interval prediction can give a confidence interval, but cannot express the characteristics of the load distribution. Power load distribution prediction can therefore effectively make up for these shortcomings, providing more accurate power load prediction results and decision support.
Disclosure of Invention
The present disclosure provides a power load distribution prediction method based on EPT and a time convolution network, which can realize accurate prediction of power system load demand and provide an important reference for power dispatching and energy planning.
The power load distribution prediction method based on the EPT and the time convolution network comprises the following steps:
s1, collecting power load historical data and weather data at corresponding moments, wherein the power load data is a time sequence of fixed time intervals, and the weather data mainly comprises temperature, humidity, radiation intensity and the like. Decomposing each time sequence by using EPT to obtain trend term and periodic term components of each sequence;
s2, setting time convolution network model parameters, and constructing a load prediction neural network model. Constructing a training sample set based on trend item and periodic item components of each sequence;
s3, introducing a quantile loss optimization target, and optimizing model parameters by utilizing Adam. Taking 0.05 as an interval, obtaining load prediction models under different quantile loss optimization targets;
s4, comprehensively analyzing load prediction results under different quantile loss optimization targets, taking the model prediction results under the median (namely 0.5 quantile) as actual load prediction results, and acquiring real-time load prediction distribution based on the model prediction results under the different quantile and the kernel density estimation.
Further, in step S1, the power load data sampling interval should generally be less than 1 h; 15 min is recommended here. The weather data must be sampled at the same times as the power load data. Load prediction comprehensively considers multi-source time-series data of power load, temperature, humidity and radiation intensity, where the multi-source time series can be expressed as $X=\{x_L(t),x_T(t),x_H(t),x_R(t)\}_{t=1}^{N}$, with $N$ the time-series length and $x_L$, $x_T$, $x_H$, $x_R$ the load, temperature, humidity and radiation-intensity series, respectively. EPT decomposition is performed on each time series, expressed by the following formula:

$$x(t)=x^{\mathrm{high}}(t)+x^{\mathrm{low}}(t)$$

where $x^{\mathrm{high}}(t)$ and $x^{\mathrm{low}}(t)$ represent the high-frequency and low-frequency components of the time series, respectively.

The multi-source time series after EPT decomposition can be expressed as $\tilde{X}=\{x_i^{\mathrm{low}}(t),x_i^{\mathrm{high}}(t)\}_{i\in\{L,T,H,R\}}$, comprising the low-frequency trend-term component and the high-frequency periodic component of each time series.
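The decomposition can be illustrated with a simplified sketch. The patented EPT operator builds an ensemble of patch-wise approximations; the minimal stand-in below averages centered moving means over several patch sizes to obtain a low-frequency trend, with the residual as the high-frequency periodic component. The patch sizes and the averaging scheme are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def ensemble_patch_decompose(x, patch_sizes=(8, 16, 32)):
    """Split a series into a low-frequency trend and a high-frequency
    remainder by averaging centered moving means over several patch sizes.
    Simplified stand-in for the EPT operator (illustrative only)."""
    trends = []
    for p in patch_sizes:
        kernel = np.ones(p) / p
        # 'same'-mode convolution approximates a centered patch mean
        trends.append(np.convolve(x, kernel, mode="same"))
    trend = np.mean(trends, axis=0)      # low-frequency trend component
    periodic = x - trend                 # high-frequency periodic component
    return trend, periodic

# toy series: slow drift plus a daily-like cycle at 15-min sampling (96 pts/day)
t = np.arange(96 * 5)
x = 0.01 * t + np.sin(2 * np.pi * t / 96)
trend, periodic = ensemble_patch_decompose(x)
assert np.allclose(trend + periodic, x)  # additive decomposition is exact by construction
```

Because the remainder is defined as the series minus the trend, the two components always sum back to the original series, mirroring the additive decomposition used above.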
Further, in step S2, first, the time convolution network model parameters are set, including an input layer, two hidden layers and an output layer. The number of channels of the input layer is 8, and the numbers of channels of the two hidden layers and the output layer are 64, 64 and 1, respectively. The convolution kernel sizes of the two hidden layers and the output layer are 32, 16 and 1, respectively. Based on the set time convolution network parameters, an end-to-end load prediction neural network model is built; the model input is the EPT-decomposed multi-source time series, and the model output is the load at a specific prediction step length.
Then, a training sample set is constructed. The model input in the training sample set is a multi-source time-series segment, which can be expressed as $\{X_i\}_{i=1}^{M}$ with $X_i\in\mathbb{R}^{8\times l}$, where $l$ is the segment length input to the model and $M$ is the number of samples. The model output label $y_i$ represents the actual load value $h$ time steps after the segment.
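The training-sample construction above can be sketched as a sliding window over the decomposed channels. The helper name, segment length and horizon below are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def make_samples(series, seg_len, horizon, target_channel=0):
    """Build (segment, label) pairs: each input is an (n_channels, seg_len)
    window, each label the target-channel value `horizon` steps after it."""
    n_channels, n_steps = series.shape
    X, y = [], []
    for start in range(n_steps - seg_len - horizon + 1):
        X.append(series[:, start:start + seg_len])
        y.append(series[target_channel, start + seg_len + horizon - 1])
    return np.stack(X), np.array(y)

# 8 channels (4 series x trend/periodic components), 200 time steps
data = np.random.default_rng(0).normal(size=(8, 200))
X, y = make_samples(data, seg_len=96, horizon=4)
assert X.shape == (101, 8, 96) and y.shape == (101,)
```

Each label is the load channel four steps beyond the end of its window, matching the "actual load value after a time step" role of the label in the text.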
Further, the step S3 specifically includes:
s31, introducing a quantile loss optimization target, which can be expressed as follows:
wherein the method comprises the steps ofFor time convolution network model parameter weights, +.>Bias for model parameters->Representing a time convolution neural network model, mapping an original multi-source time sequence segment to a load predicted value, and dividing the original multi-source time sequence segment into a number of bits>。
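A minimal sketch of the quantile (pinball) loss introduced in S31, in plain NumPy; the toy values are illustrative:

```python
import numpy as np

def quantile_loss(y_true, y_pred, tau):
    """Pinball loss: penalises under-prediction with weight tau and
    over-prediction with weight (1 - tau); tau = 0.5 gives half the MAE."""
    e = y_true - y_pred
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

y_true = np.array([10.0, 12.0, 11.0])
y_pred = np.array([11.0, 10.0, 11.0])
# tau = 0.5 reduces to half the mean absolute error
assert np.isclose(quantile_loss(y_true, y_pred, 0.5),
                  0.5 * np.mean(np.abs(y_true - y_pred)))
# a high quantile penalises under-prediction more heavily than a low one here
assert quantile_loss(y_true, y_pred, 0.9) > quantile_loss(y_true, y_pred, 0.1)
```

Training the same network once per value of $\tau$ is what yields the family of quantile-specific models used later for the distribution estimate.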
S32, updating the model parameters by using Adam to optimize the quantile loss, which can be expressed as follows:

$$m_k=\beta_1 m_{k-1}+(1-\beta_1)g_k,\qquad v_k=\beta_2 v_{k-1}+(1-\beta_2)g_k^{2}$$

$$\hat m_k=\frac{m_k}{1-\beta_1^{k}},\qquad \hat v_k=\frac{v_k}{1-\beta_2^{k}},\qquad \theta_k=\theta_{k-1}-\eta\,\frac{\hat m_k}{\sqrt{\hat v_k}+\epsilon}$$

where $\theta$ is the set of time convolution network model parameters, $m_k$ and $v_k$ are the first- and second-moment estimates of the momentum introduced in the $k$-th parameter update, $\beta_1$ and $\beta_2$ are the decay-rate parameters, $g_k$ is the derivative of the loss optimization target with respect to the model parameters, $\eta$ is the learning rate, and $\epsilon$ is a small bias-term constant.
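The Adam update of S32 can be sketched as a single step function; the hyperparameter values below are the common published defaults, not values stated in the patent, and the quadratic objective is a toy stand-in for the quantile loss.

```python
import numpy as np

def adam_step(theta, grad, m, v, k, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential first/second moment estimates,
    bias correction, then a scaled gradient step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** k)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** k)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
for k in range(1, 101):                   # minimise f(theta) = ||theta||^2
    theta, m, v = adam_step(theta, 2 * theta, m, v, k, lr=0.05)
assert np.linalg.norm(theta) < np.linalg.norm([1.0, -2.0])
```

In practice a framework optimizer would perform this step; the sketch only makes the moment estimates and bias correction of the equations above concrete.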
S33, taking 0.05 as the interval, obtaining the prediction models $f_\tau$ under 20 different quantile loss optimization targets with $\tau$ taken from 0 to 1, and the corresponding model load prediction results $\hat{y}_\tau$.
Further, in step S4, the result of the median (i.e. 0.5-quantile) prediction model is used as the actual load prediction result, i.e. $\hat{y}=\hat{y}_{0.5}$.

For the model prediction results under different quantiles, assuming that the quantile $\tau$ takes values continuously from 0 to 1, the conditional density of the load prediction value $\hat{y}$ can be expressed as follows:

$$p(\hat{y}\mid X)=\frac{\partial \tau}{\partial \hat{y}_\tau}$$

After further discretization, the load prediction distribution is estimated by Gaussian kernel density estimation.
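The discretization-plus-Gaussian-kernel step can be sketched as follows. The per-quantile point predictions, evaluation grid and Silverman bandwidth rule are illustrative assumptions; the patent does not fix these choices.

```python
import numpy as np

def gaussian_kde_pdf(samples, grid, bandwidth=None):
    """Gaussian kernel density estimate evaluated on `grid`, built from the
    per-quantile point predictions at one time step (Silverman's rule for
    the bandwidth if none is given)."""
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * samples.std() * len(samples) ** (-1 / 5)
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi))

# hypothetical predictions of one load value at quantiles 0.05 ... 0.95
quantile_preds = np.linspace(95.0, 105.0, 19)
grid = np.linspace(90.0, 110.0, 401)
pdf = gaussian_kde_pdf(quantile_preds, grid)
area = pdf.sum() * (grid[1] - grid[0])   # density should integrate to ~1
assert abs(area - 1.0) < 0.05
```

Evaluating this at every forecast step turns the 20 quantile point forecasts into a full real-time load prediction distribution.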
Further, the present disclosure further includes step S5: the effect and feasibility of the trained model are checked on the test set.
In step S5, test-set samples and the corresponding predicted loads $\hat{y}_i$ are selected for model checking, and the accuracy and effectiveness of the model load prediction are evaluated with Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Normalized Root Mean Square Error (NRMSE) and the R² index. The indices are defined as follows:

$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2},\qquad \mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|,\qquad \mathrm{MAPE}=\frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i-\hat{y}_i}{y_i}\right|$$

$$\mathrm{NRMSE}=\frac{\mathrm{RMSE}}{y_{\max}-y_{\min}},\qquad R^2=1-\frac{\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}$$
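A sketch of the five evaluation indices in code; note the range normaliser in NRMSE is an assumption (some definitions divide by the mean instead), and the sample values are illustrative:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Standard regression metrics for load-forecast checking.
    NRMSE here normalises by the range of the true values (assumed)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    nrmse = rmse / (y_true.max() - y_true.min())
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "NRMSE": nrmse, "R2": r2}

m = evaluate([100.0, 110.0, 120.0, 130.0], [102.0, 108.0, 121.0, 129.0])
assert m["RMSE"] >= m["MAE"] > 0 and 0 < m["R2"] < 1
```

RMSE is always at least MAE, and R² approaches 1 as the predictions approach the true loads, which is why the pair is reported together.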
Implementations of the present disclosure may be realized by means of a computer program, and the programming language employed may include, but is not limited to, Python and MATLAB.
The present disclosure also includes a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method as described in the first aspect.
According to the method provided by the present disclosure, the historical power load data and weather data are decomposed through EPT to obtain the trend-term and periodic-term components of each time series; a multi-step power load prediction model is constructed using a deep learning model, the time convolution network; and quantile loss optimization targets are introduced, with the power load distribution prediction result obtained from the model prediction results under different quantiles and a kernel density estimation method.
Compared with the prior art, the beneficial effects of the present disclosure are: 1) decomposing the historical power load data and weather data with EPT better captures the periodic and trend changes of the data, improving prediction accuracy and generalization capability; 2) compared with traditional recurrent neural networks such as RNN and LSTM, the time convolution network adopted by the method has better long-term dependency modeling capability and a simpler model structure, and performs better in prediction accuracy and computational efficiency; 3) by introducing quantile loss optimization targets and a kernel density estimation method, the interval and distribution of the predicted load at each moment can be given, providing more accurate power load prediction results and decision support, and an important reference for power dispatching and energy planning.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following more particular descriptions of exemplary embodiments of the disclosure as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the disclosure.
FIG. 1 is a flowchart of an exemplary embodiment of a power load distribution prediction method according to the present disclosure;
FIG. 2 is a schematic diagram of a time convolution network employed in the embodiments;
FIG. 3 is a graph of partial electrical load data used in the examples.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are illustrated in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The present disclosure provides a power load distribution prediction method based on EPT and a time convolution network, and a flowchart of an exemplary embodiment is shown in FIG. 1, including the following steps:
s1, collecting power load historical data and weather data at corresponding moments, wherein the power load data is a time sequence of fixed time intervals, and the weather data mainly comprises temperature, humidity, radiation intensity and the like. Decomposing each time sequence by using EPT to obtain trend term and periodic term components of each sequence;
s2, setting time convolution network model parameters, and constructing a load prediction neural network model. Constructing a training sample set based on trend item and periodic item components of each sequence;
s3, introducing a quantile loss optimization target, and optimizing model parameters by utilizing Adam. Taking 0.05 as an interval, obtaining load prediction models under different quantile loss optimization targets;
s4, comprehensively analyzing load prediction results under different quantile loss optimization targets, taking the model prediction results under the median (namely 0.5 quantile) as actual load prediction results, and acquiring real-time load prediction distribution based on the model prediction results under the different quantile and the kernel density estimation;
s5, checking the effect and feasibility of the trained model in the test set.
The step S1 specifically includes:
the power load data sampling interval should typically be less than 1h, here recommended to be 15min. The weather data and the power load data are required to be consistent in sampling time.
Load prediction is based on multi-source time-series data comprehensively considering power load, temperature, humidity and radiation intensity, where the multi-source time series can be expressed as $X=\{x_L(t),x_T(t),x_H(t),x_R(t)\}_{t=1}^{N}$, with $N$ the time-series length. EPT decomposition is performed on each time series, expressed by the following formula:

$$x(t)=x^{\mathrm{high}}(t)+x^{\mathrm{low}}(t)$$

where $x^{\mathrm{high}}(t)$ and $x^{\mathrm{low}}(t)$ represent the high-frequency and low-frequency components of the time series, respectively.

The multi-source time series after EPT decomposition can be expressed as $\tilde{X}=\{x_i^{\mathrm{low}}(t),x_i^{\mathrm{high}}(t)\}_{i\in\{L,T,H,R\}}$, comprising the low-frequency trend-term component and the high-frequency periodic component of each time series.
Part of the power load data curve used in this embodiment is shown in fig. 3. The graph shows the power load curve of a certain region of China from January 1 to January 5, 2015, with a 15-min sampling interval. It can be seen that the daily power load curve exhibits a pronounced periodic characteristic. Meanwhile, the power load shows a strong correlation with date and season; for example, on New Year's Day, January 1, the total load is low. These distinct characteristic patterns in the data provide a basis for accurate load prediction, and also call for the adaptive mining and prediction performed by the machine learning method adopted in the present application.
The step S2 specifically comprises the following steps:
first, time convolution network model parameters are set, including an input layer, two hidden layers and an output layer. Preferably, the number of channels of the input layer is 8, and the numbers of channels of the two hidden layers and the output layer are 64, 64 and 1 respectively; the two hidden and output layer convolution kernel sizes are 32, 16, 1, respectively. Based on the set time convolution network parameters, an end-to-end load prediction neural network model is built, the model is input into an EPT decomposed multi-source time sequence, and the model is output into a load under a specific prediction step length.
Then, a training sample set is constructed. The model input in the training sample set is a multi-source time-series segment, which can be expressed as $\{X_i\}_{i=1}^{M}$ with $X_i\in\mathbb{R}^{8\times l}$, where $l$ is the segment length input to the model and $M$ is the number of samples. The model output label $y_i$ represents the actual load value $h$ time steps after the segment.
The structure of the time convolution network used in this embodiment is shown in fig. 2; it comprises an input layer, two hidden layers and an output layer. The number of channels of the input layer is 8, and the numbers of channels of the two hidden layers and the output layer are 64, 64 and 1, respectively. The convolution kernel sizes of the two hidden layers and the output layer are 32, 16 and 1, respectively. The convolution operates in a causal manner, i.e., no future information is considered. Dilated convolution is formed by introducing the dilation hyperparameter $d$, i.e., inserting intervals into the convolution so that the receptive field is enlarged. The padding parameter is determined by the relation $\text{padding}=(k-1)\,d$ for kernel size $k$, and the zero-padding operation ensures that the input and output dimensions are the same.
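The causal dilated convolution described above can be sketched in a few lines. The kernel weights are illustrative, and the left zero-padding of $(k-1)\,d$ elements is the standard way to keep causal output the same length as the input:

```python
import numpy as np

def causal_dilated_conv(x, weights, dilation):
    """1-D causal dilated convolution: the output at t only sees
    x[t], x[t-d], x[t-2d], ...; left zero-padding of (k-1)*d elements
    keeps the output the same length as the input."""
    k = len(weights)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(weights[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(10, dtype=float)
y = causal_dilated_conv(x, weights=[1.0, -1.0], dilation=2)  # x[t] - x[t-2]
assert np.allclose(y[2:], 2.0)          # constant slope: difference over 2 steps
assert np.allclose(y[:2], [0.0, 1.0])   # padded region sees zeros only
```

Stacking such layers with growing dilation is what gives the network its large receptive field over long load histories without deep recurrence.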
The step S3 specifically comprises the following steps:
s31, introducing a quantile loss optimization target, which can be expressed as follows:
wherein the method comprises the steps ofFor time convolution network model parameter weights, +.>Bias for model parameters->Representing a time convolution neural network model, mapping an original multi-source time sequence segment to a load predicted value, and dividing the original multi-source time sequence segment into a number of bits>。
S32, updating the model parameters by using Adam to optimize the quantile loss, which can be expressed as follows:

$$m_k=\beta_1 m_{k-1}+(1-\beta_1)g_k,\qquad v_k=\beta_2 v_{k-1}+(1-\beta_2)g_k^{2}$$

$$\hat m_k=\frac{m_k}{1-\beta_1^{k}},\qquad \hat v_k=\frac{v_k}{1-\beta_2^{k}},\qquad \theta_k=\theta_{k-1}-\eta\,\frac{\hat m_k}{\sqrt{\hat v_k}+\epsilon}$$

where $\theta$ is the set of time convolution network model parameters, $m_k$ and $v_k$ are the first- and second-moment estimates of the momentum introduced in the $k$-th parameter update, $\beta_1$ and $\beta_2$ are the decay-rate parameters, $g_k$ is the derivative of the loss optimization target with respect to the model parameters, $\eta$ is the learning rate, and $\epsilon$ is a small bias-term constant.
S33, taking 0.05 as the interval, obtaining the prediction models $f_\tau$ under 20 different quantile loss optimization targets with $\tau$ taken from 0 to 1, and the corresponding model load prediction results $\hat{y}_\tau$.
The step S4 specifically comprises the following steps:
output results of a median (i.e. 0.5-decimal) prediction model are taken as actual load prediction results, namely。
Model prediction results under different quantiles, assuming that the quantiles continuously take values from 0 to 1, regarding load prediction valuesThe conditional density of (2) can be expressed as follows:
further discretizing, and estimating the available load prediction distribution result by utilizing the Gaussian kernel density.
The step S5 specifically comprises the following steps:
selecting test set samplesCorresponding predicted loadModel checking is performed, and the accuracy and effectiveness of model load prediction are respectively evaluated by using Root Mean Square Error (RMSE), mean Absolute Error (MAE), mean Absolute Percentage Error (MAPE), normalized Root Mean Square Error (NRMSE) and R2 index. The indices are defined as follows:
the present embodiment also provides a readable storage medium, where a program or an instruction is stored, where the program or the instruction, when executed by a processor, implements each process of the foregoing embodiment of the power load distribution prediction method based on the EPT and the time convolution network, and the same technical effect can be achieved, so that repetition is avoided, and no description is repeated here.
Application example
The power load distribution prediction method based on the EPT and the time convolution network is verified by specific application:
the power load data of a certain region in China is selected, the training data comprise 100 days of power load data (9600 sampling points), and the test data are 10 days of power load data (960 sampling points). Table 1 gives a comparison of the load predictions for the different methods.
Table 1 comparison of different method load predictions
From the results, it can be seen that the time convolution network employed in the present disclosure achieves the best prediction effect. It should also be emphasized that the SVM, RNN and LSTM comparison methods are point-estimation methods whose load prediction results carry limited information, whereas the present method not only predicts the power load values at different moments but also gives the corresponding confidence intervals and load distributions, providing more comprehensive reference information for power dispatching and energy planning.
The technical solution of the present disclosure, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored on a storage medium (e.g., ROM/RAM, magnetic disk, optical disc), comprising instructions for causing a terminal (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to perform the methods described in the various embodiments of the present disclosure.
The foregoing technical solutions are merely exemplary embodiments of the present application. Various modifications and variations can readily be made by those skilled in the art based on the methods and principles disclosed herein, without being limited to the specific embodiments described above; the foregoing description is therefore to be regarded as preferred and not limiting.
Claims (6)
1. A method for predicting power load distribution based on an integrated segment transformation and time convolution network, comprising the steps of:
s1, acquiring power load historical data and weather data at corresponding moments, wherein the power load data is a time sequence of fixed time intervals, and the weather data is a time sequence comprising temperature, humidity and radiation intensity; decomposing each time sequence by utilizing EPT (Ethernet passive tree), namely integrated fragment transformation, and obtaining trend term and periodic term components of each sequence;
s2, setting time convolution network model parameters, and constructing a load prediction neural network model; constructing a training sample set based on the trend item and the periodic item components of each sequence;
s3, introducing a quantile loss optimization target, and optimizing model parameters; obtaining load prediction models under different fractional loss optimization targets with certain intervals;
s4, taking a model prediction result under a median loss optimization target as an actual load prediction result, and acquiring real-time load prediction distribution based on the model prediction result and the kernel density estimation under different median;
in the step S1, the sampling interval of the power load data is 15min;
the specific method for decomposing each time sequence by EPT transformation is as follows:
the multi-source time series data including the power load data and the weather data is expressed as:wherein->For the length of the time series, +.>Representing power load data, +.>Indicating the temperature and the%>Indicating humidity, & gt>Representing the intensity of the radiation;
EPT decomposition was performed on each time series, expressed by the following formula:
wherein,and->Respectively representing time series high frequency and low frequency components;
the multisource time series after EPT decomposition can be expressed asIncluding high frequency periodic components and low frequency trend term components for each time series;
the step S3 specifically includes:
s31, introducing a quantile loss optimization target, wherein the quantile loss optimization target is expressed as follows:
wherein,for time convolution network model parameter weights, +.>Bias for model parameters->Representing a time convolution neural network model, mapping an original multi-source time sequence segment to a load predicted value, and dividing the original multi-source time sequence segment into a number of bits>;
S32, updating model parameters by utilizing Adam optimization quantile loss, wherein the model parameters are expressed as follows:
wherein the method comprises the steps ofFor a set of time-convolution network model parameters, +.>And->Is->First and second moment estimates of the introduced momentum in the sub-model parameter update, +.>And->For decay rate parameter, +.>Optimizing goal for loss->Derivation of model parameters, ++>For learning rate->For bias term constant->;
S33, taking 0.05 as an interval, obtaining a prediction model under 20 different quantile loss optimization targets from 0 to 1And (3) corresponding model load prediction results:
。
2. the method according to claim 1, wherein the step S2 specifically comprises:
setting time convolution network model parameters, including an input layer, two hidden layers and an output layer;
based on the set time convolution network parameters, constructing an end-to-end load prediction neural network model, inputting the model into an EPT decomposed multi-source time sequence, and outputting the model into a load under a specific prediction step length;
constructing a training sample set, wherein the model input in the training sample set is a multi-source time-series segment, expressed as $\{X_i\}_{i=1}^{M}$ with $X_i\in\mathbb{R}^{8\times l}$, where $l$ is the segment length input to the model and $M$ is the number of samples; the model output label $y_i$ represents the actual load value $h$ time steps after the segment.
3. The method according to claim 2, wherein in the step S2, the number of channels of the input layer is 8, and the numbers of channels of the two hidden layers and the output layer are 64, 64 and 1, respectively; the convolution kernel sizes of the two hidden layers and the output layer are 32, 16 and 1, respectively.
4. A method according to claim 3, wherein said step S4 specifically comprises: outputting the result of the median, i.e. 0.5-quantile, prediction model as the actual point load prediction, i.e. $\hat{y} = \hat{y}_{0.5} = f_{0.5}(x)$;

assuming the quantile $\tau$ takes values continuously from 0 to 1, the model prediction results across quantiles characterize the conditional distribution of the load prediction value, whose conditional density can be expressed as follows:

$$p(y \mid x) = \frac{\partial \tau}{\partial \hat{y}_\tau}, \qquad \text{where } P\!\left(y \le \hat{y}_\tau \mid x\right) = \tau;$$
further discretizing the quantiles and estimating the load prediction distribution by Gaussian kernel density estimation.
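The Gaussian kernel density step can be sketched as below: the discrete set of quantile forecasts is smoothed into a continuous predictive density. The bandwidth rule and function name are illustrative assumptions:

```python
import numpy as np

def density_from_quantiles(q_preds, grid, bandwidth=None):
    """Gaussian kernel density estimate over a set of quantile predictions,
    approximating the continuous predictive distribution of the load."""
    q = np.asarray(q_preds, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb (an assumed default, not from the patent)
        bandwidth = 1.06 * q.std() * len(q) ** (-1 / 5)
    z = (grid[:, None] - q[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(q) * bandwidth * np.sqrt(2 * np.pi))
```

Evaluated on a fine grid, the estimate integrates to approximately one, as a density should.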
5. The method according to any one of claims 1-4, further comprising the step of:
S5, verifying the effect and feasibility of the trained model on the test set.
6. A readable storage medium, wherein a program or instructions are stored on the readable storage medium which, when executed by a processor, implement the steps of the method of claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310762859.5A CN116544931B (en) | 2023-06-27 | 2023-06-27 | Power load distribution prediction method based on integrated fragment transformation and time convolution network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310762859.5A CN116544931B (en) | 2023-06-27 | 2023-06-27 | Power load distribution prediction method based on integrated fragment transformation and time convolution network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116544931A CN116544931A (en) | 2023-08-04 |
CN116544931B true CN116544931B (en) | 2023-12-01 |
Family
ID=87452739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310762859.5A Active CN116544931B (en) | 2023-06-27 | 2023-06-27 | Power load distribution prediction method based on integrated fragment transformation and time convolution network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116544931B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118395384B (en) * | 2024-06-21 | 2024-08-30 | 湖南慧明谦数字能源技术有限公司 | Multi-dimensional decomposition and intelligent fusion power load prediction method and related equipment |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111582943A (en) * | 2020-05-13 | 2020-08-25 | 江南大学 | CNN and LSTM-based power system load prediction method |
CN111815065A (en) * | 2020-07-21 | 2020-10-23 | 东北大学 | Short-term power load prediction method based on long-term and short-term memory neural network |
CN114330814A (en) * | 2021-11-10 | 2022-04-12 | 国电南瑞南京控制系统有限公司 | Short-term load prediction method based on VMD decomposition and improved double-layer BILSTM network |
CN114493755A (en) * | 2021-12-28 | 2022-05-13 | 电子科技大学 | Self-attention sequence recommendation method fusing time sequence information |
CN115293326A (en) * | 2022-07-05 | 2022-11-04 | 深圳市国电科技通信有限公司 | Training method and device of power load prediction model and power load prediction method |
CN115688993A (en) * | 2022-10-20 | 2023-02-03 | 浙江工业大学 | Short-term power load prediction method suitable for power distribution station area |
WO2023035564A1 (en) * | 2021-09-08 | 2023-03-16 | 广东电网有限责任公司湛江供电局 | Load interval prediction method and system based on quantile gradient boosting decision tree |
CN115808650A (en) * | 2022-10-31 | 2023-03-17 | 南方医科大学 | Electrical characteristic tomography method, system, device and medium based on transient linearization |
CN115952901A (en) * | 2022-12-27 | 2023-04-11 | 香港中文大学(深圳) | Power load prediction method based on ensemble learning |
WO2023088212A1 (en) * | 2021-11-16 | 2023-05-25 | 西安热工研究院有限公司 | Online unit load prediction method based on ensemble learning |
CN116264388A (en) * | 2022-12-26 | 2023-06-16 | 国网浙江省电力有限公司桐乡市供电公司 | Short-term load prediction method based on GRU-LightGBM model fusion and Bayesian optimization |
- 2023-06-27: CN202310762859.5A filed; granted as CN116544931B (status: Active)
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111582943A (en) * | 2020-05-13 | 2020-08-25 | 江南大学 | CNN and LSTM-based power system load prediction method |
CN111815065A (en) * | 2020-07-21 | 2020-10-23 | 东北大学 | Short-term power load prediction method based on long-term and short-term memory neural network |
WO2023035564A1 (en) * | 2021-09-08 | 2023-03-16 | 广东电网有限责任公司湛江供电局 | Load interval prediction method and system based on quantile gradient boosting decision tree |
CN114330814A (en) * | 2021-11-10 | 2022-04-12 | 国电南瑞南京控制系统有限公司 | Short-term load prediction method based on VMD decomposition and improved double-layer BILSTM network |
WO2023088212A1 (en) * | 2021-11-16 | 2023-05-25 | 西安热工研究院有限公司 | Online unit load prediction method based on ensemble learning |
CN114493755A (en) * | 2021-12-28 | 2022-05-13 | 电子科技大学 | Self-attention sequence recommendation method fusing time sequence information |
CN115293326A (en) * | 2022-07-05 | 2022-11-04 | 深圳市国电科技通信有限公司 | Training method and device of power load prediction model and power load prediction method |
CN115688993A (en) * | 2022-10-20 | 2023-02-03 | 浙江工业大学 | Short-term power load prediction method suitable for power distribution station area |
CN115808650A (en) * | 2022-10-31 | 2023-03-17 | 南方医科大学 | Electrical characteristic tomography method, system, device and medium based on transient linearization |
CN116264388A (en) * | 2022-12-26 | 2023-06-16 | 国网浙江省电力有限公司桐乡市供电公司 | Short-term load prediction method based on GRU-LightGBM model fusion and Bayesian optimization |
CN115952901A (en) * | 2022-12-27 | 2023-04-11 | 香港中文大学(深圳) | Power load prediction method based on ensemble learning |
Also Published As
Publication number | Publication date |
---|---|
CN116544931A (en) | 2023-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Mi et al. | Short‐term power load forecasting method based on improved exponential smoothing grey model | |
CN110163429B (en) | Short-term load prediction method based on similarity day optimization screening | |
CN111127246A (en) | Intelligent prediction method for transmission line engineering cost | |
Jiang et al. | Day‐ahead renewable scenario forecasts based on generative adversarial networks | |
CN116544931B (en) | Power load distribution prediction method based on integrated fragment transformation and time convolution network | |
CN115347571B (en) | Photovoltaic power generation short-term prediction method and device based on transfer learning | |
CN114330935B (en) | New energy power prediction method and system based on multiple combination strategies integrated learning | |
Lu et al. | Wind power uncertainty modeling considering spatial dependence based on pair-copula theory | |
CN110807508B (en) | Bus peak load prediction method considering complex weather influence | |
CN114004430A (en) | Wind speed forecasting method and system | |
CN114330934A (en) | Model parameter self-adaptive GRU new energy short-term power generation power prediction method | |
Wang | Enhancing energy efficiency with smart grid technology: a fusion of TCN, BiGRU, and attention mechanism | |
CN117688846A (en) | Reinforced learning prediction method and system for building energy consumption and storage medium | |
CN117708710A (en) | Short-term lightweight load prediction method for power distribution area | |
CN116613732A (en) | Multi-element load prediction method and system based on SHAP value selection strategy | |
Wen et al. | Short-term load forecasting with bidirectional LSTM-attention based on the sparrow search optimisation algorithm | |
CN116454875A (en) | Regional wind farm mid-term power probability prediction method and system based on cluster division | |
Ghimire et al. | Probabilistic-based electricity demand forecasting with hybrid convolutional neural network-extreme learning machine model | |
Uher et al. | Forecasting electricity consumption in Czech Republic | |
Shen et al. | An interval analysis scheme based on empirical error and MCMC to quantify uncertainty of wind speed | |
Wei et al. | Short term load forecasting based on PCA and LS-SVM | |
CN113962432A (en) | Wind power prediction method and system integrating three-dimensional convolution and light-weight convolution threshold unit | |
CN113254857A (en) | SSA-ELM-based short-term power load prediction method | |
Zhao et al. | Knowledge-Informed Uncertainty-Aware Machine Learning for Time Series Forecasting of Dynamical Engineered Systems | |
CN118300104B (en) | Distributed photovoltaic power prediction method, system, electronic equipment and storage medium based on graph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||