CN115545361A - Method, system and medium for predicting climate environment of power grid transmission line - Google Patents

Publication number
CN115545361A
Authority
CN
China
Prior art keywords: layer, prediction, decoding, attention, sequence
Legal status: Granted
Application number
CN202211534275.4A
Other languages
Chinese (zh)
Other versions
CN115545361B (en)
Inventor
李磊
周正
胡钰林
廖荣涛
王逸兮
叶宇轩
王晟玮
胡欢君
张剑
宁昊
董亮
刘芬
郭岳
罗弦
张岱
李想
陈家璘
冯浩
Current Assignee
Wuhan University WHU
Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd
Original Assignee
Wuhan University WHU
Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU and Information and Telecommunication Branch of State Grid Hubei Electric Power Co Ltd
Priority to CN202211534275.4A
Publication of CN115545361A
Application granted
Publication of CN115545361B
Legal status: Active
Anticipated expiration

Classifications

    • G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 — Learning methods
    • G06Q50/06 — Energy or water supply
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The application relates to a method, a system and a medium for predicting the climate environment of a power grid transmission line. The method comprises: obtaining historical climate environment data to construct an input sample; constructing a transformer prediction model with a sparse attention mechanism and predicting on the input sample to obtain an original prediction result; comparing the original prediction result with real data to obtain an error sequence, and inputting the error sequence into an ARIMA model for error prediction to obtain an error correction sequence; and adding the original prediction result and the error correction sequence to obtain the final prediction result. The method effectively improves the stability and accuracy of the overall prediction, substantially reduces the computational load of the self-attention operation in the traditional transformer model, accelerates model training and prediction, and lowers the hardware requirements for deploying the model.

Description

Method, system and medium for predicting climate environment of power grid transmission line
Technical Field
The application relates to the field of power systems, in particular to a method, a system and a medium for predicting the climate environment of a power grid transmission line based on an ARIMA-corrected Transformer.
Background
In a power system, climate environment information along power grid transmission lines has important reference value for the planning, scheduling and maintenance of the power network, and predicting this information in advance can support scheduling decisions and the construction of a smart grid. A method for predicting the climate environment of power grid transmission lines is therefore of significant practical value.
Existing prediction models mainly comprise traditional linear models and the newer recurrent neural network models. Traditional linear models perform poorly on the nonlinear components of environmental data, and their prediction performance depends on parameter selection. Models based on recurrent neural networks, in turn, have difficulty capturing long-term dependencies in historical data and are computationally expensive. Because of the limitations of single prediction models, hybrid models have received considerable attention, and combining different models is regarded as an effective way to improve prediction results.
Disclosure of Invention
The embodiments of the application aim to provide a method, a system and a medium for predicting the climate environment of a power grid transmission line that effectively reduce the computational load of the self-attention operation in the traditional transformer model, accelerate model training and prediction, and lower the hardware requirements for deploying the model.
In order to achieve the above purpose, the present application provides the following technical solutions:
in a first aspect, an embodiment of the present application provides a method for predicting a climate environment of a transmission line of a power grid, including the following specific steps:
acquiring historical climate environment data to construct an input sample;
constructing a transformer prediction model of a sparse attention mechanism, and predicting an input sample to obtain an original prediction result;
comparing the original prediction result with real data to obtain an error sequence, and inputting the error sequence into an ARIMA model for error prediction to obtain an error correction sequence, wherein the error correction sequence estimates the likely deviation between the transformer's original prediction result and the actual data;
and adding the original prediction result and the error correction sequence, the error correction sequence of the ARIMA model compensating for the transformer model's weakness in capturing linear characteristics, to obtain a final predicted sequence of future environment data with higher accuracy and stability.
The acquisition of historical climate environment data to construct the input sample is specifically as follows: the historical climate environment data comprises a time series formed by historical values of the temperature, humidity and wind speed around the transmission line; the historical values are normalized and stored as a vector X, which serves as the model input.
The transformer prediction model with the sparse attention mechanism comprises an encoder and a decoder. The encoder performs feature extraction on the input data; the decoder performs a multi-head sparse self-attention operation using the features extracted by the encoder and finally produces the climate environment prediction for the transmission line. The input of the encoding layers is the vector X, and the input of the decoding layers is X together with the encoder output Y. The encoder is composed of a plurality of identical encoding layers and the decoder of a plurality of identical decoding layers. Except for the first encoding layer, whose input is X, every encoding layer takes the output of the previous encoding layer as its input, and the output of the last encoding layer is the encoder output Y. Except for the first decoding layer, whose inputs are X and Y, every decoding layer takes the output of the previous decoding layer as its input, and the output of the last decoding layer is the transformer prediction model's predicted sequence of future climate data.
The encoding layer of the transformer prediction model comprises a one-dimensional convolutional layer, a sparse self-attention layer and a feedforward network layer. Feature extraction in the encoding layer is performed as follows: the input vector X is passed through the one-dimensional convolutional layer and a positional encoding is added, yielding X_c. Three weight matrices W_Q, W_K and W_V are each multiplied with X_c to obtain the multi-channel query matrix Q, key matrix K and value matrix V, respectively. To reduce the computational load of the model and prevent overfitting, the three matrices are processed by the sparse self-attention operation to compute the self-attention; the computed self-attention is mapped back to the original input dimension through a linear layer and added to X_c, and the result of the addition is passed through the feedforward layer to obtain the encoder output Y.
The sparse self-attention operation works as follows: the k largest elements of the matrix Q are retained and all other elements are set to 0, giving the sparsified matrix Q̄, where k is a preset number of appropriate size. The matrices Q̄, K and V are then taken as input to the self-attention operation, whose formula is

A = softmax(Q̄Kᵀ/√d_k)V

where softmax is the softmax activation function, d_k is the dimension of the matrix Q, and A is the computed self-attention.
The decoding layer of the transformer prediction model comprises a decoding one-dimensional convolutional layer, a decoding sparse self-attention layer, a decoding feedforward network layer and a decoding linear layer. Prediction in the decoding layer is performed as follows: the first input, X, is passed through the decoding one-dimensional convolutional layer and one decoding sparse self-attention layer and is then mapped by the decoding linear layer into a query matrix Q_d, which is sparsified to obtain Q̄_d; the second input, the encoder output Y, is mapped by the decoding linear layer into a key matrix K_d and a value matrix V_d. The attention is then computed as

A_d = softmax(Q̄_d K_dᵀ/√d_k)V_d

where d_k is the dimension of Q_d. The computed attention is passed through the decoding linear layer and the feedforward layer to obtain the transformer prediction model's predicted sequence of future environment data.
In a second aspect, the present application provides a system for predicting a climate environment of a transmission line of an electric grid, including,
the input sample construction module is used for acquiring historical climate environment data and constructing an input sample;
the system comprises a transform prediction model construction module, a transform prediction model prediction module and a prediction module, wherein the transform prediction model construction module is used for constructing a transform prediction model of a sparse attention mechanism and predicting an input sample to obtain an original prediction result;
the error correction sequence acquisition module is used for comparing an original prediction result with real data to obtain an error sequence and inputting the error sequence into an ARIMA model for error prediction to obtain an error correction sequence;
and the prediction result acquisition module is used for adding the original prediction result and the error correction sequence to obtain a final prediction result.
In a third aspect, the present application provides a computer-readable storage medium, which stores program codes, and when the program codes are executed by a processor, the method for predicting the climate environment of the power grid transmission line is implemented.
Compared with the prior art, the invention has the beneficial effects that:
(1) Through the transformer-based prediction model, the method uses a one-dimensional convolutional neural network and a sparse self-attention mechanism to effectively extract the time-domain characteristics and long-term dependency information of long input sequences, addressing the difficulty that the prior art has in effectively processing long sequence data;
(2) By combining the transformer-based prediction model with ARIMA, pairing the nonlinear feature extraction capability of the transformer model with the linear feature extraction capability of ARIMA, the method uses ARIMA to correct the errors of the transformer-based prediction model, effectively improving the stability and accuracy of the overall prediction;
(3) By introducing the sparse self-attention operation, the method substantially reduces the computational load of the self-attention operation in the traditional transformer model, accelerates model training and prediction, and lowers the hardware requirements for deploying the model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope; those skilled in the art can also obtain other related drawings based on these drawings without inventive effort.
FIG. 1 is a flow chart of a method of an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method implemented in an embodiment of the present application;
FIG. 3 is a schematic diagram of an adaptive dynamic convolution AdaConv module according to an embodiment of the present application;
FIG. 4 is a block diagram of a system according to an embodiment of the present application;
FIG. 5 is a comparison graph of the predicted effect of the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, fig. 1 provides a method for predicting a climate environment of a transmission line of a power grid according to an embodiment of the present application, which includes the following specific steps:
acquiring historical climate environment data to construct an input sample;
constructing a transform prediction model of a sparse attention mechanism, and predicting an input sample to obtain an original prediction result;
comparing the original prediction result with real data to obtain an error sequence, and inputting the error sequence into an ARIMA model for error prediction to obtain an error correction sequence, wherein the error correction sequence estimates the likely deviation between the transformer's original prediction result and the actual data;
and adding the original prediction result and the error correction sequence, the error correction sequence of the ARIMA model compensating for the transformer model's weakness in capturing linear characteristics, to obtain a final predicted sequence of future environment data with higher accuracy and stability.
The method for acquiring historical climate environment data to construct the input sample specifically comprises: acquiring a time series formed by historical values of the temperature, humidity and wind speed around the transmission line, normalizing the historical values, and storing the normalized values as a vector X, which serves as the model input.
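As a concrete illustration of this step, the sketch below min-max normalises a toy history of temperature, humidity and wind-speed readings into a model input. The function name, array shapes and the choice of min-max scaling are illustrative assumptions; the patent only states that the values are normalized and stored as a vector.

```python
import numpy as np

def build_input_sample(history: np.ndarray) -> np.ndarray:
    """Min-max normalise a (T, 3) history of [temperature, humidity, wind speed].

    Hypothetical helper: the patent does not specify the normalisation used,
    so per-feature min-max scaling is an assumption here.
    """
    lo, hi = history.min(axis=0), history.max(axis=0)
    span = np.where(hi == lo, 1.0, hi - lo)   # guard against constant channels
    return (history - lo) / span              # each feature scaled into [0, 1]

# Toy history: three time steps of (temperature, relative humidity, wind speed)
X = build_input_sample(np.array([[20.0, 0.5, 3.0],
                                 [25.0, 0.7, 5.0],
                                 [30.0, 0.9, 7.0]]))
```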
As shown in fig. 2 and fig. 3, the transformer prediction model with the sparse attention mechanism comprises an encoder and a decoder. The encoder performs feature extraction on the input data; the decoder performs a multi-head sparse self-attention operation using the features extracted by the encoder and finally produces the climate environment prediction for the transmission line. The input of the encoding layers is the vector X, and the input of the decoding layers is X together with the encoder output Y. The encoder is composed of a plurality of identical encoding layers and the decoder of a plurality of identical decoding layers. Except for the first encoding layer, whose input is X, every encoding layer takes the output of the previous encoding layer as its input, and the output of the last encoding layer is the encoder output Y. Except for the first decoding layer, whose inputs are X and Y, every decoding layer takes the output of the previous decoding layer as its input, and the output of the last decoding layer is the transformer prediction model's predicted sequence of future climate data.
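The layer stacking described above amounts to a simple fold: the first layer consumes the model input and each subsequent layer consumes its predecessor's output. The sketch below shows only this wiring; the stand-in layers are toy callables, since the real layer internals are described separately.

```python
def run_stack(X, layers):
    """Feed X through a stack of layers; each layer consumes the previous output."""
    out = X
    for layer in layers:     # first layer sees X, the rest see prior outputs
        out = layer(out)
    return out               # output of the last layer (Y for the encoder)

# Toy stand-in layers; the real encoding layers all share one architecture.
Y = run_stack([1.0, 2.0], [lambda v: [x * 2 for x in v],
                           lambda v: [x + 1 for x in v]])
```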
The encoding layer of the transformer prediction model comprises a one-dimensional convolutional layer, a sparse self-attention layer and a feedforward network layer. Feature extraction in the encoding layer is performed as follows: the input vector X is passed through the one-dimensional convolutional layer and a positional encoding is added, yielding X_c. Three weight matrices W_Q, W_K and W_V are each multiplied with X_c to obtain the multi-channel query matrix Q, key matrix K and value matrix V, respectively. To reduce the computational load of the model and prevent overfitting, the three matrices are processed by the sparse self-attention operation to compute the self-attention; the computed self-attention is mapped back to the original input dimension through a linear layer and added to X_c, and the result of the addition is passed through the feedforward layer to obtain the encoder output Y.
The sparse self-attention operation works as follows: the k largest elements of the matrix Q are retained and all other elements are set to 0, giving the sparsified matrix Q̄, where k is a preset number of appropriate size. The matrices Q̄, K and V are then taken as input to the self-attention operation, whose formula is

A = softmax(Q̄Kᵀ/√d_k)V

where softmax is the softmax activation function, d_k is the dimension of the matrix Q, and A is the computed self-attention.
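A minimal NumPy sketch of the sparsification and attention formula above: the k largest entries of Q are kept, everything else is zeroed, and standard scaled dot-product attention is applied. The global top-k over Q's entries follows the text; the shapes, the single head, and the constant value matrix used for the check are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sparse_self_attention(Q, K, V, k):
    """A = softmax(Q_bar K^T / sqrt(d_k)) V, with Q_bar keeping Q's k largest entries."""
    thresh = np.sort(Q.ravel())[-k]           # value of the k-th largest entry
    Q_bar = np.where(Q >= thresh, Q, 0.0)     # zero everything below it
    d_k = Q.shape[-1]
    return softmax(Q_bar @ K.T / np.sqrt(d_k)) @ V

rng = np.random.default_rng(0)
Q, K = rng.normal(size=(6, 4)), rng.normal(size=(6, 4))
V = np.ones((6, 2))                           # constant values make the check easy
A = sparse_self_attention(Q, K, V, k=5)       # softmax rows sum to 1, so A is all ones
```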
The decoding layer of the transformer prediction model comprises a decoding one-dimensional convolutional layer, a decoding sparse self-attention layer, a decoding feedforward network layer and a decoding linear layer. Prediction in the decoding layer is performed as follows: the first input, X, is passed through the decoding one-dimensional convolutional layer and one decoding sparse self-attention layer and is then mapped by the decoding linear layer into a query matrix Q_d, which is sparsified to obtain Q̄_d; the second input, the encoder output Y, is mapped by the decoding linear layer into a key matrix K_d and a value matrix V_d. The attention is then computed as

A_d = softmax(Q̄_d K_dᵀ/√d_k)V_d

where d_k is the dimension of Q_d. The computed attention is passed through the decoding linear layer and the feedforward layer to obtain the transformer prediction model's predicted sequence of future environment data.
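The decoder's cross-attention step can be sketched the same way: decoder queries attend over encoder-derived keys and values. The sparsification of the query matrix is omitted here for brevity, and the shapes and single head are illustrative assumptions.

```python
import numpy as np

def cross_attention(Q_d, K_d, V_d):
    """softmax(Q_d K_d^T / sqrt(d_k)) V_d: decoder queries over encoder keys/values."""
    d_k = Q_d.shape[-1]
    scores = Q_d @ K_d.T / np.sqrt(d_k)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)   # one distribution per query row
    return weights @ V_d                          # convex combination of value rows

rng = np.random.default_rng(1)
Q_d = rng.normal(size=(3, 4))    # 3 decoder positions, model dimension 4
K_d = rng.normal(size=(6, 4))    # 6 encoder positions
V_d = rng.normal(size=(6, 2))
A_d = cross_attention(Q_d, K_d, V_d)
```

Each output row is a convex combination of the value rows, so it always stays within the range of the encoder values.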
The ARIMA model is established by the following steps:
plotting the historical-data time series and observing whether it is stationary; if it is a non-stationary series, differencing it to obtain a stationary time series;
computing the autocorrelation coefficient ACF and the partial autocorrelation coefficient PACF for the resulting stationary series, where the formula for the lag-k autocorrelation coefficient ACF is

r_k = Σ_{t=1}^{N-k}(x_t - x̄)(x_{t+k} - x̄) / Σ_{t=1}^{N}(x_t - x̄)²

where N is the length of the series, k is the lag of the series, x̄ is the mean of the series, and x_t is the t-th point of the series. The partial autocorrelation coefficient PACF is typically calculated using a least squares method.
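The lag-k autocorrelation coefficient above translates directly into code. The function name and the toy series below are illustrative.

```python
import numpy as np

def acf(x, k):
    """Lag-k sample autocorrelation: sum((x_t - xbar)(x_{t+k} - xbar)) / sum((x_t - xbar)^2)."""
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    den = np.sum((x - xbar) ** 2)                                    # lag-0 term
    num = den if k == 0 else np.sum((x[:-k] - xbar) * (x[k:] - xbar))
    return num / den

r0 = acf(np.arange(10.0), 0)   # always 1 for a non-constant series
r1 = acf(np.arange(10.0), 1)   # strongly positive for a trending series
```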
Appropriate model parameters (p, d, q) are then determined from the ACF and PACF, where p is the lag order of the time-series data itself employed in the prediction model, d is the number of differences required to make the time series stationary, and q is the lag order of the prediction errors employed in the prediction model.
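To keep the sketch dependency-free, the error-correction step is illustrated below with a least-squares AR(p) fit, i.e. ARIMA(p, 0, 0) on an already-stationary error sequence; in practice a full ARIMA implementation (e.g. from a statistics library) would be used, and all names and the toy data here are illustrative. The final line mirrors the last step of the method: final prediction = original transformer prediction + forecast error correction.

```python
import numpy as np

def fit_ar(errors, p):
    """Least-squares AR(p) fit, a minimal stand-in for ARIMA(p, 0, 0)."""
    rows = [errors[t - p:t][::-1] for t in range(p, len(errors))]  # lagged values
    A = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(A, errors[p:], rcond=None)
    return coef            # [intercept, phi_1, ..., phi_p]

def forecast_errors(errors, coef, steps):
    """Roll the fitted AR recurrence forward to predict future errors."""
    hist = list(errors[-(len(coef) - 1):])
    out = []
    for _ in range(steps):
        nxt = coef[0] + sum(c * h for c, h in zip(coef[1:], hist[::-1]))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

errors = np.sin(np.arange(30) * 0.3)          # toy error sequence (stationary)
coef = fit_ar(errors, p=2)                    # a sinusoid obeys an exact AR(2) law
original = np.array([21.0, 21.5, 22.0])       # toy transformer predictions
final = original + forecast_errors(errors, coef, steps=3)
```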
As shown in fig. 4, an embodiment of the present application provides a system for predicting a climate environment of a transmission line of a power grid, including,
the input sample construction module 1 is used for acquiring historical climate environment data and constructing an input sample;
the transformer prediction model building module 2 is used for building a transformer prediction model of a sparse attention mechanism and predicting an input sample to obtain an original prediction result;
the error correction sequence acquisition module 3 is used for comparing the original prediction result with the real data to obtain an error sequence and inputting the error sequence into an ARIMA model for error prediction to obtain an error correction sequence;
and the prediction result acquisition module 4 is used for adding the original prediction result and the error correction sequence to obtain a final prediction result.
As shown in fig. 5, the method of the application is applied to predict the air temperature along the power transmission line; comparative analysis of the measured and predicted temperatures shows that the difference between them is small and the prediction accuracy is high.
The embodiment of the application provides a computer readable storage medium, which stores program codes, and when the program codes are executed by a processor, the steps of the power grid transmission line climate environment prediction method are realized.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A power grid transmission line climate environment prediction method is characterized by comprising the following specific steps:
acquiring historical climate environment data to construct an input sample;
constructing a Transformer prediction model with a sparse attention mechanism, and predicting the input sample to obtain an original prediction result;
comparing the original prediction result with real data to obtain an error sequence, and inputting the error sequence into an ARIMA model for error prediction to obtain an error correction sequence, wherein the error correction sequence gives the likely deviation of the Transformer's original prediction result from the actual data;
and adding the original prediction result and the error correction sequence, the error correction sequence of the ARIMA model compensating for the Transformer model's weakness in capturing linear features, to obtain a final predicted future environment data sequence with higher accuracy and stability.
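The claim-1 pipeline (raw forecast, error sequence, error forecast, sum) can be sketched as follows. This is an illustrative simplification rather than the patented implementation: a least-squares AR(1) fit stands in for the ARIMA error model, and all function names and numbers are hypothetical.

```python
def fit_ar1(errors):
    """Least-squares AR(1) coefficient phi for e[t] ~ phi * e[t-1]."""
    num = sum(errors[t] * errors[t - 1] for t in range(1, len(errors)))
    den = sum(e * e for e in errors[:-1])
    return num / den if den else 0.0

def corrected_forecast(raw_pred, history_pred, history_true, horizon):
    """Add a forecast of the residual series to the raw model prediction."""
    # Error sequence: real data minus the model's historical predictions.
    errors = [t - p for t, p in zip(history_true, history_pred)]
    phi = fit_ar1(errors)
    correction, last = [], errors[-1]
    for _ in range(horizon):
        last = phi * last          # propagate the error model one step forward
        correction.append(last)
    # Final prediction = raw prediction + error correction sequence.
    return [p + c for p, c in zip(raw_pred, correction)]

# Toy numbers: the raw model consistently under-predicts by a decaying amount,
# so the correction pushes the forecast upward.
hist_true = [10.0, 10.5, 11.0, 11.5]
hist_pred = [9.0, 9.75, 10.4, 11.05]      # errors: 1.0, 0.75, 0.6, 0.45
final = corrected_forecast([12.0, 12.5], hist_pred, hist_true, horizon=2)
```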
2. The method for predicting the climate environment of a power grid transmission line according to claim 1, wherein acquiring historical climate environment data to construct the input sample specifically comprises: the historical climate environment data comprises a time series formed by historical values of temperature, humidity and wind speed around the transmission line; the historical values are normalized and stored as a vector X, which serves as the model input.
3. The method for predicting the climate environment of a power grid transmission line according to claim 1, wherein the Transformer prediction model with the sparse attention mechanism comprises an encoder and a decoder; the encoder performs feature extraction on the input data, and the decoder performs multi-head sparse self-attention operations on the features extracted by the encoder to finally realize the climate environment prediction for the transmission line. The input of the encoding layers is X; the inputs of the decoding layers are X and the encoder output Z. The encoder is composed of a plurality of identical encoding layers, and the decoder is composed of a plurality of identical decoding layers. Except for the first encoding layer, whose input is X, every encoding layer takes the output of the previous encoding layer as its input, and the output of the last encoding layer is the encoder output Z. Except for the first decoding layer, whose inputs are X and Z, every decoding layer takes the output of the previous decoding layer as its input, and the output of the last decoding layer is the prediction sequence of the Transformer prediction model for future climate data.
4. The method as claimed in claim 3, wherein each encoding layer of the Transformer prediction model comprises a one-dimensional convolutional layer, a sparse self-attention layer and a feed-forward network layer, and feature extraction in an encoding layer specifically comprises: passing the input vector X through the one-dimensional convolutional layer and adding a positional encoding to obtain X'; multiplying X' by three weight matrices W_Q, W_K and W_V to obtain the multi-channel query matrix Q, key matrix K and value matrix V, respectively; performing the sparse self-attention operation on these three matrices to obtain the self-attention, which reduces the computational cost of the model and prevents overfitting; mapping the computed self-attention back to the original input dimension through a linear layer and adding it to X'; and passing the result of the addition through the feed-forward layer to obtain the encoder output Z.
5. The method for predicting the climate environment of a power grid transmission line according to claim 4, wherein the sparse self-attention operation comprises: keeping the k largest elements of the matrix Q and setting all other elements to 0 to obtain the sparsified matrix Q_s, where k is a preset number of appropriate size; then performing the self-attention operation with Q_s, K and V as inputs, according to the formula A = softmax(Q_s K^T / √d_k) · V, where softmax is the activation function, d_k is the dimension of the matrix Q, and A is the computed self-attention.
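A minimal numpy sketch of the claim-5 sparse self-attention, assuming the top-k selection is taken over all entries of Q and ignoring the multi-channel/multi-head handling; the function name and the toy inputs are illustrative.

```python
import numpy as np

def sparse_self_attention(Q, K, V, k):
    """Top-k sparsified scaled dot-product attention.

    The k largest entries of Q are kept, all others set to 0, then the
    standard softmax(Q_s K^T / sqrt(d_k)) V attention is computed.
    """
    Q_s = np.zeros_like(Q)
    flat = Q.ravel()
    idx = np.argpartition(flat, -k)[-k:]    # positions of the k largest entries
    Q_s.ravel()[idx] = flat[idx]            # copy them into the sparse matrix
    d_k = Q.shape[-1]
    scores = Q_s @ K.T / np.sqrt(d_k)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)      # row-wise softmax
    return w @ V
```

With k equal to the total number of entries of Q this reduces to ordinary dense attention, which is a convenient sanity check.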
6. The method for predicting the climate environment of a power grid transmission line according to claim 4, wherein each decoding layer of the Transformer prediction model comprises a decoding one-dimensional convolutional layer, a decoding sparse self-attention layer, a decoding feed-forward network layer and decoding linear layers, and prediction in a decoding layer specifically comprises: passing the first input data X through the decoding one-dimensional convolutional layer and one decoding sparse self-attention layer, then mapping the result through a decoding linear layer into the query matrix Q_d, which is sparsified to obtain Q_ds; mapping the second input data Z through decoding linear layers into the key matrix K_d and the value matrix V_d, respectively; then computing the attention according to the formula A_d = softmax(Q_ds K_d^T / √d_d) · V_d, where d_d is the dimension of Q_d; the computed attention, after passing through the decoding linear layer and the feed-forward layer, yields the prediction sequence of the Transformer prediction model for future environment data.
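The claim-6 cross-attention (decoder-side queries against encoder-side keys and values) can be sketched similarly; the convolutional and feed-forward stages of the decoding layer are omitted, and the weight-matrix names are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decoder_cross_attention(X_dec, Z_enc, W_q, W_k, W_v, k):
    """Cross-attention with a sparsified decoder-side query matrix.

    Queries come from the decoder stream X_dec; keys and values come from
    the encoder output Z_enc. Entries of Q below the k-th largest are zeroed.
    """
    Q = X_dec @ W_q                 # decoder-side query matrix
    K = Z_enc @ W_k                 # encoder-side key matrix
    V = Z_enc @ W_v                 # encoder-side value matrix
    thresh = np.sort(Q, axis=None)[-k]       # value of the k-th largest entry
    Q_s = np.where(Q >= thresh, Q, 0.0)      # ties may keep a few extra entries
    return softmax(Q_s @ K.T / np.sqrt(Q.shape[-1])) @ V
```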
7. A power grid transmission line climate environment prediction system is characterized by comprising,
the input sample construction module is used for acquiring historical climate environment data and constructing an input sample;
the Transformer prediction model construction module is used for constructing a Transformer prediction model with a sparse attention mechanism and predicting the input sample to obtain an original prediction result;
the error correction sequence acquisition module is used for comparing an original prediction result with real data to obtain an error sequence and inputting the error sequence into an ARIMA model for error prediction to obtain an error correction sequence;
and the prediction result acquisition module is used for adding the original prediction result and the error correction sequence to obtain a final prediction result.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program code which, when executed by a processor, implements the steps of the method for predicting the climate environment of a power grid transmission line according to any one of claims 1-6.
CN202211534275.4A 2022-12-02 2022-12-02 Method, system and medium for predicting climate environment of power grid transmission line Active CN115545361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211534275.4A CN115545361B (en) 2022-12-02 2022-12-02 Method, system and medium for predicting climate environment of power grid transmission line


Publications (2)

Publication Number Publication Date
CN115545361A true CN115545361A (en) 2022-12-30
CN115545361B CN115545361B (en) 2023-05-09

Family

ID=84722391


Country Status (1)

Country Link
CN (1) CN115545361B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819136A (en) * 2021-01-20 2021-05-18 南京邮电大学 Time sequence prediction method and system based on CNN-LSTM neural network model and ARIMA model
CN112990587A (en) * 2021-03-24 2021-06-18 北京市腾河智慧能源科技有限公司 Method, system, equipment and medium for accurately predicting power consumption of transformer area
CN113723669A (en) * 2021-08-09 2021-11-30 贵州电网有限责任公司 Power transmission line icing prediction method based on Informmer model
CN113919233A (en) * 2021-10-29 2022-01-11 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Urban VOCs pollution total amount time sequence prediction method, system, storage medium and equipment
US20220045509A1 (en) * 2020-08-05 2022-02-10 Wuhan University Method and system of predicting electric system load based on wavelet noise reduction and emd-arima
CN114239971A (en) * 2021-12-20 2022-03-25 浙江大学 Daily precipitation prediction method based on Transformer attention mechanism
CN114239718A (en) * 2021-12-15 2022-03-25 杭州电子科技大学 High-precision long-term time sequence prediction method based on multivariate time sequence data analysis
CN114580710A (en) * 2022-01-28 2022-06-03 西安电子科技大学 Environment monitoring method based on Transformer time sequence prediction
CN114943368A (en) * 2022-04-26 2022-08-26 天津大学 Sea surface wind speed prediction method based on Transformer
CN115049169A (en) * 2022-08-16 2022-09-13 国网湖北省电力有限公司信息通信公司 Regional power consumption prediction method, system and medium based on combination of frequency domain and spatial domain


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HUIJUAN WU: "Multistep short-term wind speed forecasting using transformer", Energy *
YANG LI et al.: "Transformer with Sparse Attention Mechanism for Industrial Time Series Forecasting", Journal of Physics *
SUN Dongmei et al.: "Short-term wind speed prediction with wavelet neural network based on ARIMA model error correction", Computers and Applied Chemistry *
TIAN Zhongda et al.: "Network traffic prediction method based on ARIMA-compensated ELM", Information and Control *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117490908A (en) * 2023-12-31 2024-02-02 武汉华康世纪医疗股份有限公司 Negative pressure detection method and system for negative pressure ward
CN117490908B (en) * 2023-12-31 2024-04-09 武汉华康世纪医疗股份有限公司 Negative pressure detection method and system for negative pressure ward

Also Published As

Publication number Publication date
CN115545361B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
Xiaoyun et al. Short-term prediction of wind power based on deep long short-term memory
CN111091233A (en) Wind power plant short-term wind power prediction modeling method based on wavelet analysis and multi-model AdaBoost depth network
CN111191783B (en) Self-adaptive quantization method and device, equipment and medium
CN113158582A (en) Wind speed prediction method based on complex value forward neural network
CN115049169B (en) Regional power consumption prediction method, system and medium based on combination of frequency domain and spatial domain
CN115545361A (en) Method, system and medium for predicting climate environment of power grid transmission line
Yu et al. Forecasting a short‐term wind speed using a deep belief network combined with a local predictor
CN111612274A (en) Tidal water level forecasting method based on space-time correlation
CN115618248A (en) Load abnormity identification method and device for public building, storage medium and equipment
CN110738363A (en) photovoltaic power generation power prediction model and construction method and application thereof
JP7369868B2 (en) Methods for processing solar radiation prediction, methods for training stacked generalized models, and apparatus thereof
CN116504230A (en) Data closed-loop method, device, computer equipment and computer readable storage medium
Zadorozhnyi Fractal queues simulation peculiarities
CN115600666A (en) Self-learning method and device for power transmission and distribution line defect detection model
CN114881344A (en) Training method, device and medium for building energy consumption prediction model
CN116109339A (en) Prediction method, prediction device and storage medium for accessory demand
CN116051160A (en) Prediction method, prediction device and storage medium for accessory demand
CN114398235A (en) Memory recovery trend early warning device and method based on fusion learning and hypothesis testing
CN110210518B (en) Method and device for extracting dimension reduction features
Bahij et al. A comparison study of machine learning methods for energy consumption forecasting in industry
CN112732777A (en) Position prediction method, apparatus, device and medium based on time series
CN110543549A (en) semantic equivalence judgment method and device
CN115496002B (en) Multi-dimensional feature interactive line dynamic capacity increasing method, system and medium
CN114819425A (en) Intelligent prediction method and system for regional power consumption and storage medium
CN117633282A (en) Query method and device for financial products, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant