CN115758641A - Power transmission line icing prediction method based on auto-former-progressive decomposition model - Google Patents

Power transmission line icing prediction method based on auto-former-progressive decomposition model

Info

Publication number
CN115758641A
CN115758641A (application number CN202211465767.2A)
Authority
CN
China
Prior art keywords
sequence
decoding
factor
coding
period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211465767.2A
Other languages
Chinese (zh)
Inventor
吴建蓉
文屹
张啟黎
张迅
王冕
范强
黄军凯
赵超
何锦强
丁志敏
李锐海
黄增浩
龚博
李昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China South Power Grid International Co ltd
Guizhou Power Grid Co Ltd
Original Assignee
China South Power Grid International Co ltd
Guizhou Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China South Power Grid International Co ltd, Guizhou Power Grid Co Ltd filed Critical China South Power Grid International Co ltd
Priority to CN202211465767.2A priority Critical patent/CN115758641A/en
Publication of CN115758641A publication Critical patent/CN115758641A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a power transmission line icing prediction method based on an auto-former-progressive decomposition model, and relates to the technical field of power transmission line icing monitoring. A multivariable data set is constructed from collected historical wire tension, temperature, humidity and wind speed data of a power transmission line and is divided into a training set, a verification set and a test set. An initial icing prediction model is built on the basis of the auto-former-progressive decomposition model and trained to obtain an icing prediction model that meets a preset error; a predicted wire tension is then obtained from pre-acquired wire tension, temperature, humidity and wind speed data and the final icing prediction model, and the predicted wire icing thickness is obtained from the predicted wire tension. The method solves the technical problem that existing power transmission line icing prediction methods consider only the influence of a single variable when predicting line icing, so that the prediction result is inaccurate.

Description

Power transmission line icing prediction method based on auto-former-progressive decomposition model
Technical Field
The invention relates to the technical field of power transmission line icing monitoring, in particular to a power transmission line icing prediction method based on an auto-former-progressive decomposition model.
Background
Icing is one of the most common disasters affecting power systems. The load on a power transmission line increases after icing, causing accidents such as ice-flash tripping, conductor galloping, and tower collapse with line breakage; in severe cases this threatens the safe and stable operation of the power grid and the reliability of the power supply system, and restricts the construction and development of the power system. By predicting the icing condition of the power transmission line and taking effective ice-melting measures in time, the losses caused by large-area grid paralysis due to icing can be effectively reduced.
Existing power transmission line icing prediction methods can be divided into two types: prediction methods based on a physical process and prediction methods based on data driving. A data-driven method learns the internal rules and representation levels of sample data through iterative training on a data set and obtains the complex mapping relationship between input and output, and is therefore more robust than traditional prediction based on a physical process.
Although a data-driven prediction method can obtain the complex mapping relationship between input and output through iterative training on a data set, it still has the following limitation: the traditional data-driven prediction method generally decomposes the acquired data sequence first and then predicts each single variable separately, so it cannot capture the data dependency among multiple variables. In actual conditions, the icing condition of a power transmission line is influenced by multiple meteorological factors (multiple variables) at the same time, and a complex nonlinear relationship exists between these meteorological factors and the icing thickness of the line. The prediction result of a traditional prediction model is limited by the decomposition effect and ignores the interaction among the meteorological factors, so the icing condition of the power transmission line cannot be predicted accurately.
Disclosure of Invention
The invention provides a power transmission line icing prediction method based on an auto-former-progressive decomposition model, which is used for solving the technical problem that existing power transmission line icing prediction methods consider only the influence of a single variable when predicting line icing, so that the prediction result is inaccurate.
The invention provides a power transmission line icing prediction method based on an auto-former-progressive decomposition model, which comprises the following steps:
s1, collecting historical data of a power transmission line, and constructing a multi-variable data set based on the historical data; the historical data comprises wire tension, temperature, humidity and wind speed;
s2, dividing the multi-variable data set into a training set, a verification set and a test set;
s3, constructing an initial icing prediction model based on an auto-former-progressive decomposition model, and training the initial icing prediction model by using the training set to obtain a first training icing prediction model;
s4, verifying the first training icing prediction model by using the verification set, judging whether the verification error of the first training icing prediction model meets a first preset threshold value, if so, testing the first training icing prediction model by using the test set, and if not, returning to the step S3; judging whether the test error of the first training icing prediction model meets a second preset threshold value, if so, taking the first training icing prediction model as a final icing prediction model, and if not, returning to the step S3;
s5, setting a prediction step length, obtaining a predicted wire tension based on the wire tension, the temperature, the humidity and the wind speed which are obtained in advance and the final icing prediction model, and obtaining a predicted wire icing thickness based on the predicted wire tension.
Preferably, step S1 specifically includes:
s11, collecting historical data of the power transmission line, and performing abnormal value processing and missing value filling on the historical data to obtain first historical data;
s12, constructing a first multivariate sequence data set based on the first historical data, wherein the first multivariate sequence data set specifically comprises the following steps:
X^t = { x_1^t, x_2^t, ..., x_{L_x}^t | x_i^t ∈ R^{d_x} }
wherein x_i^t represents the value of the i-th sequence element at time t; d_x indicates that the sequence X has d-dimensional features; X^t represents the input sequence at time t; L_x is the length of the input history sequence x; R represents the set of real numbers;
and S13, carrying out normalization processing on the first multivariate sequence data set to obtain a multivariate sequence data set.
Preferably, step S3 specifically includes:
s31, constructing an initial icing prediction model based on an auto-former-progressive decomposition model; wherein the initial icing prediction model comprises an encoder and a decoder; wherein the encoder includes a number of encoding layers and the decoder includes a number of decoding layers;
s32, acquiring a preset number of samples in the training set, recording the samples as input sequences, and performing deep decomposition on the input sequences to obtain an encoder input sequence and a decoder input sequence; wherein the encoder input sequence comprises an encoder input timestamp sequence and an encoder input character sequence, and the decoder input sequence comprises a decoder input timestamp sequence and a decoder input character sequence;
s33, coding the encoder input time stamp sequence to obtain a first coding time sequence; coding the input character sequence of the coder to obtain a first coded character sequence; adding the first coding time sequence and the first coding character sequence to obtain a first coding sequence;
s34, performing information aggregation on the first coding sequence to obtain a first coding aggregation sequence; adding the first coding sequence and the first coding aggregation sequence, and then performing sequence decomposition to obtain a first coding period factor and a first coding trend factor;
s35, inputting the first coding period factor and the first coding trend factor into a preset full-connection layer respectively to obtain a first coding period item and a first coding trend item, and adding the first coding period item, the first coding trend item and the first coding period factor to perform sequence decomposition to obtain a second coding period factor;
s36, decoding the time stamp sequence input by the decoder to obtain a first decoding time sequence; decoding the input character sequence of the decoder to obtain a first decoded character sequence; adding the first decoded time sequence and the first decoded character sequence to obtain a first decoded sequence;
s37, performing information aggregation on the first decoding sequence to obtain a first decoding aggregation sequence; adding the first decoding sequence and the first decoding aggregation sequence, and then performing sequence decomposition to obtain a first decoding period factor and a first decoding trend factor;
s38, performing information aggregation on the second coding period factor and the first decoding period factor to obtain a second decoding aggregation sequence; adding the second decoding aggregation sequence and the first decoding period factor, and then performing sequence decomposition to obtain a second decoding period factor and a second decoding trend factor;
s39, inputting the second decoding period factor and the second decoding trend factor into a preset full-connection layer respectively to obtain a second decoding period item and a second decoding trend item; adding the second decoding period factor, the second decoding period item and the second decoding trend item, and then performing sequence decomposition to obtain a third decoding period factor and a third decoding trend factor;
s40, acquiring a decoding trend factor based on the first decoding trend factor, the second decoding trend factor, the third decoding trend factor and a pre-acquired zeroth decoding trend factor;
s41, adding the third decoding period factor and the decoding trend factor to be used as input of a preset full-connection layer to obtain a prediction sequence; wherein the prediction sequence comprises a predicted wire tension;
s42, judging whether the iteration times of the initial icing prediction model meet a preset iteration threshold value, if so, outputting the initial icing prediction model as a first training icing prediction model; if not, the process returns to step S32.
Preferably, in step S34, the first coding sequence is subjected to information aggregation to obtain a first coding aggregation sequence; adding the first coding sequence and the first coding aggregation sequence, and then performing sequence decomposition to obtain a first coding period factor and a first coding trend factor, wherein the method specifically comprises the following steps:
S_{en}^{l,1}, T_{en}^{l,1} = SeriesDecomp( AutoCorrelation(X_{en}^{l-1}) + X_{en}^{l-1} )
wherein X_{en}^{l-1} is the input of the l-th coding layer (for l = 1, the first coding sequence); the period factor S_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding period factor; the trend factor T_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
Preferably, in step S35, the first coding period factor and the first coding tendency factor are respectively input to a preset full link layer to obtain a first coding period item and a first coding tendency item, and the first coding period item, the first coding tendency item and the first coding period factor are added and then sequence decomposition is performed to obtain a second coding period factor, which specifically includes:
S_{en}^{l,2}, T_{en}^{l,2} = SeriesDecomp( FeedForward(S_{en}^{l,1}) + FeedForward(T_{en}^{l,1}) + S_{en}^{l,1} )
wherein the period factor S_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding period factor; the trend factor T_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding trend factor; the period factor S_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding period factor; the trend factor T_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding trend factor; FeedForward(·) represents the fully connected processing; SeriesDecomp(·) represents the sequence decomposition processing.
Preferably, in step S37, the first decoding sequence is subjected to information aggregation to obtain a first decoding aggregation sequence; adding the first decoding sequence and the first decoding aggregation sequence, and then performing sequence decomposition to obtain a first decoding period factor and a first decoding trend factor, wherein the method specifically comprises the following steps:
S_{de}^{l,1}, T_{de}^{l,1} = SeriesDecomp( AutoCorrelation(X_{de}^{l-1}) + X_{de}^{l-1} )
wherein X_{de}^{l-1} is the input of the l-th decoding layer (for l = 1, the first decoding sequence); the period factor S_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding period factor; the trend factor T_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
Preferably, in step S38, the second coding period factor and the first decoding period factor are subjected to information aggregation to obtain a second decoding aggregation sequence; the second decoding aggregation sequence and the first decoding period factor are added and then subjected to sequence decomposition to obtain a second decoding period factor and a second decoding trend factor, specifically:
S_{de}^{l,2}, T_{de}^{l,2} = SeriesDecomp( AutoCorrelation(S_{de}^{l,1}, S_{en}^{l,2}) + S_{de}^{l,1} )
wherein the period factor S_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding period factor; the period factor S_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding period factor; the period factor S_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding period factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
Preferably, in step S39, the second decoding period factor and the second decoding trend factor are respectively input into a preset full-connection layer to obtain a second decoding period item and a second decoding trend item; the second decoding period factor, the second decoding period item and the second decoding trend item are added and then subjected to sequence decomposition to obtain a third decoding period factor and a third decoding trend factor, specifically:
S_{de}^{l,3}, T_{de}^{l,3} = SeriesDecomp( FeedForward(S_{de}^{l,2}) + FeedForward(T_{de}^{l,2}) + S_{de}^{l,2} )
wherein the period factor S_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding period factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; the period factor S_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding period factor; the trend factor T_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding trend factor; FeedForward(·) represents the fully connected processing; SeriesDecomp(·) represents the sequence decomposition processing.
Preferably, in step S40, the obtaining of the decoding tendency factor based on the first decoding tendency factor, the second decoding tendency factor, the third decoding tendency factor and the pre-obtained zeroth decoding tendency factor specifically includes:
T_{de}^{l} = T_{de}^{l-1} + W_{l,1}·T_{de}^{l,1} + W_{l,2}·T_{de}^{l,2} + W_{l,3}·T_{de}^{l,3}
wherein the trend factor T_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding trend factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; the trend factor T_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding trend factor; the trend factor T_{de}^{l-1} obtained by the sequence decomposition of the (l-1)-th decoding layer is the zeroth decoding trend factor; W_{l,i} represents the projection applied to T_{de}^{l,i}, i ∈ {1, 2, 3}.
Preferably, the predicting of the icing thickness of the wire based on the predicted wire tension specifically comprises:
and obtaining the predicted wire icing thickness according to the predicted wire tension and a preset wire icing thickness calculation model.
According to the technical scheme, the invention has the following advantages: historical wire tension, temperature, humidity and wind speed data of a power transmission line are collected and a multivariable data set is constructed from them; an initial icing prediction model is built on the basis of an auto-former-progressive decomposition model, the time series formed by the historical wire tension, temperature, humidity and wind speed data is taken as the model input, and the initial icing prediction model is trained to obtain the final icing prediction model. Based on the auto-former-progressive decomposition model, the time series containing multiple variables is decomposed and recombined into new time series, so that the trend term and the period term of the time series are processed separately; the period and the seasonal trend of the time series can therefore be captured effectively, which improves the accuracy of long-term prediction.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a method for predicting icing of a power transmission line based on an auto-transformer-progressive decomposition model according to an embodiment of the present invention;
fig. 2 is a model structure diagram of an icing prediction model provided in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a power transmission line icing prediction method based on an auto-former-progressive decomposition model, and relates to the technical field of power transmission line icing monitoring. By analyzing the dependency relationship among the various meteorological factors that influence the icing condition of a power transmission line, it solves the technical problem that existing power transmission line icing prediction methods consider only the influence of a single variable, so that the prediction result is inaccurate.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Embodiment 1 of the present application provides a method for predicting icing of a power transmission line, please refer to fig. 1, in embodiment 1, the method includes:
s1, collecting historical data of a power transmission line, and constructing a multi-variable data set based on the historical data; wherein the historical data comprises wire tension, temperature, humidity and wind speed.
The method comprises the steps of collecting temperature data, humidity data, wind speed data and wire tension data of the power transmission line, and constructing a sequence data set taking [ wire tension data- { temperature data, humidity data and wind speed data } ] as a sequence. The sequence data set contains [ wire tension data- { temperature data, humidity data, wind speed data } ] sequences under a plurality of time nodes.
The wire tension data in the sequence corresponds to the temperature data, the humidity data and the wind speed data, namely the wire tension data at the time t corresponds to the temperature data, the humidity data and the wind speed data at the time t to form a sequence.
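For illustration only, the following minimal Python sketch shows one way such a sequence data set could be assembled from aligned monitoring records; the column names, the hourly sampling and the example values are assumptions for illustration, not part of the disclosure.

```python
# Minimal sketch (illustration only): assemble the [wire tension - {temperature, humidity,
# wind speed}] sequence data set from aligned monitoring records. Column names, the hourly
# sampling and the example values are assumptions, not part of the disclosure.
import pandas as pd

def build_multivariate_dataset(records: pd.DataFrame) -> pd.DataFrame:
    """records: one row per time node with columns
    ['time', 'tension', 'temperature', 'humidity', 'wind_speed']."""
    df = records.copy()
    df['time'] = pd.to_datetime(df['time'])
    df = df.sort_values('time').set_index('time')
    # Each row pairs the wire tension at time t with the meteorological factors at time t,
    # i.e. one [tension - {temperature, humidity, wind speed}] sequence element.
    return df[['tension', 'temperature', 'humidity', 'wind_speed']]

if __name__ == '__main__':
    raw = pd.DataFrame({
        'time': ['2022-01-01 01:00', '2022-01-01 00:00', '2022-01-01 02:00'],
        'tension': [31.5, 31.2, 32.0],
        'temperature': [-2.4, -2.1, -2.8],
        'humidity': [93.0, 91.0, 95.0],
        'wind_speed': [3.5, 3.2, 3.1],
    })
    print(build_multivariate_dataset(raw))
```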
And S2, dividing the multi-variable data set into a training set, a verification set and a test set.
During the development of the model, it is desirable that the trained model perform well on new, unseen data. To simulate new, unseen data, the available data are segmented into a training set, a validation set, and a test set. The training set is typically the larger subset of the data (e.g., 60% of the original data), and the validation set and test set are typically smaller subsets (e.g., 20% of the original data each).
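A minimal sketch of such a split is shown below; the 60/20/20 ratios follow the example percentages above, and splitting by time order rather than randomly is an assumption consistent with common time-series forecasting practice.

```python
# Minimal sketch of a chronological 60/20/20 split; the exact ratios and the decision to
# split by time order (rather than randomly) are assumptions, not requirements of the method.
import numpy as np

def split_dataset(data: np.ndarray, train_frac: float = 0.6, val_frac: float = 0.2):
    n = len(data)
    n_train, n_val = int(n * train_frac), int(n * val_frac)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test

if __name__ == '__main__':
    data = np.arange(100).reshape(-1, 1)          # stand-in for the multivariate data set
    train, val, test = split_dataset(data)
    print(len(train), len(val), len(test))        # 60 20 20
```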
And S3, constructing an initial icing prediction model based on an auto-former-progressive decomposition model, and training the initial icing prediction model by using the training set to obtain a first training icing prediction model.
Conventional time series decomposition refers to decomposing a time series into several components, each component representing a type of underlying time pattern, such as a period term (Season) and a trend term (Trend). Because the future is unknown in a prediction problem, the usual practice is to decompose the past sequence first and then predict each component (parameter) of the sequence separately. However, the prediction result obtained in this way is limited by the decomposition effect, and the interaction among the future components is neglected, so the prediction accuracy is not high.
A Transformer-based time series prediction model captures the dependencies between time points through the self-attention mechanism and reduces the influence of the decomposition effect on the prediction result, but it is not sensitive enough to the temporal dependencies among complex time patterns in long sequences.
The auto-former-progressive decomposition model adopts a deep decomposition architecture and embeds sequence decomposition as an internal unit of the model into the encoder-decoder. During prediction, the model alternates between optimizing the prediction result and decomposing the sequence, i.e., the trend term and the period term are gradually separated from the hidden variables to realize progressive decomposition. In this way the model can not only predict the components (parameters) of the sequence but also capture the temporal dependency relationships among complex time patterns in long sequences. Therefore, an initial icing prediction model constructed on the basis of the auto-former-progressive decomposition model can predict multiple meteorological factors/icing data at the same time and capture the dependency relationships among them over long sequences, which improves the prediction accuracy.
It should be noted that the input of the initial icing prediction model is a [ wire tension data- { temperature data, humidity data, wind speed data } ] sequence, and the output of the model is predicted wire tension data in the [ predicted wire tension data-prediction { temperature data, humidity data, wind speed data } ] sequence.
S4, verifying the first training icing prediction model by using the verification set, judging whether the verification error of the first training icing prediction model meets a first preset threshold value, if so, testing the first training icing prediction model by using the test set, and if not, returning to the step S3; and if the test error of the first training icing prediction model meets a second preset threshold value, taking the first training icing prediction model as a final icing prediction model, and if not, returning to the step S3.
It can be understood that the parameters of the model need to be adjusted continuously during training, and the generalization ability of the model is improved by adjusting these parameters. The trained icing prediction model is tuned using the verification set; after the optimal parameters are determined, the test set is used to estimate the generalization error of the model by comparing the model error of the first training icing prediction model on the test set with the second preset error threshold. When the model error meets the preset threshold, the generalization ability of the first training icing prediction model is considered to meet the prediction requirement and the model can be deployed directly in a real scene; when it does not, the generalization ability is considered not to meet the prediction requirement, and the method returns to the model training step to further adjust the model parameters.
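The control flow of steps S3-S4 can be sketched as follows; the threshold values, the maximum number of rounds and the train_one_round/evaluate callables are placeholders, not the patent's actual training code.

```python
# Hedged sketch of the S3-S4 control flow: keep training until the validation error meets
# the first preset threshold and the test error meets the second preset threshold.
# train_one_round, evaluate, the thresholds and max_rounds are placeholders.

def fit_until_thresholds(train_one_round, evaluate, model, train_set, val_set, test_set,
                         val_threshold=0.05, test_threshold=0.05, max_rounds=50):
    for _ in range(max_rounds):
        model = train_one_round(model, train_set)           # step S3: train
        if evaluate(model, val_set) > val_threshold:        # step S4: validation check
            continue                                        # back to step S3
        if evaluate(model, test_set) <= test_threshold:     # step S4: test check
            return model                                    # final icing prediction model
    raise RuntimeError('error thresholds not reached within max_rounds')

if __name__ == '__main__':
    # Toy demonstration: the reported error halves each round, so both thresholds are met.
    state = {'error': 0.4}
    def toy_train(model, _train):
        state['error'] *= 0.5
        return model
    def toy_eval(_model, _data):
        return state['error']
    print(fit_until_thresholds(toy_train, toy_eval, object(), None, None, None))
```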
And S5, setting a prediction step length, acquiring the predicted wire tension based on the wire tension, the temperature, the humidity and the wind speed which are acquired in advance and the final icing prediction model, and acquiring the predicted wire icing thickness based on the predicted wire tension.
It can be understood that the icing related data of the power transmission line at the future time can be predicted through the historical data of the power transmission line by the final icing prediction model obtained in the steps S3-S4; the prediction step size can be understood as a prediction duration, for example: the method is characterized in that four characteristic data of temperature, humidity, wind speed and wire tension in the past N hours are adopted to predict the wire (maximum) tension in the future M hours, wherein M is a preset step length.
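For illustration, a minimal sketch of this windowing is given below; the values N = 24 and M = 6 are arbitrary examples, not values fixed by the method.

```python
# Minimal sketch of the windowing described above: the past N hours of
# [tension, temperature, humidity, wind speed] predict the wire tension of the next
# M hours (M is the prediction step length). N = 24 and M = 6 are arbitrary examples.
import numpy as np

def make_windows(series: np.ndarray, n_in: int, m_out: int, tension_col: int = 0):
    """series: shape (T, 4). Returns inputs (num, n_in, 4) and tension targets (num, m_out)."""
    inputs, targets = [], []
    for start in range(len(series) - n_in - m_out + 1):
        inputs.append(series[start:start + n_in])
        targets.append(series[start + n_in:start + n_in + m_out, tension_col])
    return np.stack(inputs), np.stack(targets)

if __name__ == '__main__':
    demo = np.random.rand(48, 4)                   # 48 hours of the 4 features
    x, y = make_windows(demo, n_in=24, m_out=6)    # past 24 h -> tension of the next 6 h
    print(x.shape, y.shape)                        # (19, 24, 4) (19, 6)
```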
The preset wire icing thickness calculation model (given as an equation image in the original document) relates the following quantities: d is the predicted icing thickness of the wire; ρ is the ice density, taken as 0.9 × 10⁻³ kg/(m·mm²) when the wire icing type is rime according to the power design rule; D is the original diameter of the wire; q_ice is the load per unit length of the wire after icing (see the prior art CN113686286A for how it is obtained); and F is the predicted wire tension.
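As a hedged illustration only: the exact thickness formula is shown as an equation image in the original document and is not reproduced here. The sketch below assumes the common geometric relation q_ice = ρ·π·d·(d + D) between the per-unit-length ice load and an equivalent uniform icing thickness; this relation, and passing q_ice in directly (its derivation from the predicted tension F follows the cited prior art CN113686286A, which is likewise not reproduced), are assumptions.

```python
# Hedged sketch only: the patent's exact thickness formula is shown as an equation image and
# is not reproduced here. This helper assumes the common geometric relation
#     q_ice = rho * pi * d * (d + D)
# between the per-unit-length ice load q_ice and an equivalent uniform icing thickness d.
# q_ice would be derived from the predicted tension F as in the cited prior art
# (CN113686286A), which is not reproduced either, so q_ice is passed in directly.
import math

RHO_RIME = 0.9e-3   # ice density for rime, kg/(m*mm^2), as stated in the description

def ice_thickness_from_load(q_ice: float, wire_diameter_mm: float, rho: float = RHO_RIME) -> float:
    """Solve rho*pi*d*(d + D) = q_ice for the thickness d (mm); q_ice in kg/m, D in mm."""
    D = wire_diameter_mm
    return (-D + math.sqrt(D * D + 4.0 * q_ice / (math.pi * rho))) / 2.0

if __name__ == '__main__':
    print(round(ice_thickness_from_load(q_ice=0.35, wire_diameter_mm=26.8), 2), 'mm')
```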
According to the technical scheme, the invention has the following advantages: historical wire tension, temperature, humidity and wind speed data of a power transmission line are collected and a multivariable data set is constructed from them; an initial icing prediction model is built on the basis of an auto-former-progressive decomposition model, the time series formed by the historical wire tension, temperature, humidity and wind speed data is taken as the model input, and the initial icing prediction model is trained to obtain the final icing prediction model. The initial icing prediction model established on the basis of the auto-former-progressive decomposition model can capture the influence of multiple variables on icing prediction and improve the icing prediction accuracy.
On the basis of example 1, the present application provides another preferred example 2.
The step S1 specifically includes:
s11, collecting historical data of the power transmission line, and performing abnormal value processing and missing value filling on the historical data to obtain first historical data.
The collected historical data of the power transmission line are preprocessed, including abnormal value processing and missing value filling of the historical icing data. It will be appreciated that the quality of the data has a great impact on the quality of the generated model, so preprocessing the acquired data can improve the quality of the obtained model.
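A minimal preprocessing sketch is given below; the IQR-based outlier rule and linear interpolation are illustrative choices, since the embodiment only states that abnormal values are processed and missing values are filled.

```python
# Minimal preprocessing sketch: flag abnormal values with a simple IQR rule, treat them as
# missing, and fill missing values by linear interpolation. These particular rules are
# illustrative choices, not the rules prescribed by the embodiment.
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    for col in out.columns:
        q1, q3 = out[col].quantile(0.25), out[col].quantile(0.75)
        iqr = q3 - q1
        mask = (out[col] < q1 - 3 * iqr) | (out[col] > q3 + 3 * iqr)   # abnormal values
        out.loc[mask, col] = None                                      # treat as missing
    return out.interpolate(method='linear').ffill().bfill()           # fill missing values

if __name__ == '__main__':
    demo = pd.DataFrame({'tension': [31.0, 31.2, None, 31.4, 900.0, 31.5, 31.6]})
    print(preprocess(demo))
```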
S12, constructing a first multivariate sequence data set based on the first historical data, wherein the first multivariate sequence data set specifically comprises the following steps:
X^t = { x_1^t, x_2^t, ..., x_{L_x}^t | x_i^t ∈ R^{d_x} }
wherein x_i^t represents the value of the i-th sequence element at time t; d_x indicates that the sequence X has d-dimensional features; X^t represents the input sequence at time t; L_x is the length of the input history sequence x; R represents the set of real numbers;
and S13, carrying out normalization processing on the first multivariate sequence data set to obtain a multivariate sequence data set.
In order to improve the accuracy of the model and accelerate the convergence of model training, this embodiment further normalizes the preprocessed sequence data set; preferably, standardization is used to normalize the sequence data set in this embodiment.
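A minimal sketch of this normalization step is shown below; z-score standardization per feature is assumed, with the statistics computed on the training portion only to avoid information leakage.

```python
# Minimal sketch of the normalization in step S13; z-score standardization per feature is
# assumed, with the statistics computed on the training portion only to avoid leakage.
import numpy as np

def standardize(train: np.ndarray, *others: np.ndarray):
    mean = train.mean(axis=0)
    std = train.std(axis=0) + 1e-8                 # avoid division by zero
    return tuple((arr - mean) / std for arr in (train, *others))

if __name__ == '__main__':
    train, val, test = np.random.rand(60, 4), np.random.rand(20, 4), np.random.rand(20, 4)
    train_n, val_n, test_n = standardize(train, val, test)
    print(train_n.mean(axis=0).round(3))           # approximately zero per feature
```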
On the basis of example 1 or 2, the present application provides another preferred example 3. For ease of understanding, please refer to fig. 2.
The step S3 specifically includes:
s31, constructing an initial icing prediction model based on an auto-former-progressive decomposition model; wherein the initial icing prediction model comprises an encoder and a decoder; wherein the encoder includes a number of encoding layers and the decoder includes a number of decoding layers.
It should be noted that the initial icing prediction model includes an encoder and a decoder, the encoder includes a plurality of encoding layers, the decoder includes a plurality of decoding layers, after the iteration number, the batch processing sample number, the learning rate, the input sequence length, the length of the sequence to be predicted, and the like of the model are set, a training set is input into the initial icing prediction model, the training set is firstly encoded through the plurality of encoding layers of the encoder to obtain an encoding sequence, then the encoding sequence is decoded through the plurality of decoding layers of the decoder to obtain a decoding sequence, and a final prediction sequence (result) is obtained according to the decoding sequence.
S32, acquiring a preset number of samples in the training set, recording the samples as input sequences, and performing deep decomposition on the input sequences to obtain encoder input sequences and decoder input sequences; wherein the encoder input sequence comprises an encoder input time stamp sequence and an encoder input character sequence, and the decoder input sequence comprises a decoder input time stamp sequence and a decoder input character sequence.
A time series is a group of data points arranged in the order in which they occur in time. Such a series can be split into a timestamp sequence containing only the time data and a character (value) sequence containing the other data. For example, if the sequence contains the features time, wind speed, temperature, humidity and tension, the timestamp sequence is the time, and the character sequence is the wind speed, temperature, humidity and tension.
S33, coding the encoder input time stamp sequence to obtain a first coding time sequence; carrying out encoding processing on the character sequence input by the encoder to obtain a first encoded character sequence; and adding the first coding time sequence and the first coding character sequence to obtain a first coding sequence.
It can be understood that the variables in the encoder input sequence (the encoder input timestamp sequence and the encoder input character sequence) are not continuous; making the discrete variables in the encoder input sequence continuous through embedding improves the expression capability of the encoder input sequence and thus the generalization capability of the model.
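As a hedged illustration, the following PyTorch sketch combines a linear projection of the character (value) sequence with embeddings of timestamp features and adds them, as described in step S33; the specific timestamp features (hour, month) and the model dimension are assumptions.

```python
# Hedged PyTorch sketch of step S33: project the character (value) sequence, embed the
# timestamp features, and add the two to obtain the coding sequence. The specific timestamp
# features (hour, month) and the model dimension d_model are assumptions.
import torch
import torch.nn as nn

class InputEmbedding(nn.Module):
    def __init__(self, n_features: int = 4, d_model: int = 64):
        super().__init__()
        self.value_proj = nn.Linear(n_features, d_model)   # character (value) sequence
        self.hour_emb = nn.Embedding(24, d_model)          # timestamp sequence features
        self.month_emb = nn.Embedding(13, d_model)

    def forward(self, values, hours, months):
        # values: (batch, length, n_features); hours/months: (batch, length) integer codes
        return self.value_proj(values) + self.hour_emb(hours) + self.month_emb(months)

if __name__ == '__main__':
    emb = InputEmbedding()
    v = torch.randn(2, 24, 4)
    h = torch.randint(0, 24, (2, 24))
    m = torch.randint(1, 13, (2, 24))
    print(emb(v, h, m).shape)                      # torch.Size([2, 24, 64])
```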
S34, carrying out information aggregation on the first coding sequence to obtain a first coding aggregation sequence; and adding the first coding sequence and the first coding aggregation sequence, and then performing sequence decomposition to obtain a first coding period factor and a first coding trend factor.
S_{en}^{l,1}, T_{en}^{l,1} = SeriesDecomp( AutoCorrelation(X_{en}^{l-1}) + X_{en}^{l-1} )
wherein X_{en}^{l-1} is the input of the l-th coding layer (for l = 1, the first coding sequence); the period factor S_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding period factor; the trend factor T_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
The concrete implementation of SeriesDecomp(·) is as follows:
A moving average AvgPool(·) is adopted to eliminate the periodic fluctuation, and a padding operation Padding(·) is used to keep the sequence length unchanged and highlight the long-term trend:
χ_t = AvgPool(Padding(χ)), χ_s = χ − χ_t
wherein χ ∈ R^{L×d} represents the input sequence after encoding, L denotes the length of the input sequence, d denotes the feature dimension of the input sequence, R represents the set of real numbers, χ_s denotes the period part, and χ_t denotes the trend part. The above processing is denoted as X_s, X_t = SeriesDecomp(X).
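A minimal PyTorch sketch of SeriesDecomp(·) as described above is given below; the kernel size of the moving average is an illustrative choice.

```python
# Minimal PyTorch sketch of SeriesDecomp(.): a moving average over a replication-padded
# sequence gives the trend part chi_t, and the period part is chi_s = chi - chi_t.
# The kernel size of the moving average is an illustrative choice.
import torch
import torch.nn as nn

class SeriesDecomp(nn.Module):
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.kernel_size = kernel_size
        self.avg = nn.AvgPool1d(kernel_size=kernel_size, stride=1)

    def forward(self, x):                           # x: (batch, length, d)
        pad = (self.kernel_size - 1) // 2
        front = x[:, :1, :].repeat(1, pad, 1)       # Padding(.): repeat the edge values
        back = x[:, -1:, :].repeat(1, pad, 1)
        padded = torch.cat([front, x, back], dim=1)
        trend = self.avg(padded.transpose(1, 2)).transpose(1, 2)   # chi_t = AvgPool(Padding(chi))
        season = x - trend                                          # chi_s = chi - chi_t
        return season, trend

if __name__ == '__main__':
    season, trend = SeriesDecomp()(torch.randn(2, 96, 64))
    print(season.shape, trend.shape)                # both (2, 96, 64)
```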
AutoCorrelation(·) is realized as follows:
by passing
Figure BDA0003957441260000117
Representing a sequence x t And its tau-step lag sequence { x t-τ -time delay similarity between, wherein;
Figure BDA0003957441260000118
Figure BDA0003957441260000121
wherein τ represents time delay; l represents the number of time delays in the initial time delay sequence; tau. 1 ,…τ k To represent
Figure BDA0003957441260000122
A time delay sequence formed by the first k time delays corresponding to the maximum value, wherein,
Figure BDA0003957441260000123
representing the time delay similarity between the sequence Q and the tau step lag sequence K;
Figure BDA0003957441260000124
is composed of
Figure BDA0003957441260000125
The result after normalization function processing;
Figure BDA0003957441260000126
denotes subjecting the sequence V to τ i And (4) step delay operation.
Figure BDA0003957441260000127
Is shown in all
Figure BDA0003957441260000128
Take out a specified number of maximums
Figure BDA0003957441260000129
The time delay tau corresponding to the value;
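A hedged single-head sketch of the AutoCorrelation(·) aggregation is given below; the FFT-based estimation of R_{Q,K}(τ), the choice of k and the roll direction are implementation assumptions, and the multi-head projections used in practice are omitted.

```python
# Hedged single-head sketch of the AutoCorrelation(.) aggregation: estimate R_{Q,K}(tau)
# with an FFT-based correlation, keep the top-k delays, normalize their scores with SoftMax
# and aggregate tau-rolled copies of V. The FFT estimation, the value of k, the roll
# direction and the omission of multi-head projections are implementation assumptions.
import torch

def autocorrelation(q, k, v, top_k: int = 4):
    # q, k, v: (batch, length, d)
    L = q.size(1)
    q_fft = torch.fft.rfft(q, dim=1)
    k_fft = torch.fft.rfft(k, dim=1)
    corr = torch.fft.irfft(q_fft * torch.conj(k_fft), n=L, dim=1)   # R_{Q,K}(tau), (batch, L, d)
    scores = corr.mean(dim=-1)                                      # average over channels
    weights, delays = torch.topk(scores, top_k, dim=1)              # top-k delays tau_1..tau_k
    weights = torch.softmax(weights, dim=1)                         # normalized R-hat values
    out = torch.zeros_like(v)
    for i in range(top_k):
        for b in range(v.size(0)):
            rolled = torch.roll(v[b], shifts=-int(delays[b, i]), dims=0)   # Roll(V, tau_i)
            out[b] = out[b] + weights[b, i] * rolled
    return out

if __name__ == '__main__':
    x = torch.randn(2, 96, 64)
    print(autocorrelation(x, x, x).shape)           # torch.Size([2, 96, 64])
```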
s35, inputting the first coding period factor and the first coding trend factor into a preset full connection layer respectively to obtain a first coding period item and a first coding trend item, and adding the first coding period item, the first coding trend item and the first coding period factor to perform sequence decomposition to obtain a second coding period factor.
S_{en}^{l,2}, T_{en}^{l,2} = SeriesDecomp( FeedForward(S_{en}^{l,1}) + FeedForward(T_{en}^{l,1}) + S_{en}^{l,1} )
wherein the period factor S_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding period factor; the trend factor T_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding trend factor; the period factor S_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding period factor; the trend factor T_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding trend factor; FeedForward(·) represents the fully connected processing; SeriesDecomp(·) represents the sequence decomposition processing.
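Putting steps S34-S35 together, one coding layer can be sketched as follows; this assumes the SeriesDecomp module and the autocorrelation() helper from the sketches above are in scope, and, following the description, passes both the period factor and the trend factor of the first decomposition through the fully connected layer before the second decomposition.

```python
# Sketch of one coding layer combining steps S34-S35. The SeriesDecomp module and the
# autocorrelation() helper from the preceding sketches are assumed to be in scope.
# Sharing one feed-forward block for both factors is a simplification.
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model: int = 64, kernel_size: int = 25):
        super().__init__()
        self.decomp1 = SeriesDecomp(kernel_size)
        self.decomp2 = SeriesDecomp(kernel_size)
        self.feed_forward = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                          nn.Linear(d_model, d_model))

    def forward(self, x_enc):                                  # x_enc: coding sequence X_en^{l-1}
        agg = autocorrelation(x_enc, x_enc, x_enc)             # first coding aggregation sequence
        s1, t1 = self.decomp1(agg + x_enc)                     # S_en^{l,1}, T_en^{l,1}
        s2, _t2 = self.decomp2(self.feed_forward(s1) + self.feed_forward(t1) + s1)
        return s2                                              # second coding period factor S_en^{l,2}

if __name__ == '__main__':
    print(EncoderLayer()(torch.randn(2, 96, 64)).shape)        # torch.Size([2, 96, 64])
```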
S36, decoding the decoder input time sequence to obtain a first decoding time sequence; decoding the input character sequence of the decoder to obtain a first decoded character sequence; adding the first decoded time sequence and the first decoded character sequence to obtain a first decoded sequence.
Similarly, the variables in the decoder input sequence (decoder input timestamp sequence and decoder input character sequence) are not continuous, and discrete variables in the decoder input sequence are continuous based on embedding, so that the expression capability of the decoder input sequence can be improved, and the generalization capability of the model can be further improved.
S37, performing information aggregation on the first decoding sequence to obtain a first decoding aggregation sequence; and adding the first decoding sequence and the first decoding aggregation sequence, and then performing sequence decomposition to obtain a first decoding period factor and a first decoding trend factor.
S_{de}^{l,1}, T_{de}^{l,1} = SeriesDecomp( AutoCorrelation(X_{de}^{l-1}) + X_{de}^{l-1} )
wherein X_{de}^{l-1} is the input of the l-th decoding layer (for l = 1, the first decoding sequence); the period factor S_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding period factor; the trend factor T_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
S38, performing information aggregation on the second coding period factor and the first decoding period factor to obtain a second coding aggregation sequence; and adding the second decoding aggregation sequence and the first decoding period factor, and then performing sequence decomposition to obtain a second decoding period factor and a second decoding trend factor.
S_{de}^{l,2}, T_{de}^{l,2} = SeriesDecomp( AutoCorrelation(S_{de}^{l,1}, S_{en}^{l,2}) + S_{de}^{l,1} )
wherein the period factor S_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding period factor; the period factor S_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding period factor; the period factor S_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding period factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
S39, inputting the second decoding period factor and the second decoding trend factor into a preset full-connection layer respectively to obtain a second decoding period item and a second decoding trend item; and adding the second decoding period factor, the second decoding period item and the second decoding trend item, and then performing sequence decomposition to obtain a third decoding period factor and a third decoding trend factor.
S_{de}^{l,3}, T_{de}^{l,3} = SeriesDecomp( FeedForward(S_{de}^{l,2}) + FeedForward(T_{de}^{l,2}) + S_{de}^{l,2} )
wherein the period factor S_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding period factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; the period factor S_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding period factor; the trend factor T_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding trend factor; FeedForward(·) represents the fully connected processing; SeriesDecomp(·) represents the sequence decomposition processing.
And S40, acquiring the decoding trend factors based on the first decoding trend factor, the second decoding trend factor, the third decoding trend factor and the pre-acquired zeroth decoding trend factor.
T_{de}^{l} = T_{de}^{l-1} + W_{l,1}·T_{de}^{l,1} + W_{l,2}·T_{de}^{l,2} + W_{l,3}·T_{de}^{l,3}
wherein the trend factor T_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding trend factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; the trend factor T_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding trend factor; the trend factor T_{de}^{l-1} obtained by the sequence decomposition of the (l-1)-th decoding layer is the zeroth decoding trend factor; W_{l,i} represents the projection applied to T_{de}^{l,i}, i ∈ {1, 2, 3}.
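A minimal sketch of this trend accumulation is given below; the output channel number (a single predicted-tension channel) and the use of bias-free linear projections for W_{l,i} are assumptions.

```python
# Minimal sketch of the trend accumulation in step S40: the trend factors from the three
# sequence decompositions of the l-th decoding layer are projected by W_{l,1}..W_{l,3} and
# added to the trend passed in from the previous decoding layer (the zeroth decoding trend
# factor when l = 1). A single output channel and bias-free projections are assumptions.
import torch
import torch.nn as nn

class TrendAccumulator(nn.Module):
    def __init__(self, d_model: int = 64, c_out: int = 1):
        super().__init__()
        self.w1 = nn.Linear(d_model, c_out, bias=False)    # W_{l,1}
        self.w2 = nn.Linear(d_model, c_out, bias=False)    # W_{l,2}
        self.w3 = nn.Linear(d_model, c_out, bias=False)    # W_{l,3}

    def forward(self, t_prev, t1, t2, t3):
        # t_prev: T_de^{l-1}, shape (batch, length, c_out); t1..t3: (batch, length, d_model)
        return t_prev + self.w1(t1) + self.w2(t2) + self.w3(t3)

if __name__ == '__main__':
    acc = TrendAccumulator()
    t_prev = torch.zeros(2, 48, 1)
    t1 = t2 = t3 = torch.randn(2, 48, 64)
    print(acc(t_prev, t1, t2, t3).shape)               # torch.Size([2, 48, 1])
```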
S41, adding the third decoding period factor and the decoding trend factor to be used as input of a preset full-connection layer to obtain a prediction sequence; wherein the first pre-sequence comprises a predicted wire tension.
S42, judging whether the iteration times of the initial icing prediction model meet a preset iteration threshold value, if so, outputting the initial icing prediction model as a first training icing prediction model; if not, the process returns to step S32.
It should be noted that the above steps S34-S35 only show the processing of a single encoder layer, and steps S37-S39 only show the processing of a single decoder layer. In practical applications, the numbers of encoder layers and decoder layers can be set according to actual requirements, with the output of the previous layer used as the input of the next encoder or decoder layer; for example, T_{de}^{l-1} in step S40 above denotes the trend factor obtained from the sequence decomposition of the previous decoding layer. This embodiment does not specifically limit the number of layers of the encoder and the decoder.
According to the technical scheme, the invention has the following advantages: historical wire tension, temperature, humidity and wind speed data of a power transmission line are collected and a multivariable data set is constructed from them; an initial icing prediction model is built on the basis of an auto-former-progressive decomposition model, the time series formed by the historical wire tension, temperature, humidity and wind speed data is taken as the model input, and the initial icing prediction model is trained to obtain the final icing prediction model. Based on the auto-former-progressive decomposition model, the time series containing multiple variables is decomposed and recombined into new time series, so that the trend term and the period term of the time series are processed separately; the period and the seasonal trend of the time series can therefore be captured effectively, which improves the accuracy of long-term prediction.
The above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A power transmission line icing prediction method based on an auto-former-progressive decomposition model is characterized by comprising the following steps:
s1, collecting historical data of a power transmission line, and constructing a multi-variable data set based on the historical data; wherein the historical data comprises wire tension, temperature, humidity and wind speed;
s2, dividing the multi-variable data set into a training set, a verification set and a test set;
s3, constructing an initial icing prediction model based on an auto-former-progressive decomposition model, and training the initial icing prediction model by using the training set to obtain a first training icing prediction model;
s4, verifying the first training icing prediction model by using the verification set, judging whether the verification error of the first training icing prediction model meets a first preset threshold value, if so, testing the first training icing prediction model by using the test set, and if not, returning to the step S3; judging whether the test error of the first training icing prediction model meets a second preset threshold value, if so, taking the first training icing prediction model as a final icing prediction model, and if not, returning to the step S3;
s5, setting a prediction step length, obtaining a predicted wire tension based on the wire tension, the temperature, the humidity and the wind speed which are obtained in advance and the final icing prediction model, and obtaining a predicted wire icing thickness based on the predicted wire tension.
2. The prediction method according to claim 1, wherein step S1 specifically comprises:
s11, collecting historical data of the power transmission line, and performing abnormal value processing and missing value filling on the historical data to obtain first historical data;
s12, constructing a first multivariate sequence data set based on the first historical data, wherein the first multivariate sequence data set specifically comprises the following steps:
X^t = { x_1^t, x_2^t, ..., x_{L_x}^t | x_i^t ∈ R^{d_x} }
wherein x_i^t represents the value of the i-th sequence element at time t; d_x indicates that the sequence X has d-dimensional features; X^t represents the input sequence at time t; L_x is the length of the input history sequence x; R represents the set of real numbers;
and S13, carrying out normalization processing on the first multivariate sequence data set to obtain a multivariate sequence data set.
3. The prediction method according to claim 1, wherein the step S3 specifically comprises:
s31, constructing an initial icing prediction model based on an auto-former-progressive decomposition model; wherein the initial icing prediction model comprises an encoder and a decoder; wherein the encoder includes a number of encoding layers and the decoder includes a number of decoding layers;
s32, acquiring a preset number of samples in the training set, recording the samples as input sequences, and performing deep decomposition on the input sequences to obtain an encoder input sequence and a decoder input sequence; wherein the encoder input sequence comprises an encoder input timestamp sequence and an encoder input character sequence, and the decoder input sequence comprises a decoder input timestamp sequence and a decoder input character sequence;
s33, coding the encoder input time stamp sequence to obtain a first coding time sequence; coding the input character sequence of the coder to obtain a first coded character sequence; adding the first coding time sequence and the first coding character sequence to obtain a first coding sequence;
s34, performing information aggregation on the first coding sequence to obtain a first coding aggregation sequence; adding the first coding sequence and the first coding aggregation sequence, and then performing sequence decomposition to obtain a first coding period factor and a first coding trend factor;
s35, inputting the first coding period factor and the first coding trend factor into a preset full-connection layer respectively to obtain a first coding period item and a first coding trend item, and adding the first coding period item, the first coding trend item and the first coding period factor to perform sequence decomposition to obtain a second coding period factor;
s36, decoding the time stamp sequence input by the decoder to obtain a first decoding time sequence; decoding the input character sequence of the decoder to obtain a first decoded character sequence; adding the first decoded time sequence and the first decoded character sequence to obtain a first decoded sequence;
s37, performing information aggregation on the first decoding sequence to obtain a first decoding aggregation sequence; adding the first decoding sequence and the first decoding aggregation sequence, and then performing sequence decomposition to obtain a first decoding period factor and a first decoding trend factor;
s38, performing information aggregation on the second coding period factor and the first decoding period factor to obtain a second decoding aggregation sequence; adding the second decoding aggregation sequence and the first decoding period factor, and then performing sequence decomposition to obtain a second decoding period factor and a second decoding trend factor;
s39, inputting the second decoding period factor and the second decoding trend factor into a preset full-connection layer respectively to obtain a second decoding period item and a second decoding trend item; adding the second decoding period factor, the second decoding period item and the second decoding trend item, and then performing sequence decomposition to obtain a third decoding period factor and a third decoding trend factor;
s40, acquiring a decoding trend factor based on the first decoding trend factor, the second decoding trend factor, the third decoding trend factor and a pre-acquired zeroth decoding trend factor;
s41, adding the third decoding period factor and the decoding trend factor to be used as input of a preset full-connection layer to obtain a prediction sequence; wherein the prediction sequence comprises a predicted wire tension;
s42, judging whether the iteration times of the initial icing prediction model meet a preset iteration threshold value, if so, outputting the initial icing prediction model as a first training icing prediction model; if not, the process returns to step S32.
4. The prediction method according to claim 3, wherein in step S34, the first coding sequence is subjected to information aggregation to obtain a first coding aggregation sequence; adding the first coding sequence and the first coding aggregation sequence, and then performing sequence decomposition to obtain a first coding period factor and a first coding trend factor, wherein the method specifically comprises the following steps:
S_{en}^{l,1}, T_{en}^{l,1} = SeriesDecomp( AutoCorrelation(X_{en}^{l-1}) + X_{en}^{l-1} )
wherein X_{en}^{l-1} is the input of the l-th coding layer (for l = 1, the first coding sequence); the period factor S_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding period factor; the trend factor T_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
5. The prediction method according to claim 3, wherein in step S35, the step of inputting the first coding period factor and the first coding tendency factor into a preset fully-connected layer respectively to obtain a first coding period item and a first coding tendency item, and adding the first coding period item, the first coding tendency item, and the first coding period factor and then performing sequence decomposition to obtain a second coding period factor specifically comprises:
S_{en}^{l,2}, T_{en}^{l,2} = SeriesDecomp( FeedForward(S_{en}^{l,1}) + FeedForward(T_{en}^{l,1}) + S_{en}^{l,1} )
wherein the period factor S_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding period factor; the trend factor T_{en}^{l,1} obtained by the 1st sequence decomposition of the l-th coding layer is the first coding trend factor; the period factor S_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding period factor; the trend factor T_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding trend factor; FeedForward(·) represents the fully connected processing; SeriesDecomp(·) represents the sequence decomposition processing.
6. The prediction method according to claim 3, wherein in step S37, the first decoded sequence is subjected to information aggregation to obtain a first decoded aggregated sequence; adding the first decoding sequence and the first decoding aggregation sequence, and then performing sequence decomposition to obtain a first decoding period factor and a first decoding trend factor, wherein the method specifically comprises the following steps:
S_{de}^{l,1}, T_{de}^{l,1} = SeriesDecomp( AutoCorrelation(X_{de}^{l-1}) + X_{de}^{l-1} )
wherein X_{de}^{l-1} is the input of the l-th decoding layer (for l = 1, the first decoding sequence); the period factor S_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding period factor; the trend factor T_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
7. The prediction method according to claim 3, wherein in step S38, the second coding period factor and the first decoding period factor are subjected to information aggregation to obtain a second decoding aggregation sequence; the second decoding aggregation sequence and the first decoding period factor are added and then subjected to sequence decomposition to obtain a second decoding period factor and a second decoding trend factor, specifically:
S_{de}^{l,2}, T_{de}^{l,2} = SeriesDecomp( AutoCorrelation(S_{de}^{l,1}, S_{en}^{l,2}) + S_{de}^{l,1} )
wherein the period factor S_{en}^{l,2} obtained by the 2nd sequence decomposition of the l-th coding layer is the second coding period factor; the period factor S_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding period factor; the period factor S_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding period factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; AutoCorrelation(·) represents the information aggregation processing; SeriesDecomp(·) represents the sequence decomposition processing.
8. The prediction method according to claim 3, wherein in step S39, the second decoding period factor and the second decoding trend factor are respectively input into a preset fully-connected layer to obtain a second decoding period term and a second decoding trend term; adding the second decoding period factor, the second decoding period term and the second decoding trend term, and then performing sequence decomposition to obtain a third decoding period factor and a third decoding trend factor, which specifically comprises:
S_{de}^{l,3}, T_{de}^{l,3} = SeriesDecomp( FeedForward(S_{de}^{l,2}) + FeedForward(T_{de}^{l,2}) + S_{de}^{l,2} )
wherein the period factor S_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding period factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; the period factor S_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding period factor; the trend factor T_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding trend factor; FeedForward(·) represents the fully connected processing; SeriesDecomp(·) represents the sequence decomposition processing.
9. The prediction method according to claim 3, wherein in step S40, the obtaining of the decoding tendency factor based on the first decoding tendency factor, the second decoding tendency factor, the third decoding tendency factor and a zeroth decoding tendency factor obtained in advance specifically comprises:
T_{de}^{l} = T_{de}^{l-1} + W_{l,1}·T_{de}^{l,1} + W_{l,2}·T_{de}^{l,2} + W_{l,3}·T_{de}^{l,3}
wherein the trend factor T_{de}^{l,1} obtained by the 1st sequence decomposition of the l-th decoding layer is the first decoding trend factor; the trend factor T_{de}^{l,2} obtained by the 2nd sequence decomposition of the l-th decoding layer is the second decoding trend factor; the trend factor T_{de}^{l,3} obtained by the 3rd sequence decomposition of the l-th decoding layer is the third decoding trend factor; the trend factor T_{de}^{l-1} obtained by the sequence decomposition of the (l-1)-th decoding layer is the zeroth decoding trend factor; W_{l,i} represents the projection applied to T_{de}^{l,i}, i ∈ {1, 2, 3}.
10. The prediction method according to claim 1, wherein the predicting the ice coating thickness based on the predicted wire tension is specifically:
and obtaining the predicted icing thickness of the wire according to the predicted wire tension and a preset wire icing thickness calculation model.
CN202211465767.2A 2022-11-22 2022-11-22 Power transmission line icing prediction method based on auto-former-progressive decomposition model Pending CN115758641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211465767.2A CN115758641A (en) 2022-11-22 2022-11-22 Power transmission line icing prediction method based on auto-former-progressive decomposition model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211465767.2A CN115758641A (en) 2022-11-22 2022-11-22 Power transmission line icing prediction method based on auto-former-progressive decomposition model

Publications (1)

Publication Number Publication Date
CN115758641A true CN115758641A (en) 2023-03-07

Family

ID=85334972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211465767.2A Pending CN115758641A (en) 2022-11-22 2022-11-22 Power transmission line icing prediction method based on auto-former-progressive decomposition model

Country Status (1)

Country Link
CN (1) CN115758641A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128168A (en) * 2023-04-17 2023-05-16 南京信息工程大学 Weather prediction method based on causal expansion convolution and Autoformer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination