CN115964932A - Gas prediction method based on EMD-BiLSTM-Attention mechanism transformer digital twin model - Google Patents

Gas prediction method based on EMD-BiLSTM-Attention mechanism transformer digital twin model

Info

Publication number
CN115964932A
CN115964932A (application CN202211463565.4A)
Authority
CN
China
Prior art keywords
bilstm
attention mechanism
time
attention
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211463565.4A
Other languages
Chinese (zh)
Inventor
程远
张涛
王超
付钰惠
何苗苗
佟岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Dongke Electric Power Co Ltd
Original Assignee
Liaoning Dongke Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Dongke Electric Power Co Ltd filed Critical Liaoning Dongke Electric Power Co Ltd
Priority to CN202211463565.4A
Publication of CN115964932A
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04: INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S: SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00: Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

A gas prediction method based on an EMD-BiLSTM-Attention mechanism transformer digital twin model comprises the following steps: 1) preprocess the dissolved gas sequence in the transformer oil by empirical mode decomposition: decompose the time series of the characteristic parameters with EMD to obtain intrinsic mode components and a residual component; 2) adopt a bidirectional long short-term memory (BiLSTM) neural network with an attention mechanism as the training model, taking the preprocessed data as input to obtain prediction results: input the intrinsic mode components obtained in step 1) into the BiLSTM neural network with a temporal attention mechanism, train it, and predict the future value of each modal component; 3) superpose and reconstruct the prediction results of the components from step 2) to obtain the final prediction result for the concentration of dissolved gas in the oil. Through the above, the invention provides a gas prediction method that improves prediction efficiency and shortens prediction time.

Description

Gas prediction method based on EMD-BiLSTM-Attention mechanism transformer digital twin model
Technical Field
The invention belongs to the field of fault prediction for oil-immersed three-phase transformers, and particularly relates to a gas prediction method based on an EMD-BiLSTM-Attention mechanism transformer digital twin model.
Background
Digital twin technology has gradually developed from its origins in the aerospace field into various manufacturing industries, and shows good application prospects in intelligent manufacturing. Serving as a bridge between the physical world and the virtual world, digital twin technology takes complex physical simulation, real-time data sharing and analysis, and data processing as its key technologies to construct digital twins of physical entities and display their physical state in real time.
The transformer is an important component of the substation and a core device of the power system, and its operating condition has an important influence on the safety and stability of the power system. Transformer faults have both latent and sudden characteristics; when a latent fault changes abruptly, a major accident can result. To eliminate such hidden dangers, the operating state of the transformer must be predicted in advance and corresponding maintenance measures taken in time. At present, scholars at home and abroad have carried out a great deal of research on dissolved gas analysis (DGA) prediction technology, which mainly comprises statistical prediction, combination model prediction, and artificial intelligence prediction. Statistical prediction methods mainly include grey models and time series models, whose precision depends largely on the distribution characteristics of the experimental data set. Combination model prediction takes two forms: the first optimizes and analyzes the important characteristic parameters of the model through an optimization algorithm; the second combines data preprocessing with a prediction model to improve prediction precision, where the preprocessing methods mainly include empirical mode decomposition, convolutional neural networks, and the like. Artificial intelligence prediction analyzes the data through artificial intelligence techniques; random forests, recurrent neural networks, and support vector machines are common. However, traditional artificial intelligence prediction has limited capacity for long-sequence problems, and the deviation of its prediction results is too large. With the development of artificial intelligence, recurrent neural networks have gradually improved, with long short-term memory (LSTM) networks and gated recurrent unit (GRU) networks standing out. The BiLSTM neural network is an improved LSTM neural network: it mainly solves the problem that the unit gates of the LSTM are not connected in both the forward and backward directions, further improving the accuracy of feature extraction and the precision of data prediction.
Disclosure of Invention
The invention takes as its research subject the time series of the gas content in oil, the ambient temperature, the top-layer oil temperature, and the like of a transformer over a period of time. The input is subjected to empirical mode decomposition in MATLAB to obtain the modes of each order. Each mode is predicted by a bidirectional long short-term memory network with an added attention mechanism, and the prediction results are finally reconstructed to obtain the final prediction result.
In order to solve the above technical problems, the invention adopts the following technical scheme: a gas prediction method based on an EMD-BiLSTM-Attention mechanism transformer digital twin model, comprising the following steps:
1) Preprocess the dissolved gas sequence in the transformer oil by empirical mode decomposition: decompose the time series of the characteristic parameters with EMD to obtain intrinsic mode components and a residual component;
2) Adopt a bidirectional long short-term memory neural network with an attention mechanism as the training model, taking the preprocessed data as input to obtain prediction results: input the intrinsic mode components obtained in step 1) into the BiLSTM neural network with a temporal attention mechanism, train it, and predict the future value of each modal component;
3) Superpose and reconstruct the prediction results of the components from step 2) to obtain the final prediction result for the concentration of dissolved gas in the oil.
In the step 1), the specific method is as follows:
Decompose the dissolved gas time series x(t) in the transformer oil into a number of stable modal components (IMFs) by empirical mode decomposition (EMD) in MATLAB, where for each IMF the number of extreme points equals the number of its zero crossings or differs from it by one, and the mean of each IMF's envelope equals 0;
1.1) Obtain the mean m_1(t) of the upper and lower envelopes of the original sequence and calculate the difference h_1(t) between x(t) and m_1(t):
h_1(t) = x(t) - m_1(t)  (1)
1.2) If h_1(t) meets the IMF conditions, then h_1(t) is the first-order IMF, denoted c_1(t), and the residual component r_1(t) is obtained; otherwise repeat step 1.1):
r_1(t) = x(t) - c_1(t)  (2)
1.3) Repeat steps 1.1) and 1.2) on r_1(t) until r_n(t) is a monotonic function that can no longer be decomposed, at which point the original sequence x(t) can be expressed as
x(t) = Σ_{i=1}^{n} c_i(t) + r_n(t)  (3)
where r_n(t) represents the residual signal.
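By way of illustration, the sifting of steps 1.1) to 1.3) can be reproduced with the open-source PyEMD package; the following Python sketch is an assumption-laden stand-in (synthetic signal, PyEMD rather than the MATLAB toolchain the invention actually uses):

    import numpy as np
    from PyEMD import EMD  # third-party package, installed as "EMD-signal"

    t = np.linspace(0, 1, 300)
    x = np.sin(8 * np.pi * t) + 0.5 * t      # synthetic stand-in for a gas time series x(t)
    emd = EMD()
    emd(x)                                   # runs the sifting procedure of steps 1.1)-1.3)
    imfs, residue = emd.get_imfs_and_residue()
    # check the reconstruction identity of equation (3): x(t) = sum of c_i(t) plus r_n(t)
    assert np.allclose(imfs.sum(axis=0) + residue, x)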
In the step 2), the specific method is as follows:
A BiLSTM neural network with a temporal attention mechanism is designed in Python. The obtained modal components are input into this network, which is trained to predict the future value of each modal component. The prediction results of all components are superposed and reconstructed to obtain the final prediction result for the concentration of dissolved gas in the oil. The BiLSTM neural network and the temporal attention mechanism work as follows:
FIG. 2 illustrates the recurrent cell structure of an LSTM network, which has three gates: the input gate i_t, the forget gate f_t, and the output gate o_t, where c_t, c̃_t, and h_t are respectively the internal, candidate, and external states of the LSTM cell.
The input gate determines how much information of the candidate state c̃_t is retained at the current time, defined as
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)  (4)
The forget gate determines how much information of the internal state c_{t-1} from the previous time needs to be forgotten, defined as
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)  (5)
The state of the LSTM cell is then updated according to equations (4) and (5):
c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c)  (6)
c_t = f_t * c_{t-1} + i_t * c̃_t  (7)
The output gate determines how much information of the internal state c_t at the current time is transferred to the external state h_t; this process can be defined as:
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)  (8)
h_t = o_t * tanh(c_t)  (9)
In equations (4) to (9): σ is the sigmoid activation function; W_i, W_f, W_c, and W_o are the corresponding weight matrices; and tanh scales values into the range -1 to 1;
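For concreteness, equations (4) to (9) translate into the following plain-numpy sketch of a single LSTM recurrent step; the weight layout (one matrix per gate acting on the concatenation [h_{t-1}, x_t]) and the random initialization are illustrative assumptions:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
        i_t = sigmoid(W["i"] @ z + b["i"])       # input gate, eq. (4)
        f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate, eq. (5)
        c_cand = np.tanh(W["c"] @ z + b["c"])    # candidate state, eq. (6)
        c_t = f_t * c_prev + i_t * c_cand        # internal state, eq. (7)
        o_t = sigmoid(W["o"] @ z + b["o"])       # output gate, eq. (8)
        h_t = o_t * np.tanh(c_t)                 # external state, eq. (9)
        return h_t, c_t

    # example with random weights: input dimension 4, hidden dimension 3
    rng = np.random.default_rng(0)
    W = {k: rng.normal(size=(3, 7)) for k in "ifco"}
    b = {k: np.zeros(3) for k in "ifco"}
    h_t, c_t = lstm_step(np.ones(4), np.zeros(3), np.zeros(3), W, b)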
the BilSTM neural network is an improved LSTM neural network, mainly solves the problem that each unit gate of the LSTM neural network is not connected in front and at the back, further improves the accuracy of feature extraction, and improves the data prediction precision. The LSTM neural network forward and backward combination network is used for carrying out LSTM neural network training twice forward and backward on a time sequence to be analyzed, and an output result is also a fit of the results output by the LSTM neural network twice forward and backward. The general framework diagram of BilSTM is shown in FIG. 2.
The attention state transition can be realized by equation (10), equation (11), and equation (12):
the additive attention scoring mechanism is
e ij =u s tanh(wh ij +b) (10)
The attention distribution is calculated as
a_ij = exp(e_ij) / Σ_{j=1}^{p} exp(e_ij)  (11)
The information weighted sum is calculated as
c_i = Σ_{j=1}^{p} a_ij · h_ij  (12)
In the formulas: i is the feature class of the input data; j is the group index of the input data; h_ij is the hidden-layer state of the j-th group of data of feature class i; p is the data length; a_ij is the degree of attention paid by the j-th group of data of feature class i to the target prediction quantity; c_i is the weighted-sum score vector; e_ij is the attention score of the hidden layer after one fully connected operation; and u_s, w, and b are respectively a randomly initialized time series, the attention weight matrix, and the bias matrix.
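The following numpy sketch applies equations (10) to (12) for a single feature class i; the shapes and the random initialization of u_s, w, and b are purely illustrative:

    import numpy as np

    def temporal_attention(h, u_s, w, b):
        # h: (p, d) hidden-layer states h_ij over the p groups of data
        e = np.tanh(h @ w + b) @ u_s             # attention scores e_ij, eq. (10)
        a = np.exp(e) / np.exp(e).sum()          # attention distribution a_ij, eq. (11)
        c = (a[:, None] * h).sum(axis=0)         # weighted sum vector c_i, eq. (12)
        return c, a

    p, d = 8, 16                                 # assumed data length and hidden size
    rng = np.random.default_rng(0)
    c_i, a_ij = temporal_attention(rng.normal(size=(p, d)), rng.normal(size=d),
                                   rng.normal(size=(d, d)), rng.normal(size=d))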
The beneficial effects of the invention are as follows: the invention provides a gas prediction method based on an EMD-BiLSTM-Attention mechanism transformer digital twin model. The gas time series in the transformer oil is predicted through the EMD-BiLSTM-Attention mechanism, the change of the gas in the oil over a future period is forecast, and a data basis is provided for transformer fault diagnosis.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the LSTM structure;
FIG. 3 is a schematic diagram of the BiLSTM structure;
FIG. 4 is the overall prediction block diagram;
FIG. 5 shows the predicted values and true values of the modal components;
FIG. 6 shows the overall true and predicted values.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without any creative effort belong to the protection scope of the present invention.
The embodiment of the invention provides a gas prediction method based on an EMD-BiLSTM-Attention mechanism transformer digital twin model. The specific implementation is as follows: 1) decompose the time series of the characteristic parameters with EMD to obtain the intrinsic modes and the residual component; 2) input the modal components obtained in step 1) into the BiLSTM neural network with a temporal attention mechanism, train it, and predict the future value of each modal component; 3) superpose and reconstruct the prediction results of the components from step 2) to obtain the final prediction result for the concentration of dissolved gas in the oil; 4) evaluate the model performance against the actual data. FIG. 4 is the overall block diagram.
1) Decompose the time series of the characteristic parameters with EMD to obtain the intrinsic modes and the residual component.
The monitoring data of a main transformer from December 2010 to February 2011 are used; the monitoring indexes comprise H2 (μL/L), CH4 (μL/L), C2H4 (μL/L), CO2 (μL/L), total hydrocarbons (μL/L), ambient temperature (°C), and oil temperature (°C). 240 sets of historical data serve as training samples and 70 sets as test samples. The content of a given gas at different times forms a column vector, which constitutes a time series.
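Arranged in Python, the split might look like the sketch below; the random series merely stands in for the real monitoring records, whose file format the patent does not disclose:

    import numpy as np

    # stand-in for one monitored series, e.g. H2 (uL/L); in practice this is read
    # from the substation's historical monitoring data
    rng = np.random.default_rng(1)
    series = np.abs(np.cumsum(rng.normal(size=310)))   # 310 = 240 + 70 samples
    train, test = series[:240], series[240:]           # 240 training / 70 test samples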
Based on EMD, the time series is decomposed into modal components and a residual component in MATLAB; the specific method is as follows:
(1) Obtain the mean m_1(t) of the upper and lower envelopes of the original sequence and calculate the difference h_1(t) between x(t) and m_1(t):
h_1(t) = x(t) - m_1(t)  (1)
(2) If h_1(t) meets the IMF conditions, then h_1(t) is the first-order IMF, denoted c_1(t), and the residual component r_1(t) is obtained; otherwise repeat step (1):
r_1(t) = x(t) - c_1(t)  (2)
(3) Repeat steps (1) and (2) on r_1(t) until r_n(t) is a monotonic function that can no longer be decomposed, at which point the original sequence x(t) can be expressed as
x(t) = Σ_{i=1}^{n} c_i(t) + r_n(t)  (3)
where r_n(t) represents the residual signal.
2) Design the BiLSTM neural network with a temporal attention mechanism in the Python programming language, input the modal components obtained in step 1) into the designed network, configure the training of the neural network, and set the number of iterations for predicting the future value of each modal component. Set the training parameters, namely the batch size and the learning rate alpha, and the structural parameters, namely the number of hidden layers and the number of neurons num per layer.
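A hedged tf.keras sketch of such a network follows; the framework choice and all hyperparameter values (window length, layer width num, learning rate alpha, batch size, epochs) are placeholders, since the patent names these parameters without fixing them:

    import tensorflow as tf
    from tensorflow.keras import layers, models, optimizers

    def build_model(window=10, num=64, alpha=1e-3):
        inp = layers.Input(shape=(window, 1))
        # bidirectional LSTM returning the hidden state at every time step
        h = layers.Bidirectional(layers.LSTM(num, return_sequences=True))(inp)
        # additive temporal attention over the hidden states, as in equations (10)-(12)
        e = layers.Dense(1, activation="tanh")(h)
        a = layers.Softmax(axis=1)(e)
        c = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, a])
        out = layers.Dense(1)(c)
        model = models.Model(inp, out)
        model.compile(optimizer=optimizers.Adam(alpha), loss="mse")
        return model

    # one such model is trained per modal component, e.g.:
    # model = build_model(); model.fit(X, y, batch_size=32, epochs=100)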
3) Superpose and reconstruct the prediction results of the components from step 2) to obtain the final prediction result for the concentration of dissolved gas in the oil.
FIG. 5 shows the predicted values and the real values of the 1st to 4th modal components. FIG. 6 shows the 5th mode and the overall true and predicted values.
4) The model performance was evaluated in comparison to the actual data.
The invention adopts a goodness-of-fit test to evaluate the model prediction results, defined as follows:
R² = 1 - Σ_{i=1}^{n} (y_i - ŷ_i)² / Σ_{i=1}^{n} (y_i - ȳ)²
In the formula: y_i and ŷ_i are respectively the actual value and the predicted value, ȳ is the mean value, and n is the sample length.
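This statistic translates directly into Python; the helper below is a sketch for the evaluation of step 4):

    import numpy as np

    def r_squared(y_true, y_pred):
        ss_res = np.sum((y_true - y_pred) ** 2)             # sum of squared prediction errors
        ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)    # total sum of squares about the mean
        return 1.0 - ss_res / ss_tot                        # values near 1 indicate a good fit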

Claims (4)

1. A gas prediction method based on an EMD-BiLSTM-Attention mechanism transformer digital twin model, characterized by comprising the following steps:
1) Preprocess the dissolved gas sequence in the transformer oil by empirical mode decomposition: decompose the time series of the characteristic parameters with EMD to obtain intrinsic mode components and a residual component;
2) Adopt a bidirectional long short-term memory neural network with an attention mechanism as the training model, taking the preprocessed data as input to obtain prediction results: input the intrinsic mode components obtained in step 1) into the BiLSTM neural network with a temporal attention mechanism, train it, and predict the future value of each modal component;
3) Superpose and reconstruct the prediction results of the components from step 2) to obtain the final prediction result for the concentration of dissolved gas in the oil.
2. The gas prediction method based on the EMD-BiLSTM-Attention mechanism transformer digital twin model according to claim 1, characterized in that the specific method of step 1) is as follows:
Decompose the dissolved gas time series x(t) in the transformer oil into a number of stable modal components (IMFs) by empirical mode decomposition (EMD) in MATLAB, where for each IMF the number of extreme points equals the number of its zero crossings or differs from it by one, and the mean of each IMF's envelope equals 0;
1.1) Obtain the mean m_1(t) of the upper and lower envelopes of the original sequence and calculate the difference h_1(t) between x(t) and m_1(t):
h_1(t) = x(t) - m_1(t)  (1)
1.2) If h_1(t) meets the IMF conditions, then h_1(t) is the first-order IMF, denoted c_1(t), and the residual component r_1(t) is obtained; otherwise repeat step 1.1):
r_1(t) = x(t) - c_1(t)  (2)
1.3) Repeat steps 1.1) and 1.2) on r_1(t) until r_n(t) is a monotonic function that can no longer be decomposed, at which point the original sequence x(t) can be expressed as
x(t) = Σ_{i=1}^{n} c_i(t) + r_n(t)  (3)
where r_n(t) represents the residual signal.
3. The gas prediction method based on the EMD-BiLSTM-Attention mechanism transformer digital twin model according to claim 1, characterized in that the specific method of step 2) is as follows:
Design a BiLSTM neural network with a temporal attention mechanism in Python, input each obtained modal component into the network, train it, and predict the future value of each modal component; superpose and reconstruct the prediction results of all components to obtain the final prediction result for the concentration of dissolved gas in the oil.
4. The gas prediction method based on the EMD-BiLSTM-Attention mechanism transformer digital twin model according to claim 3, characterized in that the BiLSTM neural network and the temporal attention mechanism work as follows:
The recurrent cell structure of the LSTM network has three gates, namely the input gate i_t, the forget gate f_t, and the output gate o_t, where c_t, c̃_t, and h_t are respectively the internal, candidate, and external states of the LSTM cell.
The input gate determines how much information of the candidate state c̃_t is retained at the current time, defined as
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)  (4)
The forget gate determines how much information of the internal state c_{t-1} from the previous time needs to be forgotten, defined as
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)  (5)
The state of the LSTM cell is then updated according to equations (4) and (5):
c̃_t = tanh(W_c · [h_{t-1}, x_t] + b_c)  (6)
c_t = f_t * c_{t-1} + i_t * c̃_t  (7)
The output gate determines how much information of the internal state c_t at the current time is transferred to the external state h_t; this process can be defined as:
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)  (8)
h_t = o_t * tanh(c_t)  (9)
In equations (4) to (9): σ is the sigmoid activation function; W_i, W_f, W_c, and W_o are the corresponding weight matrices; and tanh scales values into the range -1 to 1;
The BiLSTM neural network is an improved LSTM neural network formed by combining LSTM networks in the forward and backward directions: the time series to be analyzed is passed through the LSTM twice, once forward and once backward, and the output is a fusion of the two LSTM outputs; the attention state transition can be realized by equations (10), (11), and (12):
the additive attention scoring mechanism is
e ij =u s tanh(wh ij +b)(10)
The attention distribution is calculated as
a_ij = exp(e_ij) / Σ_{j=1}^{p} exp(e_ij)  (11)
The information weighted sum is calculated as
c_i = Σ_{j=1}^{p} a_ij · h_ij  (12)
In the formula: i is the feature class of the input data; j is the group of input data; h is ij Hiding for the j-th group of data of i featuresThe stratum status; p is the data length; a is a ij The attention degree of the j group data of the i types of characteristics to the target prediction quantity; c. C i The weighted and summed fractional vector is obtained; e.g. of the type ij An attention scoring mechanism is carried out on the hidden layer after one-time full connection operation; u. of s And w and b are respectively a time sequence of random initialization, an attention weight matrix and a bias item matrix.
CN202211463565.4A 2022-11-22 2022-11-22 Gas prediction method based on EMD-BiLSTM-Attention mechanism transformer digital twin model Pending CN115964932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211463565.4A CN115964932A (en) 2022-11-22 2022-11-22 Gas prediction method based on EMD-BiLSTM-Attention mechanism transformer digital twin model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211463565.4A CN115964932A (en) 2022-11-22 2022-11-22 Gas prediction method based on EMD-BiLSTM-Attention mechanism transformer digital twin model

Publications (1)

Publication Number Publication Date
CN115964932A true CN115964932A (en) 2023-04-14

Family

ID=87362344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211463565.4A Pending CN115964932A (en) 2022-11-22 2022-11-22 Gas prediction method based on EMD-BiLSTM-Attention mechanism transformer digital twin model

Country Status (1)

Country Link
CN (1) CN115964932A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116187205A (en) * 2023-04-24 2023-05-30 北京智芯微电子科技有限公司 Running state prediction method and device for digital twin body of power distribution network and training method
CN116187205B (en) * 2023-04-24 2023-08-15 北京智芯微电子科技有限公司 Running state prediction method and device for digital twin body of power distribution network and training method
CN116204794A (en) * 2023-05-04 2023-06-02 国网江西省电力有限公司电力科学研究院 Method and system for predicting dissolved gas in transformer oil by considering multidimensional data
CN116204794B (en) * 2023-05-04 2023-09-12 国网江西省电力有限公司电力科学研究院 Method and system for predicting dissolved gas in transformer oil by considering multidimensional data
CN117079736A (en) * 2023-10-17 2023-11-17 河北金锁安防工程股份有限公司 Gas concentration prediction method and system for intelligent gas sensing
CN117079736B (en) * 2023-10-17 2024-02-06 河北金锁安防工程股份有限公司 Gas concentration prediction method and system for intelligent gas sensing

Similar Documents

Publication Publication Date Title
CN113297801B (en) Marine environment element prediction method based on STEOF-LSTM
Shamshirband et al. A survey of deep learning techniques: application in wind and solar energy resources
Gao et al. Interpretable deep learning model for building energy consumption prediction based on attention mechanism
Shen et al. Wind speed prediction of unmanned sailboat based on CNN and LSTM hybrid neural network
Ma et al. A hybrid attention-based deep learning approach for wind power prediction
CN115964932A (en) Gas prediction method based on EMD-BiLSTM-Attention mechanism transformer digital twin model
Tian et al. Multi-step short-term wind speed prediction based on integrated multi-model fusion
CN112348271A (en) Short-term photovoltaic power prediction method based on VMD-IPSO-GRU
CN108647839A (en) Voltage-stablizer water level prediction method based on cost-sensitive LSTM Recognition with Recurrent Neural Network
CN112733444A (en) Multistep long time sequence prediction method based on CycleGAN neural network
CN112766078B (en) GRU-NN power load level prediction method based on EMD-SVR-MLR and attention mechanism
Zhang et al. Multi-head attention-based probabilistic CNN-BiLSTM for day-ahead wind speed forecasting
CN110598854A (en) GRU model-based transformer area line loss rate prediction method
CN111475546A (en) Financial time sequence prediction method for generating confrontation network based on double-stage attention mechanism
CN116167527B (en) Pure data-driven power system static safety operation risk online assessment method
Lu Research on GDP forecast analysis combining BP neural network and ARIMA model
Bian et al. Load forecasting of hybrid deep learning model considering accumulated temperature effect
CN111141879B (en) Deep learning air quality monitoring method, device and equipment
CN114444561A (en) PM2.5 prediction method based on CNNs-GRU fusion deep learning model
CN113326966A (en) CEEMD-LSTM-based multi-load prediction method for comprehensive energy system
Zhu et al. Uncertainty quantification of proton-exchange-membrane fuel cells degradation prediction based on Bayesian-Gated Recurrent Unit
Li et al. A new multipredictor ensemble decision framework based on deep reinforcement learning for regional gdp prediction
CN115456312A (en) Short-term power load prediction method and system based on symplectic geometry modal decomposition
Wang et al. A novel wind power prediction model improved with feature enhancement and autoregressive error compensation
Zhang et al. Research on water quality prediction method based on AE-LSTM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination