CN116090328A - Method and system for predicting operation performance trend of voltage transformer - Google Patents

Method and system for predicting operation performance trend of voltage transformer

Info

Publication number
CN116090328A
CN116090328A (Application CN202211370447.9A)
Authority
CN
China
Prior art keywords: voltage transformer, performance, parameters, parameter, time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211370447.9A
Other languages
Chinese (zh)
Inventor
李琪林
严平
彭德中
史强
李金嵩
蔡君懿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Marketing Service Center Of State Grid Sichuan Electric Power Co
Original Assignee
Marketing Service Center Of State Grid Sichuan Electric Power Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Marketing Service Center Of State Grid Sichuan Electric Power Co filed Critical Marketing Service Center Of State Grid Sichuan Electric Power Co
Priority to CN202211370447.9A priority Critical patent/CN116090328A/en
Publication of CN116090328A publication Critical patent/CN116090328A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R35/00Testing or calibrating of apparatus covered by the other groups of this subclass
    • G01R35/02Testing or calibrating of apparatus covered by the other groups of this subclass of auxiliary devices, e.g. of instrument transformers according to prescribed transformation ratio, phase angle, or wattage rating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/02Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/12Timing analysis or timing optimisation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Power Engineering (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention discloses a method and a system for predicting the running performance trend of a voltage transformer. Historical data of the environment parameters and performance parameters in the area where the voltage transformer is located are acquired in order to capture the correlation of the multivariate driving time series of those parameters; temporal convolution attention is then used to acquire the time-series correlation of the hidden information within that correlation and to generate predicted values of the voltage transformer performance parameters. A multi-attention generative adversarial network is constructed and trained with a back-propagation algorithm, its parameters being updated according to the returned gradient information. Finally, the environment parameters and performance parameters of the area where the voltage transformer is located are input into the trained adversarial network, and future predicted values of the performance parameters are output. Prediction of the running performance trend of the capacitive voltage transformer is thereby realized, improving the accuracy and efficiency of judging its abnormal state.

Description

Method and system for predicting operation performance trend of voltage transformer
Technical Field
The invention relates to the technical field of electrical measurement, in particular to a method and a system for predicting the running performance trend of a voltage transformer.
Background
The voltage transformer, also called a potential transformer (PT), is an instrument that converts a high voltage into a low voltage maintaining a fixed magnitude and phase relationship with the original. By working principle, voltage transformers are classified into electromagnetic, capacitive and novel photoelectric types. In current practice the capacitive voltage transformer dominates: owing to its high impulse insulation strength, simple manufacture, small size, light weight and notable economy, it has gradually replaced the electromagnetic type and is widely applied in 110 kV to 500 kV power grid scenarios.
Power grid systems already have schemes for monitoring the running state of the capacitive voltage transformer, with corresponding evaluation methods for real-time insulation, parameter information, safety and the like. However, these methods only realize real-time perception of the equipment's running performance; they cannot accurately predict the trend of its performance change, nor accurately predict and evaluate the latent safety hazards of the equipment.
Disclosure of Invention
The invention aims to provide a method and a system for predicting the running performance trend of a voltage transformer. By capturing the correlation of the multivariate driving time series of the environment parameters and performance parameters, the time-series correlation of the hidden information within that correlation is acquired to generate predicted values of the voltage transformer performance parameters; the constructed multi-attention generative adversarial network is then trained, realizing prediction of the running performance trend of the voltage transformer and improving the accuracy and efficiency of judging its abnormal state.
The invention is realized by the following technical scheme:
the invention provides a method for predicting the running performance trend of a voltage transformer, which specifically comprises the following steps:
s1, acquiring historical data of environmental parameters and performance parameters in an area where a voltage transformer is located, preprocessing the historical data, and obtaining correlation of multi-element driving time sequences of the environmental parameters and the performance parameters;
s2, acquiring time sequence correlation of hidden information potential space in correlation of a multi-element driving time sequence by adopting time convolution attention, and generating a voltage transformer performance parameter predicted value according to the time sequence correlation;
s3, constructing a multi-attention generation countermeasure network, performing countermeasure training on the multi-attention generation countermeasure network by adopting a back propagation algorithm, and updating parameters in the network in a countermeasure mode according to the returned gradient information;
s4, inputting the environment parameters and the performance parameters in the area where the voltage transformer is located into the trained countermeasure network, and outputting future predicted values of the performance parameters in the area where the voltage transformer is located.
By capturing the correlation of the multivariate driving time series of the environment parameters and performance parameters, the invention acquires the time-series correlation of the hidden information within that correlation to generate predicted values of the voltage transformer performance parameters; the constructed multi-attention generative adversarial network is trained, the environment and performance parameters of the area where the voltage transformer is located are input into the trained adversarial network, and future predicted values of the performance parameters are output. Prediction of the running performance trend of the capacitive voltage transformer is thereby realized, improving the accuracy and efficiency of judging its abnormal state.
Further, preprocessing the historical data to obtain the correlation of the multivariate driving time series of the environment parameters and performance parameters specifically includes:
acquiring the environment-parameter and performance-parameter sequence of the k-th voltage transformer over a time length T and the vector of environment and performance parameters of the n voltage transformers at time t, and determining the potential space;
automatically selecting the time steps of the encoder result through a temporal attention mechanism, and obtaining the target sequences of all environment and performance parameters over the past T time steps.
Further, determining the potential space specifically includes:
obtaining learnable parameters for the environment parameters and performance parameters, and obtaining the attention weights within time T from the given T input sequences;
multiplying, according to the attention weights and the environment- and performance-parameter time-series data, the attention weights by the voltage transformer performance-parameter series to obtain the potential space.
Further, generating the predicted values of the voltage transformer performance parameters according to the time-series correlation specifically includes:
acquiring the given T target values and determining the target sequences of all environment and performance parameters over the past T time steps;
determining the predicted values and the predicted target sequence from the potential space and the target sequence.
Further, the predicted values of the voltage transformer performance parameters comprise the ratio error, phase error, dielectric loss value and capacitance of the capacitive voltage transformer, where the ratio error and phase error reflect its metering performance and the dielectric loss value and capacitance reflect its insulation performance;
the specific calculation steps for the predicted parameter values comprise:
acquiring the secondary voltage effective value, secondary voltage phase, leakage current phase and leakage current effective value of the given capacitive voltage transformer, acquiring the secondary voltage effective value and secondary voltage phase of a reference voltage transformer at the same voltage class, and acquiring the phase and angular frequency;
determining the ratio error from the secondary voltage effective value of the capacitive voltage transformer and that of the reference voltage transformer at the same voltage class;
determining the phase error from the secondary voltage phase of the capacitive voltage transformer and that of the reference voltage transformer at the same voltage class;
determining the dielectric loss value from the secondary voltage effective value and the leakage current phase of the capacitive voltage transformer;
and determining the capacitance value from the secondary voltage effective value and the leakage current effective value of the capacitive voltage transformer.
Further, training the multi-attention generative adversarial network with the back-propagation algorithm specifically comprises the following steps:
acquiring the class labels and the actual values of the environment parameters and performance parameters in the area where the voltage transformer is located, and training in combination with the predicted target sequence to obtain the least-squares loss between the prediction and the sequence label at each time step;
obtaining the least-squares loss between the predicted values and the actual-value labels according to the potential space and the per-time-step least-squares loss between prediction and sequence label;
constructing the least-squares loss function from the per-time-step least-squares loss between prediction and sequence labels and the least-squares loss between predicted values and actual-value labels;
training the network parameters of the generator according to the least-squares loss function between predicted values and actual-value labels, and judging whether the predicted values are true or false through the discriminator so as to train the discriminator's network parameters, thereby completing the adversarial training of the generative adversarial network.
A second aspect of the invention provides a voltage transformer running performance trend prediction system comprising an encoder, a generator and a discriminator, operating as follows:
S1, acquiring historical data of the environment parameters and performance parameters in the area where the voltage transformer is located, and capturing the correlation of their multivariate driving time series through the encoder;
S2, acquiring, through the generator, the time-series correlation of the hidden-information potential space within that correlation by means of temporal convolution attention, and generating predicted values of the voltage transformer performance parameters accordingly;
S3, constructing a multi-attention generative adversarial network with the discriminator, performing adversarial training on it with a back-propagation algorithm, and updating the network parameters of the generator and the discriminator according to the returned gradient information;
S4, inputting the environment parameters and performance parameters of the area where the voltage transformer is located into the trained adversarial network, and outputting future predicted values of the performance parameters for that area.
Further, the encoder is used for acquiring the environment parameters and performance parameters of the capacitive voltage transformer, obtaining the correlation of their multivariate driving time series, and transmitting the acquired data to the generator in real time;
the encoder comprises an input attention module and a self-attention module:
the input attention module and the self-attention module process the sequence of voltage transformer performance-parameter changes and generate a potential space that preserves the relational information.
Further, the generator is configured to receive the environment parameters and performance parameters of the capacitive voltage transformer acquired by the encoder, select the time-series correlation of the hidden information using temporal convolution attention, calculate the ratio error, phase error, dielectric loss value and capacitance of the capacitive voltage transformer, and transmit the predicted performance-parameter values to the discriminator.
Further, the discriminator is used for receiving in real time the predicted performance-parameter values generated by the generator, training the network parameters of the generator according to the least-squares loss function between the received predicted values and the actual-value labels, and discriminating the predicted values from the performance-parameter test data of the actual voltage transformer so as to train its own network parameters, thereby completing the adversarial training of the generative adversarial network.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the method comprises the steps of acquiring the correlation of the multi-element driving time sequence of the environmental parameter and the performance parameter, acquiring the time sequence correlation of hidden information in the correlation of the multi-element driving time sequence to generate a predicted value of the performance parameter of the voltage transformer, training the construction of a multi-attention generation countermeasure network, realizing the prediction of the running performance change trend of the capacitive voltage transformer, and improving the accuracy and efficiency of the abnormal state judgment of the capacitive voltage transformer.
Drawings
In order to more clearly illustrate the technical solutions of the exemplary embodiments of the present invention, the drawings that are needed in the examples will be briefly described below, it being understood that the following drawings only illustrate some examples of the present invention and therefore should not be considered as limiting the scope, and that other related drawings may be obtained from these drawings without inventive effort for a person skilled in the art. In the drawings:
FIG. 1 is a flow chart of the method for predicting the running performance trend of a voltage transformer in an embodiment of the invention;
FIG. 2 is a diagram of the multi-attention generative adversarial network framework in an embodiment of the invention;
FIG. 3 is a block diagram of the generator network in an embodiment of the invention;
FIG. 4 is a block diagram of the generator in an embodiment of the invention;
FIG. 5 is a diagram of the discriminator network architecture in an embodiment of the invention.
Detailed Description
For the purpose of making apparent the objects, technical solutions and advantages of the present invention, the present invention will be further described in detail with reference to the following examples and the accompanying drawings, wherein the exemplary embodiments of the present invention and the descriptions thereof are for illustrating the present invention only and are not to be construed as limiting the present invention.
Example 1
As shown in fig. 1, this embodiment provides a method for predicting the running performance trend of a voltage transformer, which specifically comprises the following steps:
S1, acquiring historical data of the environment parameters and performance parameters in the area where the voltage transformer is located, preprocessing the historical data, and obtaining the correlation of the multivariate driving time series of the environment parameters and performance parameters;
S2, acquiring the time-series correlation of the hidden-information potential space within the correlation of the multivariate driving time series by means of temporal convolution attention, and generating predicted values of the voltage transformer performance parameters according to that time-series correlation;
S3, constructing a multi-attention generative adversarial network, performing adversarial training on it with a back-propagation algorithm, and adversarially updating the parameters in the network according to the returned gradient information;
S4, inputting the environment parameters and performance parameters of the area where the voltage transformer is located into the trained adversarial network, and outputting future predicted values of the performance parameters for that area.
By capturing the correlation of the multivariate driving time series of the environment parameters and performance parameters, the invention acquires the time-series correlation of the hidden information within that correlation to generate predicted values of the voltage transformer performance parameters; the constructed multi-attention generative adversarial network is trained, the environment and performance parameters of the area where the voltage transformer is located are input into the trained adversarial network, and future predicted values of the performance parameters are output. Prediction of the running performance trend of the capacitive voltage transformer is thereby realized, improving the accuracy and efficiency of judging its abnormal state.
In some possible embodiments, preprocessing the historical data to obtain the correlation of the multivariate driving time series of the environment parameters and performance parameters specifically includes:
acquiring the environment-parameter and performance-parameter sequence of the k-th voltage transformer over a time length T and the vector of environment and performance parameters of the n voltage transformers at time t, and determining the potential space;
automatically selecting the time steps of the encoder result through a temporal attention mechanism, and obtaining the target sequences of all environment and performance parameters over the past T time steps.
In some possible embodiments, determining the potential space specifically includes:
obtaining learnable parameters for the environment parameters and performance parameters, and obtaining the attention weights within time T from the given T input sequences;
multiplying, according to the attention weights and the environment- and performance-parameter time-series data, the attention weights by the voltage transformer performance-parameter series to obtain the potential space.
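The two steps above can be given a rough numerical sketch (shapes and names are our own illustration; the patent does not disclose the exact network): a learnable parameter vector scores each of the T input steps, the scores are normalized into attention weights by a softmax, and the weights are multiplied into the performance-parameter series to form the potential space.

```python
import numpy as np

def potential_space(x, w):
    # x: (T, n) environment/performance series over T time steps
    # w: (n,)  learnable scoring parameters (held fixed here for illustration)
    scores = x @ w                                 # one attention score per time step
    scores -= scores.max()                         # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights over T
    return alpha, alpha[:, None] * x               # weights and the weighted series Z

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))                        # T = 8 steps, n = 3 transformers
w = rng.normal(size=3)
alpha, Z = potential_space(x, w)
print(alpha.shape, Z.shape)
```

The weights sum to one over the T steps, so the potential space keeps the same shape as the input series while emphasizing the informative time steps.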
In some possible embodiments, generating the predicted values of the voltage transformer performance parameters according to the time-series correlation specifically includes:
acquiring the given T target values and determining the target sequences of all environment and performance parameters over the past T time steps;
determining the predicted values and the predicted target sequence from the potential space and the target sequence.
In some possible embodiments, the predicted values of the voltage transformer performance parameters include the ratio error, phase error, dielectric loss value and capacitance of the capacitive voltage transformer, the ratio error and phase error reflecting its metering performance and the dielectric loss value and capacitance reflecting its insulation performance;
the specific calculation steps for the predicted parameter values comprise:
acquiring the secondary voltage effective value, secondary voltage phase, leakage current phase and leakage current effective value of the given capacitive voltage transformer, acquiring the secondary voltage effective value and secondary voltage phase of a reference voltage transformer at the same voltage class, and acquiring the phase and angular frequency;
determining the ratio error from the secondary voltage effective value of the capacitive voltage transformer and that of the reference voltage transformer at the same voltage class;
determining the phase error from the secondary voltage phase of the capacitive voltage transformer and that of the reference voltage transformer at the same voltage class;
determining the dielectric loss value from the secondary voltage effective value and the leakage current phase of the capacitive voltage transformer;
and determining the capacitance value from the secondary voltage effective value and the leakage current effective value of the capacitive voltage transformer.
The calculation formulas are as follows:

ε_f = (U_c − U_n) / U_n × 100%

ε_U = θ_c − θ_n

tan δ = tan[90° − (θ_Ic − θ_c)]

C = I_c / (ε · U_c)

wherein U_c, θ_c, θ_Ic and I_c are respectively the secondary voltage effective value, secondary voltage phase, leakage current phase and leakage current effective value of the given capacitive voltage transformer; U_n and θ_n are respectively the secondary voltage effective value and phase of the reference voltage transformer at the same voltage class; and ε is the angular frequency.
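The four indicators can be computed directly from the measured quantities; a minimal sketch follows (function and variable names are our own, angles in degrees, and `omega` standing in for the patent's angular frequency ε; the example readings are hypothetical):

```python
import math

def cvt_metrics(U_c, theta_c, theta_Ic, I_c, U_n, theta_n, omega):
    """Ratio error (%), phase error (deg), dielectric loss tan(delta), capacitance (F)."""
    ratio_error = (U_c - U_n) / U_n * 100.0    # epsilon_f, in percent
    phase_error = theta_c - theta_n            # epsilon_U, in degrees
    delta = 90.0 - (theta_Ic - theta_c)        # dielectric loss angle, in degrees
    tan_delta = math.tan(math.radians(delta))
    capacitance = I_c / (omega * U_c)          # C, in farads
    return ratio_error, phase_error, tan_delta, capacitance

# e.g. a CVT reading 100.2 V at 0.05 deg against a 100 V / 0.00 deg reference,
# leakage current 0.0314 A at 89.95 deg, on a 50 Hz grid
f, u, td, c = cvt_metrics(100.2, 0.05, 89.95, 0.0314, 100.0, 0.0, 2 * math.pi * 50)
print(f, u, td, c)
```

A small dielectric loss angle (here about 0.1 degree) yields a tan δ on the order of 1e-3, and the capacitance comes out in the microfarad-to-nanofarad range typical of CVT capacitor stacks.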
In some possible embodiments, the multi-attention generative adversarial network is trained with a back-propagation algorithm, specifically comprising the following steps:
acquiring the class labels and the actual values of the environment parameters and performance parameters in the area where the voltage transformer is located, and training in combination with the predicted target sequence to obtain the least-squares loss between the prediction and the sequence label at each time step;
obtaining the least-squares loss between the predicted values and the actual-value labels according to the potential space and the per-time-step least-squares loss between prediction and sequence label;
constructing the least-squares loss function from the per-time-step least-squares loss between prediction and sequence labels and the least-squares loss between predicted values and actual-value labels;
training the network parameters of the generator according to the least-squares loss function between predicted values and actual-value labels, and judging whether the predicted values are true or false through the discriminator so as to train the discriminator's network parameters, thereby completing the adversarial training of the generative adversarial network.
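The least-squares objectives described above can be sketched as follows. This is an LSGAN-style formulation consistent with the claim, not the patent's exact loss; the real/fake labels of 1 and 0 are our assumption.

```python
import numpy as np

def seq_loss(y_pred, y_true):
    # per-time-step least-squares loss between prediction and sequence label
    return float(np.mean((y_pred - y_true) ** 2))

def lsgan_losses(d_real, d_fake):
    # the discriminator pushes scores on real data towards 1 and on fakes towards 0;
    # the generator pushes the discriminator's scores on its fakes towards 1
    d_loss = 0.5 * float(np.mean((d_real - 1.0) ** 2)) + 0.5 * float(np.mean(d_fake ** 2))
    g_loss = 0.5 * float(np.mean((d_fake - 1.0) ** 2))
    return d_loss, g_loss

# a perfectly confident discriminator: zero loss for D, maximal penalty for G
d_loss, g_loss = lsgan_losses(np.ones(4), np.zeros(4))
print(d_loss, g_loss)
```

In training, the generator's total objective would combine `g_loss` with the sequence-fitting term `seq_loss`, and the gradients of both are back-propagated to update the generator and discriminator parameters alternately.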
Example 2
As shown in fig. 2, this embodiment provides a voltage transformer running performance trend prediction system comprising an encoder, a generator and a discriminator, operating as follows:
S1, acquiring historical data of the environment parameters and performance parameters in the area where the voltage transformer is located, and capturing the correlation of their multivariate driving time series through the encoder;
S2, acquiring, through the generator, the time-series correlation of the hidden-information potential space within that correlation by means of temporal convolution attention, and generating predicted values of the voltage transformer performance parameters accordingly;
S3, constructing a multi-attention generative adversarial network with the discriminator, performing adversarial training on it with a back-propagation algorithm, and updating the network parameters of the generator and the discriminator according to the returned gradient information;
S4, inputting the environment parameters and performance parameters of the area where the voltage transformer is located into the trained adversarial network, and outputting future predicted values of the performance parameters for that area.
In some possible embodiments, the encoder is configured to collect the environment parameters and performance parameters of the capacitive voltage transformer, obtain the correlation of their multivariate driving time series, and transmit the collected data to the generator in real time;
the encoder comprises an input attention module and a self-attention module:
the input attention module extracts the correlation, within the current time step, of data such as air pressure, air temperature, humidity and power frequency;
the self-attention module adjusts the weighting of the environment-parameter and performance-parameter sequences;
together, the input attention module and the self-attention module process the sequence of voltage transformer performance-parameter changes and generate a potential space that preserves the relational information.
The environment parameters of the capacitive voltage transformer are any one, or any combination, of air temperature, air pressure, humidity, precipitation, power frequency and external electric field; its performance parameters comprise the effective value and phase of the secondary voltage of the capacitive voltage transformer and the effective value and phase of its leakage current.
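A minimal numerical sketch of the self-attention step is given below, reading it as scaled dot-product attention with Q = K = V, which is one common formulation; the patent does not give the module's exact form.

```python
import numpy as np

def self_attention(x):
    # x: (T, d) parameter sequence; each time step attends over all T steps
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # (T, T) pairwise similarity scores
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    a = np.exp(scores)
    a /= a.sum(axis=-1, keepdims=True)             # row-wise softmax attention map
    return a, a @ x                                # attention map and re-weighted sequence

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 4))                        # T = 6 steps, d = 4 parameters
a, out = self_attention(x)
print(a.shape, out.shape)
```

Each row of the attention map sums to one, so the output sequence is a convex re-weighting of the input steps, consistent with "adjusting the weighting" of the parameter sequences.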
In some possible embodiments, the generator is configured to receive the environmental parameters and performance parameters of the capacitive voltage transformer acquired by the encoder, select the temporal correlations of the hidden information using temporal convolution attention, calculate the ratio error, phase error, dielectric loss value and capacitance of the capacitive voltage transformer, and transmit the predicted values of the performance parameters of the capacitive voltage transformer to the discriminator.
In some possible embodiments, the discriminator is configured to receive the predicted values of the performance parameters of the capacitive voltage transformer generated in real time by the generator. The discriminator output is trained to converge toward 0.5 for real performance parameter data from actual voltage transformers; within the generative adversarial network, it discriminates the predicted values from the measured performance parameter data of actual voltage transformers. Comparing the discriminator outputs finally yields the accuracy of the predicted values obtained by the analysis method of the invention, so the method can be effectively applied to predicting the trend of the operating performance of capacitive voltage transformers.
Specifically, we first collect historical data of the environmental parameters and performance parameters in the region where the voltage transformer is located. Let X^k = (x_1^k, x_2^k, …, x_T^k) ∈ R^T denote the environmental and performance parameter sequence of the k-th voltage transformer over a time window of length T, and let x_t = (x_t^1, x_t^2, …, x_t^n) ∈ R^n denote the vector of the environmental and performance parameters of the n voltage transformers at time t. We encode the time-series data X_t, taking the result as the potential space Z:

Z = E(x_1, x_2, …, x_T)
where E(·) denotes the encoding network. In the generator (i.e., decoder) stage, a temporal attention mechanism is used to automatically select the time steps of the encoder output. Given T target values Y = (y_1, y_2, …, y_T), where T is the length of the window we define, Y represents all environmental and performance parameter target sequences over the past T time steps. The predicted values Ŷ = (ŷ_{T+1}, ŷ_{T+2}, …, ŷ_{T+ε}), where ε denotes the prediction horizon, are then computed from the potential space Z and the target sequence Y:

Ŷ = (ŷ_{T+1}, …, ŷ_{T+ε}) = D(Z, y_1, y_2, …, y_T)

where D(·) denotes the decoding network (i.e., the generator). We combine the encoder network and the decoder network into a model, MARNN, to predict short-term time series of the performance parameters. To obtain better long-term prediction results, the true performance parameter values y = (y_{T+1}, …, y_{T+ε}) and the predicted values Ŷ are used to train the discriminator network, and the class label L = (L_{T+1}, L_{T+2}, …, L_{T+ε}) is added as a condition variable to guide the discriminator network. Specifically, the discriminator network is trained to minimize the least-squares loss between the prediction at each time step and the sequence label:
D_loss = LS(y, L)
where LS(·) denotes the least-squares function and L is a vector of 1s and 0s for the series. The generator is trained to "spoof" the discriminator into classifying its output as real data; that is, we wish to minimize the least-squares loss between the discriminator's predictions on the generated data and the "real" label:
G_loss = D_loss(Z, 1)
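As a sketch, the least-squares function LS above is just the mean of squared differences between the per-time-step discriminator outputs and the 0/1 sequence labels (the example values below are illustrative, not taken from the patent):

```python
import numpy as np

def ls(pred, label):
    """Least-squares loss between per-time-step predictions and 0/1 sequence labels."""
    return np.mean((np.asarray(pred) - np.asarray(label)) ** 2)

d_out = np.array([0.9, 0.2, 0.8])   # discriminator outputs per time step
labels = np.array([1.0, 0.0, 1.0])  # 1 = real, 0 = generated
loss = ls(d_out, labels)
```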
as shown in fig. 3, the encoder network processes the sequence of voltage transformer performance parameter changes with the input attention module and the self-attention module, computes the results with LSTM units, and generates a potential space that preserves relational information. The main purpose of the attention mechanism is to select, from a vast amount of information, the information most critical to the current goal. In time-series prediction, when long sequences are fed into an encoder-decoder model, earlier information is overwritten by later information; the target value can therefore be better predicted from the important information that the attention mechanism extracts from the environmental and performance parameter sequence data. Given the T input sequences X = (x^1, x^2, …, x^n)^T ∈ R^{n×T}, where T is the time window, the attention weights are calculated by the following formulas:

e_t^k = v_e^T tanh(W_e[h_{t-1}; s_{t-1}] + U_e x^k)

α_t^k = exp(e_t^k) / Σ_{i=1}^{n} exp(e_t^i)

where v_e, W_e and U_e represent the learnable parameters, and the previous hidden state h_{t-1} and cell state s_{t-1} are intermediate results of the encoder LSTM unit. α_t^k records the attention weight of the k-th series at time t; the SoftMax function ensures that the attention weights sum to 1. To adaptively extract the sequences, we multiply the attention weights with the environmental and performance parameter time-series data:

x̃_t = (α_t^1 x_t^1, α_t^2 x_t^2, …, α_t^n x_t^n)^T
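The input-attention step above can be sketched in NumPy as follows (the formulas follow the DA-RNN-style attention described in the text; the parameter shapes and random values are purely illustrative):

```python
import numpy as np

def input_attention(X, h_prev, s_prev, W_e, U_e, v_e):
    """Input attention over n driving series at one time step.

    X      : (n, T)  environmental/performance parameter series
    h_prev : (m,)    previous encoder LSTM hidden state
    s_prev : (m,)    previous encoder LSTM cell state
    Returns attention weights alpha of shape (n,) summing to 1.
    """
    hs = np.concatenate([h_prev, s_prev])            # [h_{t-1}; s_{t-1}]
    # e_t^k = v_e^T tanh(W_e [h; s] + U_e x^k) for each series k
    e = np.array([v_e @ np.tanh(W_e @ hs + U_e @ X[k]) for k in range(X.shape[0])])
    alpha = np.exp(e - e.max())                      # numerically stable SoftMax
    return alpha / alpha.sum()

rng = np.random.default_rng(0)
n, T, m = 4, 10, 8
X = rng.standard_normal((n, T))
alpha = input_attention(X, rng.standard_normal(m), rng.standard_normal(m),
                        rng.standard_normal((m, 2 * m)), rng.standard_normal((m, T)),
                        rng.standard_normal(m))
x_tilde = alpha * X[:, 0]   # reweighted driving inputs at one time step
```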
we dynamically adjust the importance of the environmental and performance parameter sequences so that each has its own adjustment factor. An attention layer with an attention matrix is used to capture the similarity of each token to all neighbouring tokens in the input sequence of environmental and performance parameters. With the input driving sequence represented as x_t ∈ R^n, the attention mechanism is calculated as follows:

g_t = tanh(W_g x_t + b_g)

α_t = σ(W_α g_t + b_α)

where σ(·) denotes the sigmoid function, W_g and W_α are learnable parameters, and b_g and b_α are bias vectors. The attention weights are then multiplied element-wise by the attributes of the voltage transformer performance parameter sequence to reflect the different importance of the different attributes:

x̂_t = α_t ⊙ x_t
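A minimal NumPy sketch of this per-attribute self-attention gate (the matrix shapes and random values are assumptions for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def self_attention(x_t, W_g, b_g, W_a, b_a):
    """Per-attribute gating: g_t = tanh(W_g x_t + b_g), alpha_t = sigmoid(W_a g_t + b_a).

    x_t : (n,) attribute vector of the performance-parameter sequence at time t.
    Returns the element-wise reweighted attributes alpha_t * x_t and the weights.
    """
    g = np.tanh(W_g @ x_t + b_g)
    alpha = sigmoid(W_a @ g + b_a)   # one adjustment factor per attribute
    return alpha * x_t, alpha

rng = np.random.default_rng(1)
n = 6
x_t = rng.standard_normal(n)
x_hat, alpha = self_attention(x_t, rng.standard_normal((n, n)), rng.standard_normal(n),
                              rng.standard_normal((n, n)), rng.standard_normal(n))
```

Unlike the input attention, these weights are independent sigmoid gates in (0, 1) rather than a distribution summing to 1.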
We transpose x̂_t to the same shape as x̃_t for the subsequent concatenation. Applying the f_1 function, the result is defined as the potential space:

h_t = f_1(h_{t-1}, x̃_t)

h′_t = f_1(h′_{t-1}, x̂_t)

Z = [h_t; h′_t]

where the f_1(·) function is an LSTM unit and [h_t; h′_t] denotes the concatenation of the two hidden states. The output Z is then fed into the generator network for further calculation.
In the encoder stage, the self-attention and input attention modules are employed to better capture the dependencies of the environmental and performance parameter sequences. In the decoder (generator) stage, a convolutional layer is first used to enhance the model's temporal learning by applying convolutional filters to the row vectors of the encoder's potential space Z; fig. 4 shows the structure of the decoder (generator). The convolution operation produces H_i^C, given by:

H_i^C = C_i ∗ Z

where ∗ denotes the convolution operation, the model uses k filters C, and 1×w is the kernel size.
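The row-wise filtering can be sketched as follows (a hypothetical difference filter stands in for the k learned filters C; "valid"-mode correlation is an implementation assumption):

```python
import numpy as np

def conv1d_rows(Z, filters):
    """Apply k 1-D filters of width w along each row of Z (valid mode).

    Z       : (r, T) latent space, one row vector per hidden sequence
    filters : (k, w)
    Returns H of shape (k, r, T - w + 1).
    """
    k, w = filters.shape
    r, T = Z.shape
    H = np.empty((k, r, T - w + 1))
    for i in range(k):
        for j in range(r):
            H[i, j] = np.correlate(Z[j], filters[i], mode="valid")
    return H

# toy latent space: each row is a linear ramp, so a [1, 0, -1] filter gives -2 everywhere
Z = np.arange(12, dtype=float).reshape(2, 6)
H = conv1d_rows(Z, np.array([[1.0, 0.0, -1.0]]))
```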
Temporal attention is then used to adaptively select the relevant encoder hidden states across all time steps. The attention weight β_t^i at each time step t is calculated from the previous hidden state d_{t-1} and cell state s′_{t-1} of the LSTM unit:

l_t^i = v_d^T tanh(W_d[d_{t-1}; s′_{t-1}] + U_d H_i^C)

β_t^i = exp(l_t^i) / Σ_{j} exp(l_t^j)

where [d_{t-1}; s′_{t-1}] is the concatenation of the previous hidden state and cell state of the LSTM unit, and v_d, W_d and U_d represent the learnable parameters. The attention mechanism uses the convolution outputs H^C and the attention weights β_t^i to compute the context vector c_t, which represents the importance of the encoder hidden states:

c_t = Σ_i β_t^i H_i^C
We then combine the context vector c_t with the given target sequence y_{t-1}:

ỹ_{t-1} = w̃^T[y_{t-1}; c_{t-1}] + b̃

where [y_{t-1}; c_{t-1}] is the concatenation of the target sequence value y_{t-1} and the weighted context vector c_{t-1}, and w̃ and b̃ are parameters that map the concatenation to the decoder input size. A non-linear function such as an LSTM unit can then be used to update the decoder hidden state d_t at time t:

d_t = f_1(d_{t-1}, ỹ_{t-1})
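The temporal-attention step and the decoder-input mapping can be sketched together in NumPy (shapes and random parameter values are illustrative assumptions, not the patent's trained parameters):

```python
import numpy as np

def temporal_attention(H, d_prev, s_prev, W_d, U_d, v_d):
    """Temporal attention over encoder feature columns H_i.

    H : (L, p) features at L time positions, feature size p.
    Returns the context vector c_t = sum_i beta_i H_i and the weights beta.
    """
    ds = np.concatenate([d_prev, s_prev])            # [d_{t-1}; s'_{t-1}]
    l = np.array([v_d @ np.tanh(W_d @ ds + U_d @ H[i]) for i in range(H.shape[0])])
    beta = np.exp(l - l.max())
    beta /= beta.sum()                               # SoftMax over time positions
    return beta @ H, beta                            # context vector, weights

def decoder_input(y_prev, c_prev, w_tilde, b_tilde):
    """y~_{t-1} = w~^T [y_{t-1}; c_{t-1}] + b~: map the concatenation to the decoder input."""
    return w_tilde @ np.concatenate([[y_prev], c_prev]) + b_tilde

rng = np.random.default_rng(2)
L, p, m = 5, 3, 4
H = rng.standard_normal((L, p))
c_t, beta = temporal_attention(H, rng.standard_normal(m), rng.standard_normal(m),
                               rng.standard_normal((m, 2 * m)), rng.standard_normal((m, p)),
                               rng.standard_normal(m))
y_tilde = decoder_input(0.7, c_t, rng.standard_normal(p + 1), 0.1)
```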
As shown in fig. 5. The discriminator network consists of a three-layer convolutional network, and the one-dimensional convolutional network can better capture interesting features from the whole data and distinguish the data. It is well known that a arbiter is used to determine whether data is from real data or generator data, and should give as accurate a determination as possible. The generator is used for generating data, and the generated data is used as a confusion discriminator as far as possible. In order to reduce the loss of least squares, the generator must pull the generated data away from the decision boundary to the decision boundary under the premise of a confusion arbiter. We believe that using least squares as the loss function can effectively improve the quality and stability of the data generation. The least squares loss function is expressed as follows:
min_D V(D) = (1/2) E_y[(D(y) - a)^2] + (1/2) E_Z[(D(G(Z)) - b)^2]

min_G V(G) = (1/2) E_Z[(D(G(Z)) - c)^2]
where D(·) denotes the discriminator network, G(·) denotes the generator network, and the potential space Z is produced by the encoder network. The constants a and b are the labels of the real data and the generated data, respectively, and c is the value that the generator wants the discriminator to assign to generated performance parameter data so that it is judged real.
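A direct transcription of the two least-squares objectives (the label values a = 1, b = 0, c = 1 are illustrative choices; the patent leaves them as constants):

```python
import numpy as np

def d_loss(d_real, d_fake, a=1.0, b=0.0):
    """Discriminator least-squares loss: push D(real) toward label a and D(G(Z)) toward b."""
    return 0.5 * np.mean((d_real - a) ** 2) + 0.5 * np.mean((d_fake - b) ** 2)

def g_loss(d_fake, c=1.0):
    """Generator least-squares loss: push D(G(Z)) toward the value c the generator targets."""
    return 0.5 * np.mean((d_fake - c) ** 2)

d_real = np.array([0.9, 0.8])   # discriminator outputs on real performance data
d_fake = np.array([0.1, 0.2])   # discriminator outputs on generated data
```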
The loss function of the model we propose is the mean squared error, which can be expressed as:

Loss = (1/N) Σ_{i=1}^{N} (ŷ_T^i - y_T^i)^2

where ŷ_T^i is the prediction vector, y_T^i is the true vector, and N is the number of training samples. The Adam optimizer and the back propagation algorithm are used to train our models.
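A minimal sketch of the MSE loss with a hand-rolled Adam update on a one-parameter toy model (the toy data and learning rate are illustrative; a real implementation would use a framework optimizer):

```python
import numpy as np

def mse(y_pred, y_true):
    """Mean squared error over N training samples."""
    return np.mean((y_pred - y_true) ** 2)

def adam_step(theta, grad, state, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected first/second moment estimates of the gradient."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

# fit y = 2x with a single weight w by minimizing the MSE with Adam
rng = np.random.default_rng(3)
x = rng.standard_normal(64)
y = 2.0 * x
w = 0.0
state = {"t": 0, "m": 0.0, "v": 0.0}
loss0 = mse(w * x, y)
for _ in range(500):
    grad = np.mean(2 * (w * x - y) * x)   # d/dw of the MSE
    w = adam_step(w, grad, state, lr=0.05)
loss1 = mse(w * x, y)
```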
The foregoing description of the embodiments illustrates the general principles of the invention and is not intended to limit the invention to the particular embodiments or to restrict its scope; any modifications, equivalents, improvements and the like made within the spirit and principles of the invention are intended to fall within its scope.

Claims (10)

1. The method for predicting the operation performance trend of the voltage transformer is characterized by comprising the following steps of:
s1, acquiring historical data of environmental parameters and performance parameters in an area where a voltage transformer is located, preprocessing the historical data, and obtaining correlation of multi-element driving time sequences of the environmental parameters and the performance parameters;
s2, acquiring time sequence correlation of hidden information potential space in correlation of a multi-element driving time sequence by adopting time convolution attention, and generating a voltage transformer performance parameter predicted value according to the time sequence correlation;
s3, constructing a multi-attention generative adversarial network, carrying out adversarial training on the network by adopting a back propagation algorithm, and updating the parameters in the network adversarially according to the back-propagated gradient information;
s4, inputting the environmental parameters and performance parameters of the area where the voltage transformer is located into the trained adversarial network, and outputting future predicted values of the performance parameters of that area.
2. The method for predicting the operation performance trend of the voltage transformer according to claim 1, wherein the preprocessing of the historical data to obtain the correlation between the environment parameter and the multi-component driving time sequence of the performance parameter specifically comprises:
acquiring environment parameters and performance parameter sequences of a kth voltage transformer with a time length of T and vectors of the environment parameters and the performance parameters of n voltage transformers at the time T, and determining potential space;
the time step of the encoder result is automatically selected by a time attention mechanism, and all environment parameters and performance parameter target sequences in the past T time step are obtained.
3. The method for predicting the operation performance trend of a voltage transformer according to claim 2, wherein the determining the potential space specifically comprises:
obtaining a learnable parameter of an environmental parameter and a performance parameter, and obtaining an attention weight in time T according to a given T input sequence;
and according to the attention weight, the environmental parameter and the performance parameter time series data, multiplying the attention weight by the voltage transformer performance parameter series to obtain a potential space.
4. The method for predicting the operation performance trend of a voltage transformer according to claim 1, wherein the generating the predicted value of the performance parameter of the voltage transformer according to the time sequence correlation specifically comprises:
acquiring given T target values, and determining all environmental parameters and performance parameter target sequences in the past T time steps;
and determining a predicted value and a predicted target sequence according to the potential space and the target sequence.
5. The method for predicting the operation performance trend of a voltage transformer according to claim 4, wherein the predicted value of the performance parameter of the voltage transformer comprises a ratio error, a phase error, a dielectric loss value and a capacitance of the capacitive voltage transformer, the ratio error and the phase error are used for reflecting the metering performance of the capacitive voltage transformer, and the dielectric loss value and the capacitance are used for reflecting the insulation performance of the capacitive voltage transformer;
the specific calculation step of the parameter predicted value comprises the following steps:
acquiring a secondary voltage effective value, a secondary voltage phase, a leakage current phase and a leakage current effective value of the same capacitive voltage transformer, acquiring a secondary voltage effective value and a secondary voltage phase of a reference voltage transformer under the same voltage class, and acquiring a phase and an angular frequency;
determining a ratio error according to the effective value of the secondary voltage of the same capacitive voltage transformer and the effective value of the secondary voltage of the reference voltage transformer under the same voltage class;
determining a phase error according to the secondary voltage phase of the same capacitive voltage transformer and the secondary voltage phase of the reference voltage transformer under the same voltage level;
determining a dielectric loss value according to the secondary voltage effective value and the leakage current phase of the same capacitive voltage transformer;
and determining the capacitance value according to the effective value of the secondary voltage and the effective value of the leakage current of the same capacitive voltage transformer.
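For illustration only (the claim states which measured quantities each indicator depends on, but gives no explicit formulas), the four quantities can be computed along the following lines using standard definitions; the variable names and the tan-delta convention are assumptions:

```python
import math

def ratio_error(u_x, u_0):
    """Ratio error (%) of the CVT secondary voltage u_x against the reference u_0."""
    return (u_x - u_0) / u_0 * 100.0

def phase_error(phi_x, phi_0):
    """Phase error (rad): CVT secondary-voltage phase minus the reference phase."""
    return phi_x - phi_0

def tan_delta(phi_u, phi_i):
    """Dielectric loss tangent: tan of 90 deg minus the current-voltage phase lead."""
    return math.tan(math.pi / 2 - (phi_i - phi_u))

def capacitance(i_rms, u_rms, omega):
    """Capacitance from leakage-current and voltage RMS values: C = I / (omega * U)."""
    return i_rms / (omega * u_rms)

# e.g. at 50 Hz power frequency
omega = 2 * math.pi * 50.0
```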
6. The method for predicting the operation performance trend of a voltage transformer according to claim 4, wherein the adversarial training of the multi-attention generative adversarial network by using a back propagation algorithm specifically comprises:
acquiring the class labels and the actual values of the environmental parameters and performance parameters in the area where the voltage transformer is located, and training with the predicted target sequence to obtain the least-squares loss between the prediction at each time step and the sequence label;
obtaining the least-squares loss between the predicted value and the actual-value label according to the potential space and the least-squares loss between the prediction at each time step and the sequence label;
constructing a least-squares loss function from the least-squares loss between the prediction at each time step and the sequence label and the least-squares loss between the predicted value and the actual-value label;
training the network parameters of the generator according to the least-squares loss function between the predicted value and the actual-value label, and training the network parameters of the discriminator by judging whether the predicted value is true or false through the discriminator, thereby completing the adversarial training of the generative adversarial network.
7. A voltage transformer operational performance trend prediction system, comprising: encoder, generator and arbiter:
s1, acquiring historical data of environmental parameters and performance parameters in an area where a voltage transformer is located, and capturing correlation of a multi-element driving time sequence of the environmental parameters and the performance parameters through an encoder;
s2, acquiring the time sequence correlation of hidden information potential space in the correlation of the multi-element driving time sequence by adopting time convolution attention through a generator, and generating a voltage transformer performance parameter predicted value according to the time sequence correlation;
s3, constructing a multi-attention generative adversarial network with the discriminator, carrying out adversarial training on the network by adopting a back propagation algorithm, and updating the network parameters of the generator and the discriminator according to the back-propagated gradient information;
s4, inputting the environmental parameters and performance parameters of the area where the voltage transformer is located into the trained adversarial network, and outputting future predicted values of the performance parameters of that area.
8. The system for predicting the operational performance trend of a voltage transformer according to claim 7, wherein the encoder is configured to collect environmental parameters and performance parameters of the capacitive voltage transformer, obtain correlation of multiple driving time sequences of the environmental parameters and the performance parameters, and transmit the collected data to the generator in real time;
the encoder includes an input attention module and a self-attention module:
the input attention module and the self attention module are used for processing the voltage transformer performance parameter change sequence and generating potential space capable of maintaining relation information.
9. The system of claim 7, wherein the generator is configured to receive the environmental parameter and the performance parameter of the capacitive voltage transformer obtained by the encoder, calculate a ratio error, a phase error, a dielectric loss value, and a capacitance of the capacitive voltage transformer using a time correlation of the time convolution attention selection hiding information, and transmit the predicted value of the performance parameter of the capacitive voltage transformer to the discriminator.
10. The system of claim 7, wherein the discriminator is configured to receive the predicted values of the performance parameters of the capacitive voltage transformer generated by the generator in real time, train the generator network parameters according to the least-squares loss function between the received predicted values and the actual-value labels, and train the discriminator network parameters by discriminating between the predicted values and the measured performance parameter data of actual voltage transformers, thereby completing the adversarial training of the generative adversarial network.
CN202211370447.9A 2022-11-03 2022-11-03 Method and system for predicting operation performance trend of voltage transformer Pending CN116090328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211370447.9A CN116090328A (en) 2022-11-03 2022-11-03 Method and system for predicting operation performance trend of voltage transformer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211370447.9A CN116090328A (en) 2022-11-03 2022-11-03 Method and system for predicting operation performance trend of voltage transformer

Publications (1)

Publication Number Publication Date
CN116090328A true CN116090328A (en) 2023-05-09

Family

ID=86185705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211370447.9A Pending CN116090328A (en) 2022-11-03 2022-11-03 Method and system for predicting operation performance trend of voltage transformer

Country Status (1)

Country Link
CN (1) CN116090328A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118642030A (en) * 2024-08-13 2024-09-13 国网福建省电力有限公司 Error prediction method, device and storage medium for capacitive voltage transformer



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination