CN114581699A - Transformer state evaluation method based on deep learning model in consideration of multi-source information - Google Patents
- Publication number
- CN114581699A (application CN202210101809.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- transformer
- deep learning
- learning model
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/2415 — Classification techniques based on parametric or probabilistic models
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/045 — Combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/084 — Backpropagation, e.g. using gradient descent
- Y04S10/50 — Systems or methods supporting power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention discloses a transformer state evaluation method based on a deep learning model in consideration of multi-source information. The method collects historical data of the transformer running state as a sample set; preprocesses and labels the data in the sample set; divides the labeled sample data into a training set and a test set; builds a deep learning model based on a bidirectional independent recurrent neural network structure; and trains the model on the training set. The trained model is then checked for overfitting with the test set: if overfitting occurs, the hyper-parameters are adjusted and the model is retrained until no overfitting occurs, yielding an optimal model. Finally, current signal data of the transformer to be evaluated are collected and input into the optimal model to obtain the transformer state evaluation result.
Description
Technical Field
The invention belongs to the technical field of power transformer state evaluation, and relates to a transformer state evaluation method based on a deep learning model in consideration of multi-source information.
Background
The transformer is one of the most critical devices in a power system, and its health bears directly on the normal operation of the whole grid. At present, most transformer fault diagnosis methods use a gas chromatograph to analyze the components and concentrations of gases dissolved in transformer oil. Although this analysis technology is mature and widely applied to monitoring transformer operating states, it suffers from long analysis times, short service life, consumption of carrier gas, inability to analyze carbon dioxide concentration, relatively complex routine maintenance, and the safety hazard of compressed gas.
Notably, with the growing power-electronic character of the power system and the continuing development of AC-DC hybrid distribution networks, the transformer state is shaped by the combined action of many external and internal factors such as temperature, voltage, current, and vibration, and evaluation results show diversity, complexity, uncertainty, and deep coupling. Moreover, because sensors sit in different positions, transformers in the same state may exhibit different phenomena, and a single sensor can detect only one type of state information, so the misjudgment rate is high.
Methods that rely only on oil chromatography or on voltage and current signals therefore draw on a single information source; they are easily influenced by the environment and other factors and suffer from information islanding, which leads to misjudgment and low accuracy in transformer state evaluation. The influence of multi-source information must therefore be considered in transformer state evaluation, and that information must be used fully and efficiently.
However, this multi-source information involves multi-source, high-dimensional, and nonlinear influence factors, while the prior art mainly deduces the transformer state with shallow learning methods applied to the chemical composition analysis of transformer oil, such as threshold setting and principal component analysis. These traditional methods cannot mine deep correlated information and can hardly handle the complexity of transformer state analysis under the combined action of multiple factors.
Disclosure of Invention
The invention aims to provide a transformer state evaluation method based on a deep learning model in consideration of multi-source information, and solves the problem that the existing transformer state evaluation method only considers a single influence factor to cause low accuracy of an evaluation result.
The invention adopts the technical scheme that a transformer state evaluation method based on a deep learning model in consideration of multi-source information comprises the following steps:
step 1, collecting historical data of a transformer running state as a sample set, preprocessing the data in the sample set, and then labeling the preprocessed data in the sample set, wherein each sample data comprises input data and output data, the input data is transformer signal data, and the output data is a transformer state type number corresponding to the input data;
step 2, dividing the labeled sample data, randomly selecting 80% of the labeled sample data as a training set, and selecting the rest 20% of the labeled sample data as a test set;
step 3, building a deep learning model based on a bidirectional independent circulation neural network structure, and training the built deep learning model by adopting sample data in a training set to obtain a trained deep learning model;
step 4, overfitting judgment is carried out on the trained deep learning model by adopting sample data in the test set, if the overfitting phenomenon does not occur to the trained model, the model is an optimal model, if the overfitting phenomenon occurs to the trained model, the hyper-parameters of the model are adjusted, then the model is retrained until the overfitting phenomenon does not occur, and the optimal model is obtained;
and 5, acquiring current signal data of the transformer to be evaluated and inputting the current signal data into the optimal model to obtain a transformer state evaluation result.
In the step 1, the historical data of the running state of the transformer comprises temperature signal data, humidity signal data, input voltage signal data, output voltage signal data, input current signal data, output current signal data, vibration signal data and transformer oil near infrared spectrum curve data of the transformer.
In step 1, preprocessing the data in the sample set means normalizing the data to obtain the normalized output x*:

x* = (x - x_min) / (x_max - x_min)

where x_max is the maximum value and x_min is the minimum value of each type of data in the input sample set.
In the step 1, the transformer state types comprise nine types, namely normal, low-energy discharge, high-energy discharge, partial discharge, low-temperature overheat, medium-temperature overheat, high-temperature overheat, low-energy discharge and overheat and high-energy discharge and overheat, and the transformer state types are numbered as 1-9 corresponding to the transformer state types one by one.
The bidirectional independent recurrent neural network structure comprises a forward independent recurrent neural network part and a backward independent recurrent neural network part, which extract historical-time and future-time dependency information respectively; the forward and backward output data are stacked to obtain a bidirectional reference and an accurate evaluation result h_t:

h_t = [h_t^f ; h_t^b]

where h_t^f is the result computed by the forward independent recurrent neural network and h_t^b is the result computed by the backward independent recurrent neural network.
The forward independent recurrent neural network extracts the historical dependency information of the current input time series and infers the output at the current time from the input data at the current time and the input data before it:

h_t^f = σ(W x_t + u ⊙ h_(t-1)^f + b)

where W is the weight matrix connected to the input x_t, u is the recurrent weight vector connected to the previous cell state h_(t-1)^f, b is the bias matrix, ⊙ denotes the Hadamard operator, and σ is the activation function.
The backward independent recurrent neural network extracts dependency information with the same algorithm as the forward network, but first flips the input time series in the time dimension, from {x_1, x_2, ..., x_n} (n is the number of time steps) to {x_n, ..., x_2, x_1}, thereby obtaining the reverse-time dependency of the data, i.e. the future-time dependency at the current moment.
The deep learning model framework based on the bidirectional independent recurrent neural network structure comprises three parts: an input layer, a hidden layer, and an output layer. The input layer provides a data input interface for the model, processes input data into matrix data, and optimizes the data distribution. The hidden layer comprises multiple layers of bidirectional independent recurrent neural networks that perform feature learning on the matrix data prepared by the input layer, abstracting the data features step by step through the hierarchical relation so as to extract features that characterize the transformer fault state. The output layer outputs the transformer state evaluation result.
The output layer comprises a fully connected part and a Softmax layer. The fully connected layer maps the feature vector extracted by the hidden layer into a 9-element vector; the Softmax classifier then yields the probabilities of the 9 transformer states, and the maximum probability value is taken as the transformer state evaluation result, expressed as the number corresponding one-to-one to the transformer state type.
Further, in step 4, overfitting judgment is performed on the trained deep learning model with the sample data in the test set: if the test accuracy drops by more than 5% relative to the training accuracy, the trained model is overfitted; the hyper-parameters of the model are then adjusted and the model is retrained until no overfitting occurs, giving the optimal model.
The method has the following advantages. A deep learning model based on the bidirectional independent recurrent neural network extracts abstract features layer by layer from multi-source information data; on this basis it dynamically models the nonlinear mapping between the multi-source information and the transformer fault state, analyzes and discovers the internal rules among data items and the coupling relations among multiple factors, and makes full use of voltage, current, near-infrared spectrum, vibration, temperature, and humidity signals. It thus fully mines the action rules and influence mechanisms between the transformer fault state and each class of multi-source factors, avoids analysis based on artificial experience rules, realizes collaborative multi-source evaluation of the transformer state, and improves the accuracy of the analysis result.
Drawings
FIG. 1 is a schematic flow chart of a transformer state evaluation method based on a deep learning model in consideration of multi-source information according to the present invention;
FIG. 2 is a schematic structural diagram of a deep learning model based on a bidirectional independent recurrent neural network in the invention;
FIG. 3 is a schematic diagram of a bidirectional independent-cycle neural network according to the present invention;
FIG. 4 is a schematic structural diagram of a forward independent recurrent neural network and a backward independent recurrent neural network according to the present invention;
FIG. 5 is a diagram of an accuracy change trend in a model training process according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a transformer state evaluation method based on a deep learning model in consideration of multi-source information, which comprises the following steps of:
step 1, collecting historical data of a transformer running state as a sample set, preprocessing the data in the sample set, and then labeling the preprocessed data in the sample set, wherein each sample data comprises input data and output data, the input data is transformer signal data, and the output data is a transformer state type number corresponding to the input data;
the historical data of the running state of the transformer comprises temperature signal data, humidity signal data, input voltage signal data, output voltage signal data, input current signal data, output current signal data, vibration signal data and transformer oil near infrared spectrum curve data of the transformer.
The method comprises the steps that a temperature sensor arranged on a transformer is used for collecting temperature signal data of the transformer, a temperature sensor arranged in the surrounding environment of the transformer is used for collecting environment temperature signal data, a humidity sensor arranged in the surrounding environment of the transformer is used for collecting environment humidity signal data, and a voltage sensor is used for collecting voltage signal data of an input end of the transformer; collecting voltage signal data at the output end of the transformer by using a voltage sensor, collecting current signal data at the input end of the transformer by using a current sensor, and collecting current signal data at the output end of the transformer by using the current sensor; the vibration signal data of the transformer is acquired by using the motion sensor, and the near infrared spectrum curve data of the transformer oil is acquired by using the near infrared spectrum sensor.
Because the sensor types differ, the ranges of the acquired data differ widely, so the data in the sample set must be preprocessed, i.e. normalized, to obtain the normalized data output x*:

x* = (x - x_min) / (x_max - x_min)

where x_max is the maximum value and x_min is the minimum value of each type of data in the input sample set.
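The patent gives no code; as a minimal sketch, the per-signal min-max normalization above can be written as follows (the sample values are hypothetical):

```python
import numpy as np

def min_max_normalize(x):
    """Min-max normalize one type of signal data into [0, 1]:
    x* = (x - x_min) / (x_max - x_min), as in step 1."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    if x_max == x_min:            # constant signal: avoid division by zero
        return np.zeros_like(x)
    return (x - x_min) / (x_max - x_min)

temps = [25.0, 40.0, 55.0, 70.0]  # hypothetical temperature readings
print(min_max_normalize(temps))    # values scaled into [0, 1]
```

In practice the minimum and maximum would be computed per data type over the whole sample set, as the text specifies, not per batch.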
The transformer state types comprise nine types, namely normal, low-energy discharge, high-energy discharge, partial discharge, low-temperature overheat, medium-temperature overheat, high-temperature overheat, low-energy discharge and overheat and high-energy discharge and overheat, and the number of the transformer state types is 1-9 corresponding to the transformer state types one to one.
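The one-to-one numbering of the nine state types can be sketched as a simple lookup table; the mapping below follows the order listed in the text (the helper name `label_of` is ours):

```python
# State-type numbers 1-9, in the order given in step 1 of the patent.
STATE_LABELS = {
    1: "normal",
    2: "low-energy discharge",
    3: "high-energy discharge",
    4: "partial discharge",
    5: "low-temperature overheat",
    6: "medium-temperature overheat",
    7: "high-temperature overheat",
    8: "low-energy discharge and overheat",
    9: "high-energy discharge and overheat",
}

def label_of(state_name):
    """Inverse lookup: numeric label for a state name."""
    inverse = {name: num for num, name in STATE_LABELS.items()}
    return inverse[state_name]

print(label_of("partial discharge"))  # 4
```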
Step 2, dividing the labeled sample data, randomly selecting 80% of the labeled sample data as a training set, and using the rest 20% of the labeled sample data as a test set;
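The random 80/20 split of step 2 can be sketched as below; the fixed seed is our addition for reproducibility and is not part of the patent:

```python
import numpy as np

def split_train_test(samples, train_fraction=0.8, seed=0):
    """Randomly split labeled samples into training and test sets
    (80%/20% by default), as in step 2. `samples` is any sequence
    of (input, label) pairs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(len(samples) * train_fraction)
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

data = [([i], i % 9 + 1) for i in range(100)]  # toy labeled samples
train, test = split_train_test(data)
print(len(train), len(test))  # 80 20
```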
and step 3, building a deep learning model based on a bidirectional independent cyclic neural network structure, referring to FIG. 2,
the model frame structure comprises three parts: the system comprises an input layer, a hidden layer and an output layer, wherein the input layer provides a data input interface for a model, processes input data into matrix data which can be efficiently operated and processed in batches, and optimizes data distribution. The hidden layer comprises a plurality of layers of bidirectional independent cyclic neural networks, the characteristic learning is carried out on the matrix data processed and optimized by the input layer, the data characteristics are gradually abstracted through the hierarchical relation, so that the characteristic capable of representing the fault state of the transformer is extracted, and the output layer is used for outputting the evaluation result of the state of the transformer. Wherein, a batch normalization layer is arranged between the two-way independent circulation layers, so that the data output by the neurons of the layer are input to the neural network of the next layer after batch normalization processing.
The output layer comprises a fully connected part and a Softmax layer. The fully connected layer maps the feature vector extracted by the hidden layer into a 9-element vector; the Softmax classifier then yields the probabilities of the 9 transformer states, and the maximum probability value is taken as the transformer state evaluation result, expressed as the number corresponding one-to-one to the transformer state type.
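As a sketch of the output layer just described (toy weight values; the function names are ours), the fully-connected-plus-Softmax mapping to a state number 1-9 looks like this:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the 9 class scores."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def evaluate_state(feature_vec, W, b):
    """Fully connected layer -> softmax -> argmax, returning
    the state number 1-9 with the highest probability."""
    scores = W @ feature_vec + b       # fully connected: 9 scores
    probs = softmax(scores)
    return int(np.argmax(probs)) + 1   # numbering starts at 1

# hypothetical weights: feature dimension 4, 9 output classes
rng = np.random.default_rng(42)
W, b = rng.normal(size=(9, 4)), np.zeros(9)
print(evaluate_state(rng.normal(size=4), W, b))
```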
Referring to fig. 3, the bidirectional independent recurrent neural network structure includes a forward independent recurrent neural network and a backward independent recurrent neural network; it considers both historical and future dependencies, extracts historical-time and future-time dependency information, and stacks the forward and backward output data to obtain a bidirectional reference and an accurate evaluation result h_t:

h_t = [h_t^f ; h_t^b]

where h_t^f is the result computed by the forward independent recurrent neural network and h_t^b is the result computed by the backward independent recurrent neural network.
And a Batch Normalization layer is arranged between the two-way independent circulation layers, so that data output by the neurons in the layer are processed by Batch Normalization and then are output to the neural network in the next layer.
The forward independent recurrent neural network has the same structure as the backward one; as shown in fig. 4, "Weight" and "Recurrent + ReLU" in the figure denote, respectively, the per-step input transformation and the recurrent update with ReLU as the activation function.
The forward independent recurrent neural network extracts the historical dependency information of the current input time series, deducing the output at the current time from the input data at the current time and the input data before it:

h_t^f = σ(W x_t + u ⊙ h_(t-1)^f + b)

where W is the weight matrix connected to the input x_t, u is the recurrent weight vector connected to the previous cell state h_(t-1)^f, b is the bias matrix, ⊙ denotes the Hadamard operator, and σ is the activation function.
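The recurrence above distinguishes an IndRNN from a vanilla RNN: each neuron's state feeds back through a scalar weight (the Hadamard product with u) rather than a full matrix. A minimal single-step sketch, with hypothetical dimensions and ReLU as σ:

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    """One IndRNN step: h_t = ReLU(W x_t + u * h_{t-1} + b),
    where `*` is the element-wise (Hadamard) product."""
    pre = W @ x_t + u * h_prev + b
    return np.maximum(pre, 0.0)        # ReLU activation

# hypothetical sizes: 3 input signals, 5 hidden units
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))
u = rng.uniform(0.0, 1.0, size=5)      # per-neuron recurrent weights
b = np.zeros(5)
h = np.zeros(5)
for x_t in rng.normal(size=(4, 3)):    # run 4 time steps
    h = indrnn_step(x_t, h, W, u, b)
print(h.shape)  # (5,)
```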
The backward independent recurrent neural network extracts dependency information with the same algorithm as the forward network, but first flips the input time series data in the time dimension, from {x_1, x_2, ..., x_n} (n is the number of time steps) to {x_n, ..., x_2, x_1}, thereby obtaining the reverse-time dependency of the data, i.e. the future-time dependency at the current moment.
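The flip-then-step scheme can be sketched generically; the stand-in step function below is a toy cumulative sum, not the IndRNN update, chosen only to make the time alignment visible:

```python
def backward_pass(sequence, step_fn, h0):
    """Run a backward recurrent pass by flipping the sequence in time
    ({x_1..x_n} -> {x_n..x_1}), stepping forward over the flipped copy,
    then flipping the outputs back so index t aligns with the input."""
    flipped = sequence[::-1]
    outputs, h = [], h0
    for x_t in flipped:
        h = step_fn(x_t, h)
        outputs.append(h)
    return outputs[::-1]               # re-align with original time order

# toy step: running sum, so output t depends on inputs x_t..x_n
outs = backward_pass([1.0, 2.0, 3.0], lambda x, h: h + x, 0.0)
print(outs)  # [6.0, 5.0, 3.0]
```

Each output thus summarizes the current input and all future inputs, which is the future-time dependency the text describes.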
In this embodiment, the deep learning model based on the bidirectional independent recurrent neural network has 13 neural network layers: one input layer, then 10 hidden layers that alternate bidirectional independent recurrent layers with batch normalization layers, followed by a fully connected layer and a Softmax layer.
Training a built deep learning model by adopting sample data in a training set, using a loss cost function as an evaluation index of the model accuracy, and further updating model parameters through an optimizer to obtain the trained deep learning model;
the loss cost function is a cross-entropy loss function, parameter updating is performed by using a back propagation algorithm, training is performed by using an Adam optimizer, and in the embodiment, the learning rate is 0.0001, the iteration number is 200, and the generation number is 100.
Fig. 5 shows the accuracy change trend in the model training process, and as the number of iterations increases, the model accuracy increases, and the accuracy rate reaches 100%.
In step 4, overfitting judgment is performed on the trained deep learning model with the sample data in the test set. If the test accuracy drops by no more than 5% relative to the training accuracy, the trained model shows no overfitting and is the optimal model; if it drops by more than 5%, the model is overfitted, so its hyper-parameters are adjusted and it is retrained until no overfitting occurs, giving the optimal model.
The hyper-parameters of the model comprise the number of hidden layers, the learning rate, the training generation, the number of neurons and the type of an activation function;
and 5, acquiring current signal data of the transformer to be evaluated and inputting the current signal data into the optimal model to obtain a transformer state evaluation result.
Claims (10)
1. The transformer state evaluation method based on the deep learning model in consideration of multi-source information is characterized by comprising the following steps of:
step 1, collecting historical data of the running state of a transformer as a sample set, preprocessing the data in the sample set, and then labeling the preprocessed data in the sample set, wherein each sample data comprises input data and output data, the input data is signal data of the transformer, and the output data is a state type number of the transformer corresponding to the input data;
step 2, dividing the labeled sample data, randomly selecting 80% of the labeled sample data as a training set, and using the rest 20% of the labeled sample data as a test set;
step 3, building a deep learning model based on a bidirectional independent circulation neural network structure, and training the built deep learning model by adopting sample data in a training set to obtain a trained deep learning model;
step 4, overfitting judgment is carried out on the trained deep learning model by adopting sample data in the test set, if the overfitting phenomenon does not occur to the trained model, the model is an optimal model, if the overfitting phenomenon occurs to the trained model, the hyper-parameters of the model are adjusted, then the model is retrained until the overfitting phenomenon does not occur, and the optimal model is obtained;
and 5, acquiring current signal data of the transformer to be evaluated and inputting the current signal data into the optimal model to obtain a transformer state evaluation result.
2. The transformer state evaluation method based on the deep learning model in consideration of the multi-source information according to claim 1, wherein in the step 1, the historical data of the operation state of the transformer comprises temperature signal data, humidity signal data, input voltage signal data, output voltage signal data, input current signal data, output current signal data, vibration signal data and transformer oil near infrared spectrum curve data of the transformer.
3. The transformer state evaluation method based on the deep learning model in consideration of the multi-source information as claimed in claim 2, wherein in step 1 the data in the sample set are preprocessed, that is, normalized, to obtain the normalized data output x*:

x* = (x - x_min) / (x_max - x_min)

where x_max is the maximum value and x_min is the minimum value of each type of data in the input sample set.
4. The method for evaluating the state of the transformer based on the deep learning model in consideration of the multi-source information according to claim 3, wherein in the step 1, the types of the transformer states include nine types, namely normal, low-energy discharge, high-energy discharge, partial discharge, low-temperature overheat, medium-temperature overheat, high-temperature overheat, low-energy discharge and overheat, and high-energy discharge and overheat, and the number of the types of the transformer states is 1 to 9 corresponding to the types of the transformer states one to one.
5. The transformer state evaluation method based on the deep learning model in consideration of multi-source information as claimed in claim 4, wherein the bidirectional independent cyclic neural network structure comprises a forward independent cyclic neural network part and a backward independent cyclic neural network part, which extract historical-time and future-time dependency information respectively; the forward and backward output data are stacked to obtain a bidirectional reference, yielding an accurate evaluation result ht at each time step:

ht = [ht(forward), ht(backward)]
6. The method for evaluating the state of the transformer based on the deep learning model in consideration of the multi-source information as claimed in claim 5, wherein the forward independent cyclic neural network extracts the historical dependency information of the current input time series, and infers the output data at the current time from the input data at the current time and the input data before the current time.
7. The transformer state evaluation method based on the deep learning model in consideration of multi-source information as claimed in claim 6, wherein the backward independent cyclic neural network uses the same algorithm as the forward independent cyclic neural network to extract dependency information, but the backward independent cyclic neural network first flips the input time-series data in the time dimension, from {x1, x2, ..., xn}, where n is the number of time steps, to {xn, ..., x2, x1}, so that the reverse time dependency of the data is obtained and the future-time dependency of the data at the current moment is captured.
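The forward/backward scheme of claims 5-7 can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: the element-wise recurrent weight follows the standard IndRNN formulation (h_t = relu(W x_t + u ⊙ h_{t-1} + b)), and all function names, weight shapes, and the ReLU choice are our assumptions:

```python
import numpy as np

def indrnn_forward(xs, W, u, b):
    """One IndRNN pass: the recurrent weight u is element-wise (Hadamard),
    so each hidden unit recurs only with itself:
    h_t = relu(W @ x_t + u * h_{t-1} + b)."""
    h = np.zeros_like(b)
    hs = []
    for x in xs:
        h = np.maximum(0.0, W @ x + u * h + b)
        hs.append(h)
    return np.stack(hs)

def bidirectional_indrnn(xs, Wf, uf, bf, Wb, ub, bb):
    """Run the forward pass as-is; flip the sequence in the time dimension
    ({x1..xn} -> {xn..x1}) for the backward pass, flip its outputs back,
    then stack forward and backward hidden states at each time step."""
    h_fwd = indrnn_forward(xs, Wf, uf, bf)
    h_bwd = indrnn_forward(xs[::-1], Wb, ub, bb)[::-1]
    return np.concatenate([h_fwd, h_bwd], axis=-1)

rng = np.random.default_rng(0)
xs = rng.normal(size=(6, 3))              # 6 time steps, 3 input features
Wf, Wb = rng.normal(size=(2, 4, 3)) * 0.5  # input weights, 4 hidden units
uf, ub = rng.uniform(0.0, 1.0, size=(2, 4))  # element-wise recurrent weights
bf = np.zeros(4)
bb = np.zeros(4)
H = bidirectional_indrnn(xs, Wf, uf, bf, Wb, ub, bb)  # shape (6, 8)
```

At each step the output stacks 4 forward and 4 backward units, so the forward half of H[0] depends only on x1, while the backward half of the last step depends only on xn, matching the flipped-sequence description in claim 7.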
8. The transformer state evaluation method based on the deep learning model in consideration of multi-source information according to claim 7, wherein the deep learning model framework of the bidirectional independent cyclic neural network structure comprises three parts: an input layer, a hidden layer and an output layer; the input layer provides a data input interface for the model, processes the input data into matrix data, and optimizes the data distribution; the hidden layer comprises a plurality of stacked bidirectional independent cyclic neural network layers, which perform feature learning on the matrix data processed and optimized by the input layer and gradually abstract the data features through the hierarchical relationship, thereby extracting features capable of representing the transformer fault state; and the output layer outputs the evaluation result of the transformer state.
9. The method for evaluating the state of the transformer based on the deep learning model in consideration of the multi-source information as claimed in claim 8, wherein the output layer comprises a fully connected layer and a Softmax layer; the fully connected layer maps the feature vector extracted by the hidden layer into a 9-dimensional vector, the Softmax classifier then converts this vector into the probability of each of the 9 transformer states, the state with the maximum probability is taken as the state evaluation result of the transformer, and the result is output as the numeric label corresponding one-to-one to the transformer state type.
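The output layer of claim 9 can be sketched as a fully connected projection followed by a softmax; the weights below are random stand-ins for trained parameters, and the function name and feature dimension are our assumptions:

```python
import numpy as np

def classify(features, W, b):
    """Map a hidden-layer feature vector to 9 logits via a fully connected
    layer, softmax them into per-state probabilities, and return the
    numeric state label (1..9) with the maximum probability."""
    logits = W @ features + b            # shape (9,)
    z = logits - logits.max()            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax: probabilities sum to 1
    state = int(np.argmax(probs)) + 1    # numeric labels are 1-based
    return probs, state

rng = np.random.default_rng(0)
features = rng.normal(size=16)     # assumed hidden feature dimension
W = rng.normal(size=(9, 16))       # stand-in fully connected weights
b = np.zeros(9)
probs, state = classify(features, W, b)
```

Subtracting the maximum logit before exponentiating leaves the softmax output unchanged but prevents overflow for large logits.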
10. The method for evaluating the state of the transformer based on the deep learning model in consideration of the multi-source information according to claim 1, wherein in step 4, the overfitting check is performed on the trained deep learning model using the sample data in the test set; if the test accuracy drops by more than 5%, the trained deep learning model is judged to have overfitted, the hyper-parameters of the model are adjusted, and the model is retrained until overfitting no longer occurs, thereby obtaining the optimal model.
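The acceptance rule of claim 10 reduces to a simple accuracy-gap test; reading the "more than 5%" drop as a gap between training and test accuracy is our interpretation of the claim:

```python
def needs_retraining(train_acc, test_acc, gap_threshold=0.05):
    """Claim-10-style overfitting check (our reading): flag the model for
    hyper-parameter adjustment and retraining when test accuracy falls
    more than 5 percentage points below training accuracy."""
    return (train_acc - test_acc) > gap_threshold

# 8-point gap: overfit, adjust hyper-parameters and retrain.
overfit = needs_retraining(0.98, 0.90)
# 2-point gap: accept as the optimal model.
accepted = not needs_retraining(0.95, 0.93)
```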
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210101809.8A CN114581699A (en) | 2022-01-27 | 2022-01-27 | Transformer state evaluation method based on deep learning model in consideration of multi-source information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114581699A true CN114581699A (en) | 2022-06-03 |
Family
ID=81772454
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115713027A (en) * | 2022-10-31 | 2023-02-24 | 国网江苏省电力有限公司泰州供电分公司 | Transformer state evaluation method, device and system |
CN117571901A (en) * | 2023-11-17 | 2024-02-20 | 承德神源太阳能发电有限公司 | Method, system and equipment for early warning and overhauling faults of photovoltaic power station transformer |
CN117571901B (en) * | 2023-11-17 | 2024-06-11 | 承德神源太阳能发电有限公司 | Method, system and equipment for early warning and overhauling faults of photovoltaic power station transformer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |