CN115563539A - Transformer fault diagnosis method and device, storage medium and equipment - Google Patents

Transformer fault diagnosis method and device, storage medium and equipment

Info

Publication number
CN115563539A
CN115563539A · CN202211278679.1A · CN202211278679A
Authority
CN
China
Prior art keywords
data
model
transformer
different states
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211278679.1A
Other languages
Chinese (zh)
Inventor
龚泽威一
于虹
马显龙
周帅
曹占国
代维菊
李超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of Yunnan Power Grid Co Ltd filed Critical Electric Power Research Institute of Yunnan Power Grid Co Ltd
Priority to CN202211278679.1A
Publication of CN115563539A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01R: MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 31/00: Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R 31/50: Testing of electric apparatus, lines, cables or components for short-circuits, continuity, leakage current or incorrect line connections
    • G01R 31/62: Testing of transformers


Abstract

The embodiment of the invention discloses a transformer fault diagnosis method, a device, a storage medium and equipment. The method comprises the following steps: acquiring gas production rate data and content data of the characteristic gases dissolved in transformer oil in different states, determining a first data set and a second data set with different state labels, and processing the first data set and the second data set to obtain a multi-modal data set; dividing the multi-modal data set into a training set and a test set, and dividing the training set into a model training set and a model verification set; inputting the model training set into an attention-mechanism LSTM self-encoding neural network for training to obtain a transformer fault diagnosis deep learning model, and inputting the model verification set into the transformer fault diagnosis deep learning model to obtain a hyper-parameter-updated deep learning model; and inputting the test set into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states, and determining the fault levels of the transformer in the different states according to those vectors. Among other benefits, the method can determine the degree of a transformer fault.

Description

Transformer fault diagnosis method and device, storage medium and equipment
Technical Field
The invention relates to the field of fault diagnosis, in particular to a transformer fault diagnosis method, a transformer fault diagnosis device, a storage medium and equipment.
Background
The transformer is a key basic device of the power distribution network and is vital to its safe, reliable operation and to the reliability of the power supply to users. Once a transformer fails, a wide-area power outage may occur, which not only affects the normal production of enterprises but also inconveniences daily life and seriously harms the economy and society; therefore, transformers need to be diagnosed.
At present, diagnosing transformer faults requires analyzing the composition, content and production rate of the gases dissolved in the transformer oil to determine whether a fault exists. Current fault diagnosis methods, however, have the following problems: the composition, content and production rate of the gases in the transformer oil cannot be obtained accurately at any time (they rely on historical data such as test data and maintenance records); the transformer must be taken out of operation before it can be diagnosed; only the presence of a fault can be detected, not its degree; diagnosis relies excessively on the experience of professional technicians; and the efficiency of diagnosis is low.
Disclosure of Invention
Therefore, it is necessary to provide a transformer fault diagnosis method, apparatus, storage medium and device that solve the above problems, so that a transformer can be diagnosed without being taken out of operation, the degree of a fault can be determined, diagnosis does not depend on the experience of professional technicians, and diagnosis is efficient.
To achieve the above object, the present invention provides, in a first aspect, a transformer fault diagnosis method, including:
acquiring gas production rate data of the characteristic gases dissolved in transformer oil in different states and determining a first data set with different state labels; acquiring content data of the characteristic gases dissolved in the transformer oil in different states and determining a second data set with different state labels;
preprocessing the first data set and the second data set with different state labels to obtain a multi-modal data set;
dividing the multi-modal data set into a training set and a test set, and dividing the training set into a model training set and a model verification set;
establishing an attention-mechanism LSTM self-encoding neural network, inputting the model training set into the attention-mechanism LSTM self-encoding neural network for training to obtain a transformer fault diagnosis deep learning model, and inputting the model verification set into the transformer fault diagnosis deep learning model to obtain a hyper-parameter-updated deep learning model;
and inputting the test set into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states, and determining the fault levels of the transformer in the different states according to the real-valued vectors in the different states.
Optionally, preprocessing the first data set and the second data set with different state labels to obtain a multi-modal data set includes:
sequentially performing data cleaning, differential denoising, data outlier processing and standardization on the first data set and the second data set with the different state labels to obtain the multi-modal data set.
Optionally, the dividing the multi-modal dataset into a training set and a test set, and dividing the training set into a model training set and a model verification set includes:
dividing 80% of the multi-modal data set into the training set and 20% of the multi-modal data set into the test set;
and dividing 80% of the training set into the model training set and 20% of the training set into the model verification set.
Optionally, the loss function used when training the attention-mechanism LSTM self-encoding neural network is a cross-entropy loss function;
the expression of the cross entropy loss function is
$$L = -\frac{1}{n}\sum_{j=1}^{n}\left[\,y_j \log p_j + (1-y_j)\log(1-p_j)\,\right]$$
where L is the cross entropy, n is the number of data in the model training set, y_j is the value of the state label corresponding to the j-th state, the value range of y_j is [0,1] (the closer to 1, the lower the transformer fault level; the closer to 0, the higher), p_j is the real-number-domain result for the j-th state, and log is the logarithm with base e (the natural constant).
Optionally, inputting the test set into the hyper-parameter-updated deep learning model to obtain the real-valued vectors in different states includes:
inputting the test set into the hyper-parameter-updated deep learning model to obtain real-number-domain results in different states;
and mapping the real-number-domain results in the different states with an activation function to obtain the real-valued vectors in the different states.
Optionally, the activation function is a normalized exponential function;
the real-valued vectors in the different states are determined using the formula
$$\sigma(p)_j = \frac{e^{p_j}}{\sum_{k=1}^{N} e^{p_k}}$$
where σ(p)_j is the component of the real-valued vector for the j-th state, the value range of σ(p)_j is [0,1] (the closer to 1, the lower the transformer fault level; the closer to 0, the higher), p_j is the real-number-domain result for the j-th state, N is the number of characteristic gas types, and e is the natural constant.
Optionally, the method further comprises:
if the real-valued vector is greater than or equal to a preset value, outputting prompt information;
if the real-valued vector is smaller than the preset value, outputting alarm information;
wherein the preset value is greater than 0 and less than 1.
To achieve the above object, the present invention provides, in a second aspect, a transformer fault diagnosis apparatus, the apparatus including:
the model data determining module is used for acquiring gas production rate data of the characteristic gases dissolved in transformer oil in different states, determining first data sets with different state labels, acquiring content data of the characteristic gases dissolved in the transformer oil in different states, and determining second data sets with different state labels;
the data set processing module is used for preprocessing the first data set and the second data set with different state labels to obtain a multi-modal data set;
the data set dividing module is used for dividing the multi-modal data set into a training set and a test set and dividing the training set into a model training set and a model verification set;
the model determining module is used for establishing an attention-mechanism LSTM self-encoding neural network, inputting the model training set into the attention-mechanism LSTM self-encoding neural network for training to obtain a transformer fault diagnosis deep learning model, and inputting the model verification set into the transformer fault diagnosis deep learning model to obtain a hyper-parameter-updated deep learning model;
and the fault level determining module is used for inputting the test set into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states, and determining the fault levels of the transformer in the different states according to the real-valued vectors in the different states.
To achieve the above object, the present invention provides in a third aspect a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method according to the first aspect.
To achieve the above object, the present invention provides in a fourth aspect a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to the first aspect.
The embodiment of the invention has the following beneficial effects: gas production rate data of the characteristic gases dissolved in transformer oil in different states are acquired and a first data set with different state labels is determined; content data of the characteristic gases dissolved in the transformer oil in different states are acquired and a second data set with different state labels is determined; the first data set and the second data set with the different state labels are preprocessed to obtain a multi-modal data set; the multi-modal data set is divided into a training set and a test set, and the training set is divided into a model training set and a model verification set; an attention-mechanism LSTM self-encoding neural network is established, the model training set is input into it for training to obtain a transformer fault diagnosis deep learning model, and the model verification set is input into that model to obtain a hyper-parameter-updated deep learning model; and the test set is input into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states, from which the fault levels of the transformer in the different states are determined. Because the gas production rate data and content data of the characteristic gases dissolved in the transformer oil in different states are obtained through real-time analysis, yielding the first and second data sets with different state labels that are used to train and test the model and finally give the transformer fault levels in different states, the method has the following advantages: the gas composition, content and production rate in the transformer oil can be obtained accurately at any time; the transformer can be diagnosed without being taken out of operation; the degree of a transformer fault can be determined; diagnosis does not require the experience of professional technicians; and the efficiency of transformer fault diagnosis is high.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Wherein:
fig. 1 is a schematic flowchart of a transformer fault diagnosis method in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a transformer fault diagnosis device in an embodiment of the present application;
FIG. 3 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic flow chart of a transformer fault diagnosis method in an embodiment of the present application is shown, where the method includes:
step 110: the method comprises the steps of obtaining gas production speed data of characteristic gas dissolved in transformer oil under different states, determining first data sets with different state labels, obtaining content data of the characteristic gas dissolved in the transformer oil under different states, and determining second data sets with different state labels.
Wherein the characteristic gas comprises one or more of hydrogen, carbon monoxide, methane, ethylene, acetylene, ethane, and carbon dioxide. It is understood that the gas production rate data and the content data include the composition of the characteristic gas.
It should be noted that the first data set and the second data set with different state labels may be determined by analyzing the oil chromatogram of the transformer in real time: gas production rate data of the characteristic gases dissolved in the transformer oil in different states are acquired continuously within a preset time period to determine the first data sets with different state labels, and content data of the characteristic gases dissolved in the transformer oil in different states are acquired continuously within the preset time period to determine the second data sets with different state labels. Other acquisition manners are also possible and are not limited here.
It should be further noted that the gas production rate data and the content data of the characteristic gases dissolved in the transformer oil in different states are acquired in order to determine the first and second data sets with different state labels, so that the finally trained model can judge the fault levels of the transformer in different states. It can be understood that each state corresponds to one state label.
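For illustration only, the two labeled data sets might be assembled from online oil-chromatography samples roughly as follows; this is a sketch, and the column names, sampling convention and helper function are assumptions, not part of the claimed method:

    import pandas as pd

    # Hypothetical online oil-chromatography table: one row per sampling time,
    # one column per characteristic gas dissolved in the transformer oil.
    GASES = ["H2", "CO", "CH4", "C2H4", "C2H2", "C2H6", "CO2"]

    def build_datasets(samples: pd.DataFrame, state_label: float):
        """Derive the content data set (second data set) and the gas production
        rate data set (first data set), each tagged with one state label."""
        content = samples[GASES].copy()      # dissolved-gas content data
        rate = content.diff().dropna()       # production rate: change per sampling interval
        content = content.iloc[1:].copy()    # align content rows with the rate rows
        rate["label"] = state_label          # state label in [0, 1]
        content["label"] = state_label
        return rate, content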
Step 120: the first data set and the second data set with the different state labels are preprocessed to obtain a multi-modal data set.
Here, the data preprocessing includes, but is not limited to, data cleaning, differential denoising, data outlier processing and standardization. It can be understood that preprocessing the first and second data sets with different state labels eliminates the noise and anomalies present in their data and eliminates the differences between attributes of the data in the two data sets.
It should be noted that the multi-modal data set includes the processed first and second data sets with different state labels; that is, the multi-modal data set includes the gas production rate data and content data of the characteristic gases dissolved in the transformer oil in different states.
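One possible reading of these preprocessing stages is sketched below; the concrete operators are not fixed by this description, so the rolling-median filter (standing in for the differential denoising), the 3-sigma outlier rule and the z-score standardization are assumptions:

    import pandas as pd

    def preprocess(first: pd.DataFrame, second: pd.DataFrame) -> pd.DataFrame:
        """Fuse the two data sets into one multi-modal feature table.
        (State-label columns are assumed to be held out and re-attached after.)"""
        data = pd.concat([first.add_prefix("rate_"),
                          second.add_prefix("content_")], axis=1)
        data = data.dropna().drop_duplicates()           # data cleaning
        data = data.rolling(3, min_periods=1).median()   # denoising stand-in
        z = (data - data.mean()) / data.std()
        data = data.mask(z.abs() > 3).interpolate()      # outlier handling (3-sigma)
        return (data - data.mean()) / data.std()         # standardization (z-score)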
Step 130: the multi-modal dataset is divided into a training set and a test set, and the training set is divided into a model training set and a model verification set.
Here, 80% of the multi-modal data set may be divided into the training set and 20% into the test set; 80% of the training set is divided into the model training set and 20% into the model verification set. It can be understood that the 80/20 division into training and test sets allows the model to be trained well (determining the parameters of the fitted curve) while still permitting an evaluation of the learner's generalization error: if the test set were too small, the estimate of the model's generalization error would be inaccurate. Dividing the training set 80/20 into a model training set and a model verification set makes it possible to dynamically track the training and verification loss values, monitor the training process and prevent overfitting. The present application therefore makes trade-offs when partitioning the multi-modal data set so that the partition is reasonable.
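A minimal split sketch using scikit-learn, assuming the multi-modal data set has already been arranged as a feature matrix X and a state-label vector y (illustrative names):

    from sklearn.model_selection import train_test_split

    # 80/20 split of the multi-modal data set, then 80/20 of the training part.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True)
    X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, test_size=0.2, shuffle=True)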
Step 140: an attention-mechanism LSTM self-encoding neural network is established; the model training set is input into it for training to obtain a transformer fault diagnosis deep learning model, and the model verification set is input into the transformer fault diagnosis deep learning model to obtain a hyper-parameter-updated deep learning model.
Here, LSTM stands for Long Short-Term Memory.
It should be noted that the attention-mechanism LSTM self-encoding neural network consists of three parts: an encoder, an attention-mechanism module and a decoder. The encoder and the attention-mechanism module adopt a multi-layer LSTM network, which extracts the feature information of the model training set well; the decoder adopts a sequence LSTM network, which helps propagate the extracted features into high-dimensional feature information, giving a uniform local-to-global characterization of the implicit features in the different attribute data of the training set. The multi-layer LSTM network comprises three LSTM layers and one attention layer; after the three LSTM layers extract the feature information of the model training set, the attention layer further refines it to retain the main feature vectors.
It should be further noted that the attention-mechanism LSTM self-encoding neural network is a deep learning model whose parameters are set as follows: the number of iterations is 800, the initial learning rate is 0.001, and the loss function is the cross-entropy loss function.
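A minimal PyTorch sketch of such a network follows. Only its overall shape (three stacked LSTM layers plus an attention layer as the encoder, a sequence LSTM decoder, 800 iterations, initial learning rate 0.001, cross-entropy loss) follows the description above; the hidden size, the input width and the batch tensors are illustrative assumptions:

    import torch
    import torch.nn as nn

    class AttentionLSTMAutoencoder(nn.Module):
        def __init__(self, n_features: int, hidden: int = 64):
            super().__init__()
            # Encoder: three stacked LSTM layers, per the description above.
            self.encoder = nn.LSTM(n_features, hidden, num_layers=3, batch_first=True)
            self.attn = nn.Linear(hidden, 1)   # attention layer: scores each time step
            # Decoder: a sequence LSTM propagating the retained features.
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)   # real-number-domain result p

        def forward(self, x):                         # x: (batch, time, n_features)
            h, _ = self.encoder(x)                    # (batch, time, hidden)
            w = torch.softmax(self.attn(h), dim=1)    # attention weights over time
            d, _ = self.decoder(w * h)                # weighted (main) feature vectors
            return self.head(d[:, -1, :])             # last step -> one result per sample

    model = AttentionLSTMAutoencoder(n_features=14)   # 7 gases x (rate + content): an assumption
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # initial learning rate 0.001
    criterion = nn.BCEWithLogitsLoss()                # cross-entropy on labels in [0, 1]

    for step in range(800):                           # number of iterations: 800
        optimizer.zero_grad()
        # batch_x: (batch, time, 14) model-training-set samples; batch_y: (batch, 1) labels
        loss = criterion(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()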
Step 150: the test set is input into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states, and the fault levels of the transformer in the different states are determined according to those vectors.
It should be noted that when the test set is input into the hyper-parameter-updated deep learning model, real-number-domain results in different states are obtained; the model then maps these results with an activation function to obtain the real-valued vectors in the different states.
It should be further noted that the value range of the real-valued vector is [0,1]: the closer its value is to 1, the lower the fault level of the transformer, and the closer to 0, the higher. Equivalently, read as a health state, the closer the value is to 1, the better the health state of the transformer, and the closer to 0, the worse.
It should be particularly noted that once the hyper-parameter-updated deep learning model has been obtained through training and testing, subsequent transformer fault diagnosis requires no further training or testing: the gas production rate data and content data of the characteristic gases dissolved in the transformer oil are simply acquired and input into the model, the output real-valued vector is obtained, and the fault level of the transformer is determined from it.
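For illustration only, such a diagnosis call might look as follows. Because the sketch above has a single output channel, a sigmoid stands in here for the normalized exponential mapping described below; the 0.5 threshold follows the preset-value example given later:

    import torch

    def diagnose(model: torch.nn.Module, sample: torch.Tensor) -> str:
        """Diagnose a fresh oil sample with the trained model; no retraining needed."""
        model.eval()
        with torch.no_grad():
            p = model(sample)              # real-number-domain result
            v = torch.sigmoid(p).item()    # mapped into [0, 1]
        # Closer to 1 -> lower fault level; closer to 0 -> higher fault level.
        return "prompt: low fault level" if v >= 0.5 else "alarm: high fault level"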
In the embodiment of the application, gas production rate data and content data of the characteristic gases dissolved in transformer oil in different states are obtained through real-time analysis, giving a first data set and a second data set with different state labels; these are used to train and test the model, finally yielding the transformer fault levels in different states. The method therefore has the following advantages: the gas composition, content and production rate in the transformer oil can be obtained accurately at any time; the transformer can be diagnosed without being taken out of operation; the degree of a transformer fault can be determined; diagnosis does not require the experience of professional technicians; and the efficiency of transformer fault diagnosis is high.
In one possible implementation of step 120, preprocessing the first data set and the second data set with different state labels to obtain a multi-modal data set includes: sequentially performing data cleaning, differential denoising, data outlier processing and standardization on the first and second data sets with the different state labels to obtain the multi-modal data set.
It should be noted that sequentially performing these four operations on the first and second data sets with different state labels eliminates the noise and anomalies present in their data and eliminates the differences between attributes of the data in the two data sets.
It is further noted that the multi-modal data set includes the processed first and second data sets with different state labels; that is, it includes the gas production rate data and content data of the characteristic gases dissolved in the transformer oil in different states.
In the embodiment of the application, sequentially performing data cleaning, differential denoising, data outlier processing and standardization on the first and second data sets with different state labels removes the noise, anomalies and attribute differences from the data, and thereby avoids the situation where problems in the training data degrade the trained hyper-parameter-updated deep learning model and make subsequent transformer fault diagnosis inaccurate.
In one possible implementation of step 130, dividing the multi-modal data set into a training set and a test set and dividing the training set into a model training set and a model verification set includes: dividing 80% of the multi-modal data set into the training set and 20% into the test set; and dividing 80% of the training set into the model training set and 20% into the model verification set.
It should be noted that the 80/20 division into training and test sets allows the model to be trained well (determining the parameters of the fitted curve) while the learner's generalization error can still be evaluated; if the test set were too small, the estimate of the generalization error would be inaccurate.
It should be further noted that the 80/20 division of the training set into a model training set and a model verification set makes it possible to dynamically track the training and verification loss values, monitor the training process and prevent overfitting.
In the embodiment of the application, the multi-modal data set is thus partitioned in a balanced, reasonable way: 80% into the training set and 20% into the test set, then 80% of the training set into the model training set and 20% into the model verification set. This avoids an inaccurate estimate of the model's generalization error, allows the model to be trained well, and prevents overfitting during training.
In one possible implementation of step 140, the loss function used when training the attention-mechanism LSTM self-encoding neural network is a cross-entropy loss function, whose expression is
$$L = -\frac{1}{n}\sum_{j=1}^{n}\left[\,y_j \log p_j + (1-y_j)\log(1-p_j)\,\right]$$
where L is the cross entropy, n is the number of data in the model training set, y_j is the value of the state label corresponding to the j-th state, the value range of y_j is [0,1] (the closer to 1, the lower the transformer fault level; the closer to 0, the higher), p_j is the real-number-domain result for the j-th state, and log is the logarithm with base e (the natural constant).
It should be noted that when the cross-entropy loss function is used for training the attention-mechanism LSTM self-encoding neural network, the state labels corresponding to the different states must be assigned values in the range [0,1]: the closer the value is to 1, the lower the fault level, and the closer to 0, the higher. Once the hyper-parameter-updated deep learning model has been obtained through training and testing, subsequent transformer fault diagnosis needs no state label: the acquired gas production rate data and content data of the characteristic gases dissolved in the transformer oil are input into the model, the output real-valued vector is obtained, and the transformer fault level is determined from it.
In the embodiment of the application, the cross-entropy loss function is adopted when training the attention-mechanism LSTM self-encoding neural network, which makes the model training process converge faster and improves the training performance of the model.
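For intuition, a tiny worked example of this loss under the binary form given above (the sample values are hypothetical):

    import math

    # One low-fault-level sample (label y = 1.0) predicted at p = 0.9 and one
    # high-fault-level sample (y = 0.0) predicted at p = 0.2; n = 2.
    losses = [-(y * math.log(p) + (1 - y) * math.log(1 - p))
              for y, p in [(1.0, 0.9), (0.0, 0.2)]]
    L = sum(losses) / len(losses)   # (0.105 + 0.223) / 2 ≈ 0.164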
In one possible implementation of step 150, inputting the test set into the hyper-parameter-updated deep learning model to obtain the real-valued vectors in different states includes: inputting the test set into the hyper-parameter-updated deep learning model to obtain real-number-domain results in different states; and mapping the real-number-domain results in the different states with the activation function to obtain the real-valued vectors in the different states.
It should be noted that when the test set is input into the hyper-parameter-updated deep learning model, real-number-domain results in different states are obtained, and the model then maps these results with an activation function to obtain the real-valued vectors in the different states.
It should be further noted that the value range of the real-valued vector is [0,1]: the closer its value is to 1, the lower the fault level of the transformer, and the closer to 0, the higher. Equivalently, read as a health state, the closer the value is to 1, the better the health state of the transformer, and the closer to 0, the worse.
In the embodiment of the application, real-number-domain results in different states are obtained by inputting the test set into the hyper-parameter-updated deep learning model, and the activation function maps them into real-valued vectors in different states, so that the fault level of the transformer can be determined from the values of those vectors, making it convenient to judge the current state of the transformer and for an operator to act on it.
In one possible implementation, the activation function is a normalized exponential function, and the real-valued vectors in the different states are determined using the formula
$$\sigma(p)_j = \frac{e^{p_j}}{\sum_{k=1}^{N} e^{p_k}}$$
where σ(p)_j is the component of the real-valued vector for the j-th state, the value range of σ(p)_j is [0,1] (the closer to 1, the lower the transformer fault level; the closer to 0, the higher), p_j is the real-number-domain result for the j-th state, N is the number of characteristic gas types, and e is the natural constant.
The value range of the real-valued vector is [0,1]: the closer its value is to 1, the lower the fault level of the transformer, and the closer to 0, the higher. Equivalently, read as a health state, the closer the value is to 1, the better the health state of the transformer, and the closer to 0, the worse.
In the embodiment of the application, adopting the normalized exponential activation function allows the transformer fault level to be determined from the value of the real-valued vector, making it convenient to judge the current state of the transformer; the normalized exponential activation function also effectively improves the precision of the real-valued vector, so the fault level can be determined more accurately from its value.
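A direct transcription of this normalized exponential function (the input values are hypothetical):

    import numpy as np

    def normalized_exponential(p: np.ndarray) -> np.ndarray:
        """sigma(p)_j = exp(p_j) / sum_{k=1..N} exp(p_k)."""
        e = np.exp(p - p.max())   # subtracting the max improves numerical stability
        return e / e.sum()

    p = np.array([2.1, 0.3, -1.2])        # real-number-domain results, N = 3
    print(normalized_exponential(p))      # components lie in [0, 1] and sum to 1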
In one possible implementation, step 150 further includes: if the real-valued vector is greater than or equal to a preset value, outputting prompt information; if the real-valued vector is smaller than the preset value, outputting alarm information; the preset value is greater than 0 and less than 1.
The preset value, whose range is (0,1), distinguishes whether the fault level is high or low. In one feasible implementation the preset value may be 0.5: when the real-valued vector is greater than or equal to 0.5, the degree of the transformer fault is relatively low, so only prompt information needs to be output; when it is smaller than 0.5, the degree of the fault is relatively high, so alarm information needs to be output so that an operator can handle the fault as soon as possible.
It should be further noted that the prompt information and the alarm information are output to notify an operator so that the transformer fault can be handled in time; they may be shown on a display screen or announced by voice through a loudspeaker, for example.
In the embodiment of the application, prompt information is output when the real-valued vector is greater than or equal to the preset value and alarm information is output when it is smaller, so that an operator can determine from the output whether the transformer fault must be handled immediately, deal with it promptly, and avoid a grid failure caused by a fault that is not handled in time.
Referring to fig. 2, a schematic structural diagram of a transformer fault diagnosis apparatus according to an embodiment of the present application is shown, where the apparatus 210 includes:
the model data determining module 211 is configured to acquire gas production rate data of the characteristic gases dissolved in transformer oil in different states, determine first data sets with different state labels, acquire content data of the characteristic gases dissolved in the transformer oil in different states, and determine second data sets with different state labels.
The data set processing module 212 is configured to preprocess the first data set and the second data set with different state labels to obtain a multi-modal data set.
And a data set partitioning module 213, configured to partition the multi-modal data set into a training set and a test set, and partition the training set into a model training set and a model verification set.
The model determining module 214 is configured to establish an attention-mechanism LSTM self-encoding neural network, input the model training set into it for training to obtain a transformer fault diagnosis deep learning model, and input the model verification set into the transformer fault diagnosis deep learning model to obtain a hyper-parameter-updated deep learning model.
The fault level determining module 215 is configured to input the test set into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states, and to determine the fault levels of the transformer in the different states according to those vectors.
In the embodiment of the present application, for the contents of the model data determining module 211, the data set processing module 212, the data set dividing module 213, the model determining module 214, and the fault level determining module 215, reference may be made to the embodiment shown in fig. 1, which is not described again here.
It should be noted that the apparatus 210 further includes other modules corresponding to the contents in the foregoing embodiments, which are not described herein again.
In the embodiment of the application, gas production rate data and content data of the characteristic gases dissolved in transformer oil in different states are obtained through real-time analysis, giving a first data set and a second data set with different state labels; these are used to train and test the model, finally yielding the transformer fault levels in different states. The apparatus therefore has the following advantages: the gas composition, content and production rate in the transformer oil can be obtained accurately at any time; the transformer can be diagnosed without being taken out of operation; the degree of a transformer fault can be determined; diagnosis does not require the experience of professional technicians; and the efficiency of transformer fault diagnosis is high.
In an embodiment of the present application, a computer-readable storage medium is provided, which stores a computer program, and when the computer program is executed by a processor, the computer program causes the processor to execute a transformer fault diagnosis method in the above method embodiments.
In an embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the transformer fault diagnosis method of the above method embodiments.
FIG. 3 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may be specifically a terminal, a server, or a gateway. As shown in fig. 3, the computer device includes a processor, a memory, and a network interface connected by a system bus.
Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program, which, when executed by the processor, causes the processor to carry out the steps of the above-described method embodiments. The internal memory may also store a computer program, which, when executed by the processor, causes the processor to perform the steps of the above-described method embodiments. It will be appreciated by those skilled in the art that the configuration shown in fig. 3 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed.
Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SynchLink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A transformer fault diagnosis method, characterized in that the method comprises:
acquiring gas production rate data of the characteristic gases dissolved in transformer oil in different states and determining first data sets with different state labels; acquiring content data of the characteristic gases dissolved in the transformer oil in different states and determining second data sets with different state labels;
preprocessing the first data set and the second data set with different state labels to obtain a multi-modal data set;
dividing the multi-modal data set into a training set and a test set, and dividing the training set into a model training set and a model verification set;
establishing an attention-mechanism LSTM self-encoding neural network, inputting the model training set into the attention-mechanism LSTM self-encoding neural network for training to obtain a transformer fault diagnosis deep learning model, and inputting the model verification set into the transformer fault diagnosis deep learning model to obtain a hyper-parameter-updated deep learning model;
and inputting the test set into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states, and determining the fault levels of the transformer in the different states according to the real-valued vectors in the different states.
2. The method of claim 1, wherein preprocessing the first data set and the second data set with different state labels to obtain a multi-modal data set comprises:
sequentially performing data cleaning, differential denoising, data outlier processing and standardization on the first data set and the second data set with the different state labels to obtain the multi-modal data set.
3. The method of claim 1 or 2, wherein the dividing the multi-modal dataset into a training set and a test set, and the dividing the training set into a model training set and a model validation set comprises:
dividing 80% of the multi-modal data set into the training set and 20% of the multi-modal data set into the test set;
and dividing 80% of the training set into the model training set and 20% of the training set into the model verification set.
4. The method of claim 1, wherein the loss function used when training the attention-mechanism LSTM self-encoding neural network is a cross-entropy loss function;
the expression of the cross entropy loss function is
$$L = -\frac{1}{n}\sum_{j=1}^{n}\left[\,y_j \log p_j + (1-y_j)\log(1-p_j)\,\right]$$
where L is the cross entropy, n is the number of data in the model training set, y_j is the value of the state label corresponding to the j-th state, the value range of y_j is [0,1] (the closer to 1, the lower the transformer fault level; the closer to 0, the higher), p_j is the real-number-domain result for the j-th state, and log is the logarithm with base e (the natural constant).
5. The method of claim 1, wherein inputting the test set into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states comprises:
inputting the test set into the hyper-parameter-updated deep learning model to obtain real-number-domain results in different states;
and mapping the real-number-domain results in the different states with an activation function to obtain the real-valued vectors in the different states.
6. The method of claim 5, wherein the activation function is a normalized exponential function;
determining the real-valued vectors in the different states using the formula
$$\sigma(p)_j = \frac{e^{p_j}}{\sum_{k=1}^{N} e^{p_k}}$$
where σ(p)_j is the component of the real-valued vector for the j-th state, the value range of σ(p)_j is [0,1] (the closer to 1, the lower the transformer fault level; the closer to 0, the higher), p_j is the real-number-domain result for the j-th state, N is the number of characteristic gas types, and e is the natural constant.
7. The method of claim 6, further comprising:
if the real-valued vector is greater than or equal to a preset value, outputting prompt information;
if the real-valued vector is smaller than the preset value, outputting alarm information;
wherein the preset value is greater than 0 and less than 1.
8. A transformer fault diagnosis apparatus, characterized in that the apparatus comprises:
a model data determining module, configured to acquire gas production rate data of the characteristic gases dissolved in transformer oil in different states, determine first data sets with different state labels, acquire content data of the characteristic gases dissolved in the transformer oil in different states, and determine second data sets with different state labels;
a data set processing module, configured to preprocess the first data set and the second data set with the different state labels to obtain a multi-modal data set;
a data set dividing module, configured to divide the multi-modal data set into a training set and a test set and to divide the training set into a model training set and a model verification set;
a model determining module, configured to establish an attention-mechanism LSTM self-encoding neural network, input the model training set into the attention-mechanism LSTM self-encoding neural network for training to obtain a transformer fault diagnosis deep learning model, and input the model verification set into the transformer fault diagnosis deep learning model to obtain a hyper-parameter-updated deep learning model;
and a fault level determining module, configured to input the test set into the hyper-parameter-updated deep learning model to obtain real-valued vectors in different states, and determine the fault levels of the transformer in the different states according to the real-valued vectors in the different states.
9. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 7.
10. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
CN202211278679.1A (filed 2022-10-19) · CN115563539A (pending) · Transformer fault diagnosis method and device, storage medium and equipment

Priority Applications (1)

Application Number: CN202211278679.1A · Publication: CN115563539A (en) · Priority Date: 2022-10-19 · Filing Date: 2022-10-19 · Title: Transformer fault diagnosis method and device, storage medium and equipment

Applications Claiming Priority (1)

Application Number: CN202211278679.1A · Publication: CN115563539A (en) · Priority Date: 2022-10-19 · Filing Date: 2022-10-19 · Title: Transformer fault diagnosis method and device, storage medium and equipment

Publications (1)

Publication Number: CN115563539A · Publication Date: 2023-01-03

Family

Family ID: 84747333

Family Applications (1)

Application Number: CN202211278679.1A · Status: Pending · Publication: CN115563539A (en) · Priority Date: 2022-10-19 · Filing Date: 2022-10-19 · Title: Transformer fault diagnosis method and device, storage medium and equipment

Country Status (1)

Country: CN · CN115563539A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number: CN117725529A * · Priority date: 2024-02-18 · Publication date: 2024-03-19 · Assignee: 南京邮电大学 · Title: Transformer fault diagnosis method based on multi-mode self-attention mechanism
Publication number: CN117725529B * · Priority date: 2024-02-18 · Publication date: 2024-05-24 · Assignee: 南京邮电大学 · Title: Transformer fault diagnosis method based on multi-mode self-attention mechanism


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination