CN109765333A - A transformer fault diagnosis method based on the GoogleNet model - Google Patents


Info

Publication number
CN109765333A
CN109765333A
Authority
CN
China
Prior art keywords
layer
transformer
googlenet
fault
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811482932.9A
Other languages
Chinese (zh)
Inventor
陈硕
刘树吉
乔林
吴赫
冉冉
李亮
周巧妮
郭哲强
吕旭明
卢彬
李静
王超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd
Original Assignee
Nanjing University of Aeronautics and Astronautics
Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics, Information and Telecommunication Branch of State Grid Liaoning Electric Power Co Ltd filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201811482932.9A priority Critical patent/CN109765333A/en
Publication of CN109765333A publication Critical patent/CN109765333A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides a transformer fault diagnosis method based on the GoogleNet model, comprising the following steps: (1) first obtain the factors that cause equipment failure, consider which data can influence equipment faults, and determine the data to be collected and the feature space; (2) determine the fault types the equipment may develop, forming the state space; (3) monitor the transformer state, collect data from the transformer to obtain its features and states, build a model using a neural network, and train the model with the collected data; (4) use the trained model to diagnose faults from the equipment's features. The invention models the concentrations of gases dissolved in transformer oil and uses the GoogleNet model to optimize the transformer fault detection scenario, achieving high accuracy in transformer fault detection.

Description

A transformer fault diagnosis method based on the GoogleNet model
Technical field
The present invention relates to a transformer fault diagnosis method based on the GoogleNet model, and belongs to the technical field of equipment fault diagnosis.
Background art
Transformers are the core components through which electric utilities provide continuous power to users, and they play an important role in power transmission, but their failure risk increases with aging, and transformer faults often cause widespread network interruptions. Replacing a power transformer is very expensive: a single unit may cost up to one million dollars, and delivery times are long. Therefore, every electric utility must detect transformer faults effectively. Utilities need effective methods, such as intelligent fault diagnosis algorithms, to reduce operating costs and asset failure rates.
At present, most utilities rely on experts to analyze the data collected from transformers and use conventional methods to determine transformer state. When no expert resources are available, diagnosis can become difficult. In addition, traditional methods sometimes cannot produce comprehensive results. In the field of fault diagnosis, common neural network models include deep belief networks (DBN), convolutional neural networks (CNN), stacked autoencoders (SAE), and recurrent neural networks (RNN).
Prasanna et al. describe a DBN-based multi-sensor health diagnosis method comprising three successive stages: 1) defining health states and preprocessing the training and test data; 2) training a DBN classification and recognition model to diagnose the predefined multi-sensor health states; 3) validating the model on a test data set. That work uses the DBN to classify sensor health-state features rather than to realize DBN-based feature representation and extraction, which still requires further study, but it is a major step toward DBN-based fault diagnosis methods.
Wei Dong proposes a CNN structure with two softmax classifiers that divides the output sequence into two classes, realizing classification for two dependent classification problems: a single network jointly judges internal versus external faults and performs fault phase selection, achieving a greater degree of weight sharing. Chen Zhiqiang uses convolutional neural networks for gearbox fault detection and classification. Janssens realizes fault detection and recognition without expert knowledge using a CNN, successfully solving two diagnoses that conventional methods struggle with in rotating machinery: outer-ring raceway faults and lubricant degradation. Qiu Lida, based on deep learning models, proposes a data fusion algorithm combining SAE with a clustering protocol; the algorithm builds a feature-extraction and classification model within each cluster, fuses homogeneous features before sending them to the aggregation node, and improves the data fusion performance of wireless sensor networks. Demethual et al. address raw-material handling systems with unique nonlinear problems that traditional multi-target methods cannot solve, performing fault diagnosis from measured signals; they propose a feature extraction algorithm combining diffusion maps (DM), locally linear embedding (LLE), and autoencoders (AE), and classify the encoded signals with the Gustafson-Kessel and k-medoid algorithms. The results show that the fault diagnosis accuracy of this method improves by 90% over conventional methods. Sun Wenjun realizes asynchronous-motor fault classification with a deep neural network method using a sparse autoencoder: exploiting the unsupervised feature-extraction advantage of sparse autoencoding to learn fault features, with denoising coding effectively suppressing distractors during feature extraction and improving the robustness of the feature representation, the SAE-extracted features are used to train a neural network to identify asynchronous-motor faults; experiments show the unique advantage of deep-learning-based fault diagnosis in asynchronous-motor fault diagnosis. Similar research also extends to the fault diagnosis of complex systems such as aero-engines, nuclear power plants, wind turbine units, rolling bearings, transformers, robots, and rotating machinery, with good results. Talebi et al. use two kinds of RNN to detect and isolate unknown sensor or actuator faults when the states and sensors of a nonlinear system are uncertain or disturbed, applying the approach to low-Earth-orbit satellites; extensive simulation experiments verify the validity and stability of the method.
Summary of the invention
The purpose of the invention is to design different neural network models to perform fault diagnosis on transformers. For transformer diagnosis, dissolved gas analysis is used together with a common multilayer perceptron model to model the concentrations of gases dissolved in transformer oil, constructing a transformer fault diagnosis method based on the GoogleNet model.
The technical solution adopted by the present invention is as follows:
First, obtain the factors that cause equipment failure, consider which data can influence equipment faults, and determine the data to be collected and the feature space. Determine the fault types the equipment may develop, forming the state space. Monitor the transformer state, collect data from the transformer, and obtain its features and states. Build a model using a neural network and train it with the collected data. Use the trained model to diagnose faults from the equipment's features. A MyNet model is built with reference to the GoogleNet structure; the key of GoogleNet is the Inception module, and the Inception module used in this patent is the later version, in which one 5×5 convolution is replaced by two 3×3 convolutions. The 1×1 convolutions inside the Inception module compress the data and may cause information loss, so using Inception modules too early in the network can hurt performance. Therefore, ordinary convolutional layers are used at the beginning of the network. The overall structure of the network has a maximum of 1024 convolution kernels. In convolutional layers, k denotes the kernel size, s the stride, and f the number of kernels; in pooling layers, k denotes the pooling kernel size and s the stride. Batch normalization is not shown explicitly in the table; by default, batch normalization follows every convolutional layer.
The fault diagnosis uses dissolved gas analysis (DGA): by analyzing the concentrations of various gases dissolved in transformer oil, the health state and fault type of the transformer are diagnosed. By detecting gases such as H2, C2H2, C2H4, C2H6, CH4, and CO at various concentrations, fault information such as partial discharge, low-energy discharge, high-energy discharge, low-temperature overheating, and high-temperature overheating is captured.
The fault diagnosis uses a neural network to perform fault detection on the transformer. The concentrations of the relevant transformer gases and the fault type are input to train the neural network; after training, fault diagnosis is performed on the training and validation sets, i.e., the relevant gas concentrations are input, the model outputs the fault type, and the accuracy is finally computed. Specifically, a fully connected multilayer perceptron is used, with different numbers of hidden layers and neurons: the number of hidden layers is 1 or 2, and the number of neurons per hidden layer can be 3, 6, or 12, giving 6 combinations in total.
The neural network structure is a fully connected multilayer perceptron. The input layer has 6 neurons, layers 1 and 2 each have 12 neurons, layer 2 is followed by a dropout layer with drop rate 0.3, and the output layer is a softmax classifier with 4 neurons. The hyperparameters are: learning rate 7e-2, learning-rate decay rate 0.95 applied once every 20 passes over the training set, batch size 32, 1000 passes over the training set, and Xavier weight initialization.
Performing fault diagnosis on the transformer based on GoogleNet is more accurate and more flexible than the no-coding ratio method. The no-coding ratio method can only diagnose specific fault types, whereas the neural network has no such limitation: any fault type contained in the data set can be diagnosed.
The multilayer perceptron contains an input layer, hidden layers, and an output layer. Typically there is one input layer and one output layer, while the number of hidden layers is unrestricted: one layer or several. Each layer can have multiple nodes called neurons; each layer takes the output of the previous layer as input, and its output becomes the input of the next layer. The usual operation of a multilayer perceptron is first a linear multiplication and accumulation, then a nonlinear activation function. Note that in this patent, variables in bold denote vectors or matrices. Suppose that in some layer the input is x = (x1, x2, ..., xn), the parameters are w, and the activation function is g; then the output of each neuron is yi = g(Σj wij·xj + bi), and the output of the whole layer is y = g(w·xT + b). A layer in which every neuron has its own corresponding parameters is called a fully connected layer, and a fully connected layer has the serious drawback that the number of parameters is too large: it equals the number of neurons m in the previous layer multiplied by the number of neurons n in this layer, i.e., m×n. Suppose the input of the network is a 1000×1000 RGB image and the first layer has 1000 neurons; then the number of parameters of this layer is 3×1000^3, occupying a great deal of resources.
Description of the drawings
Fig. 1 shows fault diagnosis using a neural network;
Fig. 2 shows two methods of fault classification using a neural network;
Fig. 3 shows the structure of the multilayer perceptron;
Fig. 4 shows an example of a two-dimensional convolution;
Specific embodiment
To make the technical means, creative features, objectives, and effects achieved by the present invention easy to understand, the invention is further explained below with reference to specific embodiments.
The specific workflow of a transformer fault diagnosis method based on the GoogleNet model is shown in Fig. 1.
Using a neural network to diagnose equipment faults roughly comprises the following steps:
A. First, obtain the factors that cause equipment failure, consider which data can influence equipment faults, and determine the data to be collected and the feature space;
B. Determine the fault types the equipment may develop, forming the state space;
C. Monitor the transformer state, collect data from the transformer, and obtain its features and states; build a model using a neural network, and train the model with the collected data;
D. Use the trained model to diagnose faults from the equipment's features.
The fault diagnosis uses dissolved gas analysis: by analyzing the concentrations of various gases dissolved in transformer oil, the health state and fault type of the transformer are diagnosed. By detecting gases such as H2, C2H2, C2H4, C2H6, CH4, and CO at various concentrations, fault information such as partial discharge, low-energy discharge, high-energy discharge, low-temperature overheating, and high-temperature overheating is captured.
Moreover, the fault diagnosis uses a neural network to perform fault detection on the transformer. The concentrations of the relevant transformer gases and the fault type are input to train the neural network; after training, fault diagnosis is performed on the training and validation sets, i.e., the relevant gas concentrations are input, the model outputs the fault type, and the accuracy is finally computed. Specifically, a fully connected multilayer perceptron is used, with different numbers of hidden layers and neurons: the number of hidden layers is 1 or 2, and the number of neurons per hidden layer can be 3, 6, or 12, giving 6 combinations in total.
In addition, the neural network structure is a fully connected multilayer perceptron. The input layer has 6 neurons, layers 1 and 2 each have 12 neurons, layer 2 is followed by a dropout layer with drop rate 0.3, and the output layer is a softmax classifier with 4 neurons. The hyperparameters are: learning rate 7e-2, learning-rate decay rate 0.95 applied once every 20 passes over the training set, batch size 32, 1000 passes over the training set, and Xavier weight initialization.
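The layer sizes just described (6 inputs, two hidden layers of 12 neurons, dropout 0.3 after the second hidden layer, a 4-way softmax output, Xavier initialization) can be sketched as a NumPy forward pass. This is a minimal illustration, not the patent's implementation: the ReLU activation and the inverted-dropout scaling are assumptions the patent does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=1, keepdims=True)

def xavier(m, n):
    # Xavier/Glorot uniform initialization for an m-by-n weight matrix
    limit = np.sqrt(6.0 / (m + n))
    return rng.uniform(-limit, limit, size=(m, n))

# 6 inputs -> 12 -> 12 -> 4-way softmax; dropout 0.3 applied only in training
W1, b1 = xavier(6, 12), np.zeros(12)
W2, b2 = xavier(12, 12), np.zeros(12)
W3, b3 = xavier(12, 4), np.zeros(4)

def forward(x, train=False, drop_rate=0.3):
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    if train:  # inverted dropout: kept units are scaled by 1/(1-p)
        mask = rng.random(h2.shape) >= drop_rate
        h2 = h2 * mask / (1.0 - drop_rate)
    return softmax(h2 @ W3 + b3)

# one batch of 32 samples, each a vector of 6 gas concentrations
probs = forward(rng.random((32, 6)))
print(probs.shape)                           # (32, 4)
print(np.allclose(probs.sum(axis=1), 1.0))   # True: each row is a distribution
```

At inference time (train=False) the dropout mask is skipped, which matches the usual dropout convention.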
The GoogleNet-based transformer fault diagnosis described here is more accurate and more flexible than the no-coding ratio method. The no-coding ratio method can only diagnose specific fault types, whereas the neural network has no such limitation: any fault type contained in the data set can be diagnosed.
The classifier used for neural-network fault diagnosis is a softmax classifier, which outputs the probability of each type: pi = e^zi / Σj e^zj, where zi is the i-th output logit.
The type with the highest probability is taken as the type predicted by the model. The neural network ends with a loss function that measures the error between the model output and the true output.
The loss function of softmax is the cross-entropy: L = -Σi yi·log(ŷi), where ŷ is the predicted probability distribution and y the true label. The process of model training is to reduce this error. Gradient descent computes the derivative of the loss function with respect to each parameter, ∂L/∂θ, and each parameter is then decremented by this derivative multiplied by a coefficient: θ ← θ − α·∂L/∂θ, where α is called the learning rate.
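The softmax probabilities, the cross-entropy loss, and the gradient-descent update can be demonstrated on a toy single-layer classifier. This is a self-contained sketch of the training rule, not the patent's network; the input values and class count here are arbitrary.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, y_onehot):
    # L = -sum_i y_i * log(p_i)
    return -np.sum(y_onehot * np.log(p))

# toy single-layer softmax classifier with logits z = W x
x = np.array([0.5, 1.5, 1.0])
W = np.zeros((4, 3))
y = np.array([0.0, 1.0, 0.0, 0.0])    # true class is 1

alpha = 0.1                            # learning rate
for _ in range(200):
    p = softmax(W @ x)
    # for softmax + cross-entropy, dL/dz = p - y; the chain rule gives dL/dW
    grad_W = np.outer(p - y, x)
    W = W - alpha * grad_W             # theta <- theta - alpha * dL/dtheta

p = softmax(W @ x)
print(p.argmax())                      # 1: the true class wins after training
print(cross_entropy(p, y) < np.log(4)) # True: below the uniform-guess loss
```

The initial loss equals log 4 (uniform probabilities from zero weights); each update lowers it, which is exactly the "reduce this error" process described above.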
The multilayer perceptron contains an input layer, hidden layers, and an output layer. Typically there is one input layer and one output layer, while the number of hidden layers is unrestricted: one layer or several;
Each layer can have multiple nodes called neurons; each layer takes the output of the previous layer as input, and its output becomes the input of the next layer. The usual operation of a multilayer perceptron is first a linear multiplication and accumulation, then a nonlinear activation function;
Suppose that in some layer the input is x = (x1, x2, ..., xn), the parameters are w, and the activation function is g; then the output of each neuron is yi = g(Σj wij·xj + bi), and the output of the whole layer is y = g(w·xT + b). A layer in which every neuron has its own corresponding parameters is called a fully connected layer; its serious drawback is that the number of parameters is too large. The number of parameters of a fully connected layer equals the number of neurons m in the previous layer multiplied by the number of neurons n in this layer, i.e., m×n;
Suppose the input of the network is a 1000×1000 RGB image and the first layer has 1000 neurons; then the number of parameters of this layer is 3×1000^3, occupying a great deal of resources.
GoogleNet is a convolutional neural network model proposed by a Google team in 2014. While its network is deeper and its performance higher, GoogleNet also maintains efficient computation. GoogleNet has 22 layers in total, has no fully connected layers, and has only 5 million parameters, 1/12 of the earlier AlexNet. The key to GoogleNet achieving both high performance and high efficiency is its Inception module;
The Inception module is a good network topology: a network within a network. GoogleNet builds the whole network model by stacking Inception modules. In convolutional operations the kernel size is a hyperparameter and can be 3×3, 5×5, or 7×7; different sizes may yield different effects and performance, and hyperparameters have always been tuned and decided manually. The main idea of the Inception module is to let the neural network decide for itself, letting the network learn these hyperparameters through training on the data set;
The method is, in a given layer, to perform 1×1, 3×3, and 5×5 convolutions and max pooling simultaneously, then stack the results laterally as the output of that layer. But this causes the number of parameters to explode, occupying huge memory with low computational efficiency. The solution is to shorten the third dimension of the data with 1×1 convolutions; in the image domain the third dimension is also called the channel dimension. The number of kernels determines the number of output channels of a convolution, so as long as the number of kernels is smaller than the number of input channels, the data are compressed and the number of parameters is reduced.
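The parameter savings from the 1×1 "bottleneck" convolution can be checked with simple arithmetic. The channel counts below are illustrative, not taken from the patent; the point is only that compressing channels before a large kernel shrinks the parameter count.

```python
# Parameter count of a k-by-k convolution mapping c_in channels to c_out
# channels (biases ignored): k * k * c_in * c_out.
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

# direct 5x5 convolution: 256 channels in, 64 out
direct = conv_params(5, 256, 64)

# bottleneck: 1x1 convolution compresses 256 -> 32 channels, then 5x5 to 64
bottleneck = conv_params(1, 256, 32) + conv_params(5, 32, 64)

print(direct)               # 409600
print(bottleneck)           # 59392
print(bottleneck < direct)  # True: the 1x1 compression cuts parameters ~7x
```

This is the mechanism the text describes: because the number of 1×1 kernels (32 here) is smaller than the input channel count (256), the data are compressed before the expensive 5×5 convolution.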
The final goal of neural-network fault diagnosis is to diagnose the health state and fault type of the equipment. The classifier used in this patent is a softmax classifier, which outputs the probability of each type: pi = e^zi / Σj e^zj.
The type with the highest probability is taken as the type predicted by the model. The neural network ends with a loss function that measures the error between the model output and the true output. The loss function of softmax is the cross-entropy: L = -Σi yi·log(ŷi).
The process of model training is to reduce this error; this process is called optimization. The optimization method in neural networks is gradient descent: compute the derivative of the loss function with respect to each parameter, ∂L/∂θ, then decrement each parameter by this derivative multiplied by a coefficient, θ ← θ − α·∂L/∂θ, where α is called the learning rate.
Neural networks compute the derivatives of the parameters by backpropagation. The training process of a neural network first computes each layer's output and the loss value from the lower layers up, then differentiates from the top layer down according to the chain rule. Classifying equipment faults with neural networks falls broadly into two classes: one directly trains a multilayer classification model (supervised learning); the other first performs feature extraction (self-supervised learning) and then trains a single-layer classifier (supervised learning). Both methods are introduced below.
Directly training a multilayer classification model means direct supervised learning: a feedforward neural network with a classifier is trained with data and labels. This patent uses voltage and current data with corresponding labels to judge internal versus external faults of the transmission line and to perform fault phase selection. Two different models are used: a fully connected multilayer perceptron and a convolutional neural network. Both models are described below.
A multilayer perceptron (MLP) is a feedforward neural network whose goal is to train a function f(x) that approximates the true function model as closely as possible. For example, training a classifier model maps f(x) to a class c. The model of a multilayer perceptron is fixed; what is trained are its parameters.
The multilayer perceptron contains an input layer, hidden layers, and an output layer. Typically there is one input layer and one output layer, while the number of hidden layers is unrestricted: one layer or several. Each layer can have multiple nodes called neurons; each layer takes the output of the previous layer as input, and its output becomes the input of the next layer.
The usual operation of a multilayer perceptron is first a linear multiplication and accumulation, then a nonlinear activation function. Note that in this patent, variables in bold denote vectors or matrices. Suppose that in some layer the input is x = (x1, x2, ..., xn), the parameters are w, and the activation function is g; then the output of each neuron is:
yi = g(Σj wij·xj + bi) (3)
The output of the whole layer is:
y = g(w·xT + b) (4)
A layer in which every neuron has its own corresponding parameters is called a fully connected layer, and a fully connected layer has the serious drawback that the number of parameters is too large. The number of parameters of a fully connected layer equals the number of neurons m in the previous layer multiplied by the number of neurons n in this layer, i.e., m×n. Suppose the input of the network is a 1000×1000 RGB image and the first layer has 1000 neurons; then the number of parameters of this layer is 3×1000^3, occupying a great deal of resources. Hence convolutional neural networks arose (LeCun, 1989).
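The m×n parameter count for the fully connected example above works out as follows; the arithmetic is straightforward but makes the scale concrete.

```python
# A 1000x1000 RGB image has 3 values per pixel, so the previous "layer"
# has m = 1000 * 1000 * 3 values; a fully connected first layer with
# n = 1000 neurons needs one weight per (input value, neuron) pair.
m = 1000 * 1000 * 3
n = 1000
params = m * n                 # m x n weights

print(params)                  # 3000000000
print(params == 3 * 1000**3)   # True: the 3 x 1000^3 figure from the text
```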
Convolutional neural networks (CNN) are neural networks for processing data with a certain spatial structure, such as image data (which has a two-dimensional structure). Convolution is also a linear multiply-and-accumulate operation, but it differs from the linear operation of the multilayer perceptron described above. A convolutional network is any neural network that substitutes convolution for the ordinary linear operation in at least one of its layers.
The parameters of a convolutional neural network are called convolution kernels. Convolution can be one-, two-, or three-dimensional; in a two-dimensional convolution the input, output, and kernel are all two-dimensional. The convolution operation slides the kernel over the input matrix: each kernel parameter is multiplied by the input at the corresponding position and the products are summed, yielding the output at one position. Each slide computes one output, and the complete output is finally obtained, as shown in Fig. 4. Suppose the kernel dimension is k×k; then the output is: y(i,j) = Σm Σn x(i+m, j+n)·w(m,n), with m, n = 0, ..., k−1. (5)
There can be multiple convolution kernels, and the length of the third dimension of the convolution result equals the number of kernels. Convolutional neural networks improve efficiency and reduce resource consumption through two important features: sparse connectivity and parameter sharing. As mentioned earlier, if a layer of the network has a inputs and b outputs, a fully connected layer needs a×b parameters and the algorithm's time complexity is O(a×b). In a convolutional neural network, if we limit the number of connections each output has to c, then the sparse connection scheme needs only c×b operations and O(c×b) time. In many practical applications, keeping c several orders of magnitude smaller than b already yields good performance. Parameter sharing means that the parameters of different neurons in the neural network are identical. The parameter sharing of convolution lets us train only one parameter set instead of learning an independent parameter set for every neuron. Although this does not reduce the algorithm's time complexity, it reduces the number of parameters to c, far smaller than a×b, significantly reducing storage demand [11]. Besides convolution, convolutional neural networks have another very important operation: pooling. Pooling comes in max pooling and average pooling: max pooling takes the maximum over an adjacent region, and average pooling takes the average over an adjacent region. Like convolution, the pooling window slides over the input matrix, and each slide computes the value of one neuron. Pooling often follows convolution and can reduce the dimensionality of the data; for example, with a 2×2 pooling kernel and stride 2, the height and width of the data are both halved and the amount of data is reduced to 1/4. An important role of pooling is to maintain invariance of the input. For example, max pooling takes the maximum in a region; when values other than the maximum change, the pooled output stays unchanged, so small translations do not change the pooling output. When we only care whether certain features occur and not where they occur, this local invariance of pooling is very useful and benefits the network's performance. Most image processing today uses convolutional neural networks; the field develops quickly, and complex, high-performance network models keep being proposed. AlexNet (2012), VGGNet (2014), GoogleNet (2014), and ResNet (2015) are very high-performing network models. In the transformer fault diagnosis experiments described in this patent, network structures similar to GoogleNet and ResNet are used; the principle of GoogleNet is described below.
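The sliding-window convolution and the 2×2 max pooling described above can be written out directly in NumPy. This is a didactic sketch (valid padding, stride 1 for the convolution), not an efficient implementation.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution: slide kernel k over x, multiply and sum."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def max_pool2x2(x):
    """2x2 max pooling with stride 2: halves height and width."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2*h, :2*w].reshape(h, 2, w, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))        # a summing kernel, for easy hand-checking

y = conv2d(x, k)
print(y.shape)             # (3, 3): each slide of the 2x2 window gives one output
print(y[0, 0])             # 10.0 = 0 + 1 + 4 + 5

p = max_pool2x2(x)
print(p.shape)             # (2, 2): data reduced to 1/4
print(p[0, 0])             # 5.0 = max of the block [[0, 1], [4, 5]]
```

Changing a non-maximal value inside a pooling block leaves the pooled output unchanged, which is the local invariance the text refers to.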
GoogleNet is a convolutional neural network model proposed by a Google team in 2014; with an error rate of only 6.7%, it won that year's ImageNet Large Scale Visual Recognition Challenge (ILSVRC). While its network is deeper and its performance higher, GoogleNet also maintains efficient computation. GoogleNet has 22 layers in total, no fully connected layers, and only 5 million parameters, 1/12 of the earlier AlexNet. The key to GoogleNet achieving both high performance and high efficiency is its Inception module. The Inception module is a good network topology: a network within a network. GoogleNet builds the whole network model by stacking Inception modules. In convolutional operations the kernel size is a hyperparameter and can be 3×3, 5×5, or 7×7; different sizes may yield different effects and performance. The main idea of the Inception module is that, rather than deciding kernel sizes and pooling placement manually, it is better to let the network decide for itself, learning these hyperparameters through training on the data set. The method is, in a given layer, to perform 1×1, 3×3, and 5×5 convolutions and max pooling simultaneously, then stack the results laterally as the output of that layer. But this causes the number of parameters to explode, occupying huge memory with low computational efficiency; the solution is to shorten the third dimension of the data with 1×1 convolutions, the third dimension being also called the channel dimension in the image domain. The number of kernels determines the number of output channels of a convolution, so as long as the number of kernels is smaller than the number of input channels, the data are compressed and the number of parameters is reduced.
This patent uses a neural network to perform fault detection on the transformer. The concentrations of the relevant transformer gases and the fault type are input to train the neural network; after training, fault diagnosis is performed on the training and validation sets, i.e., the relevant gas concentrations are input, the model outputs the fault type, and the accuracy is finally computed. Specifically, a fully connected multilayer perceptron is used, with different numbers of hidden layers and neurons: the number of hidden layers is 1 or 2, and the number of neurons per hidden layer can be 3, 6, or 12, giving 6 combinations in total.
For comparison, the experiment also uses the no-coding ratio method proposed by Du to diagnose transformer fault types. The no-coding ratio method judges the fault type from the values of C2H2/C2H4, C2H4/C2H6, and CH4/H2 in transformer oil; the specific diagnostic method is shown in Table 1.
Table 1. The non-coded ratio fault diagnosis method
The data set used in this experiment comes from reference [20] and contains 200 records in total. Each record contains the densities of 6 gases, H2, C2H2, C2H4, C2H6, CH4, and CO, together with the fault type. There are 4 fault types, high-energy discharge, low-energy discharge, thermal fault, and no fault, encoded as 0 to 3 respectively.
Because the data set is small, cross-validation is used. The data set is divided evenly into 4 parts of 50 records each, with the records of each fault type distributed equally among the parts. One part serves as the validation set and the rest as the training set; this is rotated 4 times for cross-validation.
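The 4-fold split described above can be sketched as follows. The labels are synthetic stand-ins (200 samples, 50 per fault type, coded 0 to 3), since the actual data of reference [20] are not reproduced here; note that 50 records per class do not divide evenly into 4 folds, so the folds come out near-equal rather than exactly 50 each:

```python
import numpy as np

# Synthetic labels standing in for the 200-record data set:
# 50 records of each fault type, coded 0-3.
labels = np.repeat(np.arange(4), 50)
rng = np.random.default_rng(0)

# Deal each class's indices round-robin across 4 folds so every fold
# stays class-balanced (fold sizes come out 52/52/48/48 here).
folds = [[] for _ in range(4)]
for cls in range(4):
    for i, idx in enumerate(rng.permutation(np.where(labels == cls)[0])):
        folds[i % 4].append(int(idx))

# Rotate: each fold is the validation set once, the rest train.
splits = []
for k in range(4):
    val = folds[k]
    train = [i for j in range(4) if j != k for i in folds[j]]
    splits.append((train, val))
print([len(v) for _, v in splits])   # near-equal validation folds
```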
Let the mean of each variable of the data be μ and the standard deviation be σ; the preprocessing formula is then:
X = (x − μ)/σ (6)
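Equation (6) applied column-wise to a toy gas-concentration matrix (the values are made up for illustration) standardizes each feature to zero mean and unit variance:

```python
import numpy as np

# Toy data: 3 samples of 2 gas-concentration features (illustrative
# values only, not from the patent's data set).
data = np.array([[10.0, 200.0],
                 [20.0, 400.0],
                 [30.0, 600.0]])

mu = data.mean(axis=0)        # per-feature mean
sigma = data.std(axis=0)      # per-feature standard deviation
z = (data - mu) / sigma       # equation (6)

print(z.mean(axis=0))         # ~[0, 0]
print(z.std(axis=0))          # ~[1, 1]
```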
The neural network used in this patent is a multi-layer perceptron with only fully connected layers; there are 6 combinations in total, one of which is described here as an example. The input layer has 6 neurons; layers 1 and 2 each have 12 neurons; after layer 2 there is a dropout layer with drop rate 0.3; and the output layer is a softmax classifier with 4 neurons, as shown in Table 2.
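A forward pass of the 6-12-12-4 perceptron described above can be sketched in numpy with random weights. The ReLU activation on the hidden layers is an assumption (the text does not name the activation function), and dropout is omitted because it is only active during training:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [6, 12, 12, 4]         # input, hidden 1, hidden 2, softmax output
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Hidden layers: linear multiply-accumulate, then ReLU (assumed).
    for w, b in list(zip(weights, biases))[:-1]:
        x = np.maximum(x @ w + b, 0.0)
    # Output layer: softmax over the 4 fault types.
    z = x @ weights[-1] + biases[-1]
    e = np.exp(z - z.max())
    return e / e.sum()

probs = forward(rng.random(6))   # one sample of 6 gas densities
print(probs, probs.sum())        # 4 class probabilities summing to 1
```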
Table 2. Neural network structure
The hyperparameters are as follows: the learning rate is 7e-2 and decays by a factor of 0.95 after every 20 passes over the training set; the batch size is 32; the training set is traversed 1000 times; and the weights are initialized with the Xavier algorithm.
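The stepped decay schedule implied above (start at 7e-2, multiply by 0.95 after every 20 passes, for 1000 passes) can be written as:

```python
# Learning rate after `epoch` full passes over the training set,
# using the hyperparameters stated in the text.
def lr_at(epoch, base=7e-2, decay=0.95, step=20):
    return base * decay ** (epoch // step)

print(lr_at(0))      # 0.07
print(lr_at(20))     # 0.07 * 0.95, first decay applied
print(lr_at(999))    # after 49 decays, near the end of training
```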
The neural networks used in the transformer diagnosis experiment comprise 6 combinations, each trained 4 times. The training set accuracies are shown in Table 3 and the validation set accuracies in Table 4.
Table 3. Training set accuracy
Table 4. Validation set accuracy
Analysis of the results shows that among the 6 combinations only the one with 1 hidden layer of 3 neurons has low accuracy, with an average training set accuracy of 86.35% and a validation set accuracy of 81.50%; the remaining combinations reach training set accuracies of 97.84%–98.17% and validation set accuracies of 96.00%–97.50%. When the total number of hidden neurons is 6 or more, the neural network model performs well, and further enlarging the network does not improve the accuracy, which stays essentially constant; the small differences among these results can be regarded as error caused by the small data set. The non-coded ratio method cannot diagnose the no-fault condition, so it is used only to diagnose the fault types of 100 records, with an accuracy of 88.00%. The results of the two methods are compared in Table 5.
Table 5. Comparison of neural network and non-coded ratio method results
The experiment shows that using a neural network for transformer fault diagnosis is more accurate than the non-coded ratio method, and also more flexible. The non-coded ratio method can only diagnose specific fault types, whereas the neural network has no such restriction: any fault type contained in the data set can be diagnosed. The transformer fault diagnosis method based on the GoogleNet model proposed in this patent is therefore effective.
The above description is only an embodiment of the present invention and is not intended to limit the scope of the invention; any equivalent structural or process transformation made using the contents of the present specification and drawings, applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.

Claims (8)

1. A transformer fault diagnosis method based on the GoogleNet model, characterized by comprising the following steps:
A. first obtain the factors that cause equipment failure, consider the data that can affect equipment faults, and determine the data to be collected and the feature space;
B. determine the fault types the equipment can exhibit, forming the state space;
C. monitor the transformer state and collect data from the transformer to obtain its features and states; model with a neural network and train the model with the collected data;
D. perform fault diagnosis from the equipment features with the trained model.
2. The transformer fault diagnosis method based on the GoogleNet model according to claim 1, characterized in that the fault diagnosis uses dissolved gas analysis: the densities of the various gases dissolved in the transformer oil are analyzed to diagnose the health state and fault type of the transformer, and by detecting the concentrations of gases such as H2, C2H2, C2H4, C2H6, CH4, and CO, fault information such as partial discharge, low-energy discharge, high-energy discharge, low-temperature overheating, and high-temperature overheating is captured.
3. The transformer fault diagnosis method based on the GoogleNet model according to claim 1, characterized in that the fault diagnosis performs fault detection on the transformer with a neural network: the densities of the transformer's associated gases and the fault type are input to train the neural network; after training, fault diagnosis is performed on the training set and the validation set, i.e. the gas densities are input, the model outputs the fault type, and the accuracy is then computed; specifically a fully connected multi-layer perceptron is used, with 1 or 2 hidden layers and 3, 6, or 12 neurons per hidden layer, giving 6 combinations in total.
4. The transformer fault diagnosis method based on the GoogleNet model according to claim 1, characterized in that the neural network structure is a multi-layer perceptron of fully connected layers; the input layer has 6 neurons; layers 1 and 2 each have 12 neurons; after layer 2 there is a dropout layer with drop rate 0.3; the output layer is a softmax classifier with 4 neurons; the hyperparameters are: a learning rate of 7e-2 that decays by a factor of 0.95 after every 20 passes over the training set, a batch size of 32, 1000 traversals of the training set, and weight initialization with the Xavier algorithm.
5. The transformer fault diagnosis method based on the GoogleNet model according to claim 1, characterized in that performing fault diagnosis on the transformer based on GoogleNet is more accurate and more flexible than the non-coded ratio method; the non-coded ratio method can only diagnose specific fault types, whereas the neural network has no such restriction: any fault type contained in the data set can be diagnosed.
6. The transformer fault diagnosis method based on the GoogleNet model according to claim 1, characterized in that the classifier used for fault diagnosis by the neural network is a softmax classifier, which outputs the probability of each type:
ŷi = e^zi / Σj e^zj
the type with the largest probability is taken as the type predicted by the model; the neural network finally has a loss function, used to measure the error between the model output and the true output;
the loss function of softmax is the cross-entropy L = −Σi yi log ŷi; the process of model training is to reduce this error: gradient descent is used to find the derivative of the loss function with respect to each parameter, ∂L/∂θ, and each parameter is then decreased by the product of this derivative and a coefficient, θ ← θ − α·∂L/∂θ, where α is called the learning rate.
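The softmax output, cross-entropy loss, and gradient-descent update described in this claim can be sketched numerically; the logits and the 0.1 learning rate below are toy values chosen for illustration:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; result sums to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1, -1.0])   # toy logits for the 4 fault types
y = np.array([1.0, 0.0, 0.0, 0.0])    # one-hot true type

p = softmax(z)
loss = -np.sum(y * np.log(p))         # cross-entropy loss

# For softmax + cross-entropy the gradient w.r.t. the logits is p - y,
# so one gradient-descent step with learning rate alpha is:
alpha = 0.1
z_new = z - alpha * (p - y)
loss_new = -np.sum(y * np.log(softmax(z_new)))
print(loss, loss_new)                 # the loss decreases after the step
```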
7. The transformer fault diagnosis method based on the GoogleNet model according to claim 1, characterized in that the multi-layer perceptron comprises an input layer, hidden layers, and an output layer, where there is typically one input layer and one output layer, while the number of hidden layers is not limited and can be one layer or multiple layers;
each layer can have multiple nodes called neurons, which take the output of the previous layer as input and pass this layer's output to the next layer as input; the common multi-layer perceptron operation is first a linear multiply-accumulate and then a nonlinear activation function;
assume that in some layer the input is x = (x1, x2, ..., xn), the parameters are w, and the activation function is g; then the output of each neuron is yi = g(Σj wij xj + bi) and the output of the whole layer is y = g(Wx + b); a layer in which every neuron has its own parameters is called a fully connected layer, and a serious drawback of fully connected layers is that the number of parameters is too large: the parameter count of a fully connected layer is the neuron count m of the previous layer multiplied by the neuron count n of this layer, i.e. m × n;
assume the network input is a 1000 × 1000 RGB image and the first layer has 1000 neurons; then the parameter count of this layer is 3 × 1000^3, occupying very large resources.
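The parameter count in this example (bias terms ignored, as in the text) works out as follows:

```python
# A 1000x1000 RGB image flattens to 3,000,000 input values; a fully
# connected layer of 1000 neurons then needs one weight per
# (input, neuron) pair.
inputs = 1000 * 1000 * 3       # flattened RGB image
neurons = 1000
params = inputs * neurons      # m x n weights for the fully connected layer
print(params)                  # 3_000_000_000 = 3 * 1000**3
```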
8. The transformer fault diagnosis method based on the GoogleNet model according to claim 5, characterized in that GoogleNet is a convolutional neural network model proposed by a team at Google in 2014; GoogleNet is deeper than earlier networks and achieves higher accuracy while also maintaining high computational efficiency; GoogleNet has 22 layers in total, contains no fully connected layers, and has only 5 million parameters, 1/12 of the earlier AlexNet; the key to GoogleNet obtaining high accuracy and high efficiency at the same time is its Inception module;
the Inception module is a well-designed network topology, a network within a network; GoogleNet builds the whole network model by stacking Inception modules; in a convolution operation the kernel size is a hyperparameter, typically 3 × 3, 5 × 5, or 7 × 7; different sizes may yield different accuracy and performance, and hyperparameters have traditionally been debugged and decided manually; the main idea of the Inception module is to let the neural network decide for itself, learning these hyperparameters through training on the data set;
the method is that a given layer performs 1 × 1, 3 × 3, and 5 × 5 convolutions and max pooling in parallel, and the results of these operations are concatenated laterally as the layer's output; done naively this causes the number of parameters to explode, occupying huge memory with low computational efficiency; the solution is to reduce the length of the third dimension of the data, called the channel dimension in the image domain, by 1 × 1 convolution; the number of convolution kernels determines the number of output channels, so as long as the number of kernels is smaller than the number of input channels, the data are compressed and the number of parameters is reduced.
CN201811482932.9A 2018-12-05 2018-12-05 A kind of Diagnosis Method of Transformer Faults based on GoogleNet model Pending CN109765333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811482932.9A CN109765333A (en) 2018-12-05 2018-12-05 A kind of Diagnosis Method of Transformer Faults based on GoogleNet model


Publications (1)

Publication Number Publication Date
CN109765333A true CN109765333A (en) 2019-05-17

Family

ID=66450713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811482932.9A Pending CN109765333A (en) 2018-12-05 2018-12-05 A kind of Diagnosis Method of Transformer Faults based on GoogleNet model

Country Status (1)

Country Link
CN (1) CN109765333A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110301942A1 (en) * 2010-06-02 2011-12-08 Nec Laboratories America, Inc. Method and Apparatus for Full Natural Language Parsing
CN103218662A (en) * 2013-04-16 2013-07-24 郑州航空工业管理学院 Transformer fault diagnosis method based on back propagation (BP) neural network
CN103268516A (en) * 2013-04-16 2013-08-28 郑州航空工业管理学院 Transformer fault diagnosing method based on neural network
CN104299035A (en) * 2014-09-29 2015-01-21 国家电网公司 Method for diagnosing fault of transformer on basis of clustering algorithm and neural network
CN107907799A (en) * 2017-11-10 2018-04-13 国网浙江省电力公司电力科学研究院 The recognition methods of shelf depreciation defect type based on convolutional neural networks and system
CN108038847A (en) * 2017-12-05 2018-05-15 国网内蒙古东部电力有限公司 Transformer inspection digital image recognition and fault detection system based on deep learning
CN108896296A (en) * 2018-04-18 2018-11-27 北京信息科技大学 A kind of wind turbine gearbox method for diagnosing faults based on convolutional neural networks

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHRISTIAN SZEGEDY et al.: "Going deeper with convolutions", 2015 IEEE Conference on Computer Vision and Pattern Recognition *
LI Hui et al.: "Transformer fault diagnosis based on convolutional neural networks", Journal of Henan Polytechnic University (Natural Science) *
YANG Tao et al.: "Research on transformer fault diagnosis methods based on deep learning", Electric Power Big Data *
LIU Yang: "Digital Image Object Recognition: Theory and Practice", 31 January 2018 *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110243405A (en) * 2019-06-25 2019-09-17 东北大学 A kind of Aero-Engine Sensor Failure diagnostic method based on deep learning
CN110414673A (en) * 2019-07-31 2019-11-05 北京达佳互联信息技术有限公司 Multimedia recognition methods, device, equipment and storage medium
CN110458240A (en) * 2019-08-16 2019-11-15 集美大学 A kind of three-phase bridge rectifier method for diagnosing faults, terminal device and storage medium
CN110672988A (en) * 2019-08-29 2020-01-10 国网江西省电力有限公司电力科学研究院 Partial discharge mode identification method based on hierarchical diagnosis
CN110703006A (en) * 2019-09-04 2020-01-17 国网浙江省电力有限公司金华供电公司 Three-phase power quality disturbance detection method based on convolutional neural network
CN110596492B (en) * 2019-09-17 2021-04-27 昆明理工大学 Transformer fault diagnosis method based on particle swarm optimization random forest model
CN110596492A (en) * 2019-09-17 2019-12-20 昆明理工大学 Transformer fault diagnosis method based on particle swarm optimization random forest model
CN110927501A (en) * 2019-12-12 2020-03-27 吉林省电力科学研究院有限公司 Transformer fault diagnosis method based on gray correlation improved weighted wavelet neural network
CN111553297B (en) * 2020-05-06 2022-03-15 东华大学 Method and system for diagnosing production fault of polyester filament based on 2D-CNN and DBN
CN111695288A (en) * 2020-05-06 2020-09-22 内蒙古电力(集团)有限责任公司电力调度控制分公司 Transformer fault diagnosis method based on Apriori-BP algorithm
CN111553297A (en) * 2020-05-06 2020-08-18 东华大学 Method and system for diagnosing production fault of polyester filament based on 2D-CNN and DBN
CN111695288B (en) * 2020-05-06 2023-08-08 内蒙古电力(集团)有限责任公司电力调度控制分公司 Transformer fault diagnosis method based on Apriori-BP algorithm
CN111539486A (en) * 2020-05-12 2020-08-14 国网四川省电力公司电力科学研究院 Transformer fault diagnosis method based on Dropout deep confidence network
CN111612078A (en) * 2020-05-25 2020-09-01 中国人民解放军军事科学院国防工程研究院 Transformer fault sample enhancement method based on condition variation automatic encoder
CN111652870A (en) * 2020-06-02 2020-09-11 集美大学诚毅学院 Cloth defect detection method and device, storage medium and electronic equipment
CN111652870B (en) * 2020-06-02 2023-04-07 集美大学诚毅学院 Cloth defect detection method and device, storage medium and electronic equipment
CN111798437A (en) * 2020-07-09 2020-10-20 兴义民族师范学院 Novel coronavirus pneumonia AI rapid diagnosis method based on CT image
CN112633550B (en) * 2020-11-23 2023-07-18 成都唐源电气股份有限公司 RNN-based contact network fault trend prediction method, equipment and storage medium
CN112633550A (en) * 2020-11-23 2021-04-09 成都唐源电气股份有限公司 RNN-based catenary fault trend prediction method, equipment and storage medium
CN112580883A (en) * 2020-12-24 2021-03-30 哈尔滨理工大学 Power transformer state prediction method based on machine learning and neural network
CN112947385A (en) * 2021-03-22 2021-06-11 华中科技大学 Aircraft fault diagnosis method and system based on improved Transformer model
CN113761804A (en) * 2021-09-13 2021-12-07 国网江苏省电力有限公司电力科学研究院 Transformer state diagnosis method, computer equipment and storage medium
CN114220041A (en) * 2021-11-12 2022-03-22 浙江大华技术股份有限公司 Target recognition method, electronic device, and storage medium
CN116310599A (en) * 2023-05-17 2023-06-23 湖北工业大学 Power transformer fault diagnosis method and system based on improved CNN-PNN network
CN116310599B (en) * 2023-05-17 2023-08-15 湖北工业大学 Power transformer fault diagnosis method and system based on improved CNN-PNN network

Similar Documents

Publication Publication Date Title
CN109765333A (en) A kind of Diagnosis Method of Transformer Faults based on GoogleNet model
Mao et al. Imbalanced fault diagnosis of rolling bearing based on generative adversarial network: A comparative study
CN111476294B (en) Zero sample image identification method and system based on generation countermeasure network
CN106980822B (en) A kind of rotary machinery fault diagnosis method based on selective ensemble study
Zhao et al. Intelligent fault diagnosis of multichannel motor–rotor system based on multimanifold deep extreme learning machine
Shao et al. Rolling bearing fault diagnosis using an optimization deep belief network
Lin et al. Spectral-spatial classification of hyperspectral image using autoencoders
Zhang et al. Ensemble deep contractive auto-encoders for intelligent fault diagnosis of machines under noisy environment
Che et al. Hybrid multimodal fusion with deep learning for rolling bearing fault diagnosis
Jiang et al. Joint label consistent dictionary learning and adaptive label prediction for semisupervised machine fault classification
CN101419671B (en) Face gender identification method based on fuzzy support vector machine
CN101907681B (en) Analog circuit dynamic online failure diagnosing method based on GSD-SVDD
Liang et al. Multi-scale dynamic adaptive residual network for fault diagnosis
CN110213244A (en) A kind of network inbreak detection method based on space-time characteristic fusion
Su et al. Hierarchical diagnosis of bearing faults using branch convolutional neural network considering noise interference and variable working conditions
CN112101426A (en) Unsupervised learning image anomaly detection method based on self-encoder
CN103728551A (en) Analog circuit fault diagnosis method based on cascade connection integrated classifier
Ma et al. An unsupervised domain adaptation approach with enhanced transferability and discriminability for bearing fault diagnosis under few-shot samples
Zhang et al. A class-aware supervised contrastive learning framework for imbalanced fault diagnosis
CN109214460A (en) Method for diagnosing fault of power transformer based on Relative Transformation Yu nuclear entropy constituent analysis
Yao et al. Multiscale domain adaption models and their application in fault transfer diagnosis of planetary gearboxes
CN112147432A (en) BiLSTM module based on attention mechanism, transformer state diagnosis method and system
Zhao et al. A novel deep fuzzy clustering neural network model and its application in rolling bearing fault recognition
Li et al. Intelligent fault diagnosis of aeroengine sensors using improved pattern gradient spectrum entropy
Ma et al. A collaborative central domain adaptation approach with multi-order graph embedding for bearing fault diagnosis under few-shot samples

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190605

Address after: 110004, No. 18 Ningbo Road, Shenyang City, Liaoning Province

Applicant after: INFORMATION COMMUNICATION BRANCH, STATE GRID LIAONING ELECTRIC POWER Co.,Ltd.

Applicant after: Nanjing University of Aeronautics and Astronautics

Applicant after: STATE GRID CORPORATION OF CHINA

Address before: 110004, No. 18 Ningbo Road, Shenyang City, Liaoning Province

Applicant before: INFORMATION COMMUNICATION BRANCH, STATE GRID LIAONING ELECTRIC POWER Co.,Ltd.

Applicant before: Nanjing University of Aeronautics and Astronautics

TA01 Transfer of patent application right
RJ01 Rejection of invention patent application after publication

Application publication date: 20190517

RJ01 Rejection of invention patent application after publication