CN110929847A - Converter transformer fault diagnosis method based on deep convolutional neural network - Google Patents

Converter transformer fault diagnosis method based on deep convolutional neural network

Info

Publication number
CN110929847A
CN110929847A (application CN201911120136.5A)
Authority
CN
China
Prior art keywords
model
neural network
data
fault diagnosis
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911120136.5A
Other languages
Chinese (zh)
Inventor
郑一鸣
王文浩
万梓聪
闫丹凤
毕建刚
王峰渊
袁帅
杨圆
常文治
是艳杰
王广真
邵明鑫
韩睿
杨智
姜炯挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Beijing University of Posts and Telecommunications
Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications, Electric Power Research Institute of State Grid Zhejiang Electric Power Co Ltd filed Critical Beijing University of Posts and Telecommunications
Priority to CN201911120136.5A priority Critical patent/CN110929847A/en
Publication of CN110929847A publication Critical patent/CN110929847A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/0004Gaseous mixtures, e.g. polluted air
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Food Science & Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract

The invention discloses a converter transformer fault diagnosis method based on a deep convolutional neural network. Applying a deep convolutional neural network model to power-grid equipment fault detection overcomes the drawbacks of machine learning algorithms based on statistics and probability theory, namely that they have many parameters, a tedious tuning process, and performance that depends on data preprocessing and feature engineering. On the basis of a shallow artificial neural network, the data are expanded to a higher dimension, further improving the model's capacity to fit complex functions. At the same time, a residual network and a batch normalization algorithm are applied to the deep convolutional neural network, improving the model's convergence speed and generalization ability; compared with a shallow neural network, the accuracy of power-grid equipment fault diagnosis is greatly improved.

Description

Converter transformer fault diagnosis method based on deep convolutional neural network
Technical Field
The invention belongs to the field of fault diagnosis of power transmission and transformation equipment, and particularly relates to a converter transformer fault diagnosis method based on a deep convolutional neural network.
Background
At present, most large transformers are oil-immersed and use an oil-paper insulation structure. During normal operation, the oil-paper insulating material inside the transformer is subjected to thermal and electrical stress and gradually ages and decomposes, generating small amounts of dissolved gas. By analysing the gases dissolved in the transformer oil (H2, CH4, C2H2, C2H4, C2H6, CO and CO2), faults can be detected and prevented from worsening. Through statistical analysis of a large number of DGA results, experts have attempted to establish attention thresholds for the content of dissolved gas in oil: equipment whose dissolved-gas content exceeds these thresholds is considered to possibly contain an early latent fault that could cause an accident.
In transformer fault diagnosis, some local faults and heating defects are difficult to find with electrical test methods, whereas chromatographic analysis of the gases in transformer oil is very sensitive and effective for the early diagnosis of latent faults inside the transformer and of their degree of development, as a large number of fault-diagnosis practices have proved. In the nationally promulgated preventive test procedure for electrical equipment DL/T 596-1996, the DGA method is given primary place.
However, practice has shown that the complicated problem of determining the presence or absence of a fault can hardly be reduced to a mechanical judgment against a single numerical limit. On this basis, Dornenburg proposed a gas three-ratio method for diagnosing transformer faults. The three-ratio method avoids the effect of oil volume and improves the accuracy of power-transformer fault diagnosis; statistics at home and abroad show its reliability for fault diagnosis of oil-immersed power equipment to be about 80%, and national standards recommend using the three-ratio method based on DGA results for the fault diagnosis of oil-immersed equipment. In practice, however, the three-ratio method suffers from a coding blind-spot problem: a considerable share of DGA analysis results falls outside the codes defined by the method, so some conditions cannot be diagnosed.
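The three-ratio idea, and its blind-spot problem, can be sketched in code. The coding thresholds and the fault table below are illustrative placeholders, not the exact values of IEC 60599 or the national standard; the point is that some ratio combinations map to no fault code at all.

```python
def ratio_code(r):
    # Illustrative coding thresholds (placeholders, not the standard's exact values)
    if r < 0.1:
        return 0
    elif r < 1.0:
        return 1
    else:
        return 2

def three_ratio_codes(h2, ch4, c2h2, c2h4, c2h6):
    # The three characteristic gas ratios used by Dornenburg/IEC-style DGA diagnosis
    return (ratio_code(c2h2 / c2h4),
            ratio_code(ch4 / h2),
            ratio_code(c2h4 / c2h6))

# A lookup table covering only some code combinations: anything else is a blind spot
FAULT_TABLE = {(0, 1, 0): "low-temperature overheating",
               (1, 0, 2): "arc discharge"}

def diagnose(codes):
    return FAULT_TABLE.get(codes)  # None -> coding blind spot, no diagnosis
```

A DGA result whose codes fall outside the table returns `None`, which is exactly the "cannot be diagnosed" case described above.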
Machine learning methods based on statistics and probability theory have a strong capacity for processing and learning from large-scale data. In the present scenario, the monitoring data of dissolved gas in oil are mostly nonlinear (i.e., the variation of the dissolved-gas data is difficult to fit with a linear function), and the contents of the various gases dissolved in the oil affect one another, so certain correlations exist. On this basis, tree-based machine learning models have an advantage in fitting the large-scale oil-chromatogram monitoring data generated in the power-grid system.
At present, the best-performing algorithm among tree-based models is XGBoost (of the 29 winning solutions in Kaggle competitions in 2015, 17 used XGBoost). XGBoost is one of the Boosting algorithms; the idea of Boosting is to combine many weak classifiers into one strong classifier. The base tree (weak classifier) used in XGBoost is the CART regression tree, a binary tree grown by repeatedly splitting on features. For example, if the current tree node splits on the j-th feature at threshold s, samples whose feature value is smaller than s go to the left subtree and samples whose feature value is larger than s go to the right subtree.
The idea of the XGBoost algorithm is to keep adding trees, growing each tree by repeated feature splits; each newly added tree actually learns a new function that fits the residual of the previous prediction. When training is complete and k trees have been obtained, predicting the score of a sample means that, according to the sample's features, it falls into one leaf node in each tree; each leaf node carries a score, and the sample's predicted value is simply the sum of the scores from all the trees.
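The additive prediction described above can be sketched minimally. Depth-1 CART trees ("stumps") stand in here for the base learners; each maps a sample to a leaf score, and the ensemble prediction is the sum of those scores:

```python
def stump(j, s, left_score, right_score):
    # Minimal CART node: split on feature j at threshold s
    def tree(x):
        return left_score if x[j] < s else right_score
    return tree

def boosted_predict(x, trees):
    # XGBoost-style additive prediction: sum the leaf score from every tree
    return sum(tree(x) for tree in trees)

# Two toy base learners (scores are arbitrary for illustration)
trees = [stump(0, 0.5, -1.0, 1.0), stump(1, 2.0, 0.3, -0.3)]
```

In real XGBoost each tree would be deeper and its leaf scores learned from the gradient of the loss; the summing step is the same.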
XGBoost explicitly adds a regularization term to control model complexity, which prevents overfitting and improves the generalization ability of the model. XGBoost also supports parallelization: although the trees themselves are built serially, nodes at the same level can be processed in parallel. Specifically, when selecting the optimal split point for a node, the gain computations for the candidate split points are parallelized across multiple threads, so training is fast. XGBoost therefore greatly improves the accuracy of power-grid equipment fault diagnosis, but the model has many parameters, the tuning process is tedious, considerable manual intervention is required, and the quality of data preprocessing and feature engineering directly affects model performance.
The neural network model has strong knowledge acquisition capacity, can effectively process noise in data, can autonomously optimize parameters by a gradient descent method, does not need data preprocessing and characteristic engineering, does not have a fussy parameter tuning process, has less manual intervention, and greatly improves the accuracy of fault diagnosis.
The monitoring data of gas dissolved in oil are sequence data: the state at the current moment depends on the previous moment and influences the state at the next moment. Among artificial neural network models, the most classical models for modeling sequence data are the recurrent neural network and the convolutional neural network.
A Recurrent Neural Network (RNN) is a structure that repeats over time. It is widely used in many fields such as Natural Language Processing (NLP), speech, and images. The biggest difference between RNNs and other networks is that an RNN can implement a kind of "memory function", which makes it a natural choice for time-series analysis, just as human beings understand the world better by drawing on past memories. The RNN implements a mechanism similar to the human brain, retaining some memory of the information it has processed, unlike other types of neural network. A standard RNN unit comprises three layers: an input layer, a hidden layer and an output layer, as shown in FIG. 1. The initial input of the RNN is x_0 and the output is h_0; that is, at time 0 the network's input is x_0, its output is h_0, and the state of the network neurons at time 0 is stored in A. When the next time step 1 arrives, the state of the network neurons is determined not only by the input x_1 at time 1 but also by the neuron state at time 0. The process continues in this way until time t at the end of the sequence, so the information hidden in the sequence is learned.
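The recurrence described above, with the state at each step determined by the current input and the state carried over from the previous step, can be sketched with NumPy (a generic tanh cell; the weight shapes are illustrative, not the patent's parameterization):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # The new hidden state depends on the current input AND the previous state
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

def rnn_forward(xs, W_xh, W_hh, b_h):
    # Unroll over the sequence, carrying the hidden state ("memory") forward
    h = np.zeros(W_hh.shape[0])
    states = []
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
        states.append(h)
    return states
```

Each element of `states` plays the role of h_0, h_1, ..., h_t in the description above.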
A Convolutional Neural Network (CNN) is a feedforward neural network that involves convolution computations and has a deep structure. It has a strong capacity for representation learning and can classify input information in a translation-invariant way according to its hierarchical structure. Because the convolution kernel parameters are shared within a hidden layer and the inter-layer connections are sparse, a CNN can process grid-like features with a small amount of computation; its effect is stable, and it imposes no additional feature-engineering requirements on the data. In general, the basic structure of a CNN includes two kinds of layer. One is the feature-extraction layer, where the input of each neuron is connected to a local receptive field of the previous layer and extracts the features of that local field; once a local feature is extracted, its positional relation to the other features is also determined. The other is the feature-mapping layer: each computing layer of the network consists of multiple feature maps, each feature map is a plane, and all the neurons in a plane share equal weights. The feature-mapping structure uses a sigmoid function with a small influence-function kernel as the activation function of the convolutional network, giving the feature maps shift invariance. In addition, because the neurons in one map share weights, the number of free parameters of the network is reduced. Each convolutional layer in a CNN is followed by a computing layer for local averaging and secondary extraction; this reduces the feature resolution.
Thanks to its special structure with locally shared weights, the convolutional neural network has unique advantages in speech recognition and in sequence and image processing. Its layout is closer to the actual biological neural network; weight sharing reduces the complexity of the network, and in particular images of multi-dimensional input vectors can be fed directly into the network, which avoids the complexity of data reconstruction during feature extraction and classification.
Under the background of big data, the scale of equipment data generated in a power grid system is increased day by day, the artificial intelligence-based equipment fault diagnosis method has strong processing capacity on the big data and can learn hidden information in the big data, so that the accuracy of equipment fault diagnosis is improved, and the artificial intelligence-based equipment fault diagnosis method has great research value. At present, artificial intelligence methods for equipment fault diagnosis are divided into two main categories: 1) machine learning based on statistics and probability theory with XGboost as a representative; 2) and artificial neural networks represented by RNN and CNN.
Machine learning based on statistics and probability theory has rigorous mathematical support and strongly interpretable models, and achieves high accuracy in power-grid equipment fault diagnosis, but its defects are obvious: the models have many parameters, the tuning process is tedious, considerable manual intervention is required, and the quality of data preprocessing and feature engineering directly affects model performance.
The artificial neural network uses stochastic gradient descent to optimize its parameters autonomously, which greatly reduces manual intervention; it contains a large number of nonlinear units, can in theory approximate a target function with arbitrary precision, has strong learning ability, and can learn hidden features autonomously. However, the sampling frequency of dissolved-gas monitoring data in a power-grid system cannot be too high, so the collected monitoring data are too sparse and the correlation between data points is diluted. Using an RNN therefore forces the data of the previous and next moments into the learning of the current data, which increases noise, prevents the model from converging for a long time, and leads to poor fault-monitoring ability. Although a large number of experiments have shown that CNNs outperform ordinary neural network models and have verified the superiority of convolutional networks in the field of transformer fault diagnosis, the models used so far all have shallow structures (few network layers) and a limited capacity to represent complex functions, so their generalization ability is limited and their extensibility is low.
The Deep Neural Network (DNN) model has the advantages that potential modes can be identified from original data, an arbitrary complex function can be approximated, and the DNN model has better performance when the monitoring data of the power grid equipment is large in scale or the fault categories of the equipment need to be further distinguished.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a converter transformer fault diagnosis method based on a deep convolutional neural network, in which a deep convolutional neural network (DcNN) obtained by combining a CNN model and a DNN model is applied to the fault diagnosis of power-grid equipment, so that the hidden characteristics of power-grid equipment fault data are discovered while manual operation is simplified, further improving the accuracy of fault diagnosis.
Therefore, the invention adopts the following technical scheme: a converter transformer fault diagnosis method based on a deep convolutional neural network comprises the following steps:
combining the deep neural network with the convolutional neural network to obtain a converter transformer fault diagnosis model (DcNN for short) based on the deep convolutional neural network, and performing fault diagnosis by using the converter transformer fault diagnosis model;
the diagnosis process of the converter transformer fault diagnosis model is as follows:
after the data are input, the method enters stage I, in which the training data are mapped to a higher-dimensional space through a convolution operation, batch normalization and the nonlinear ReLU activation function, making it easier for the model to learn more information;
the method then enters stage II, the core stage of the model, namely the deep convolutional network, which comprises a plurality of sequentially stacked residual structures. Within a residual structure the data pass through two branches. In the first branch, the data are first batch-normalized, which further improves the convergence speed and simplifies parameter tuning, then pre-activated with the ReLU activation function, then passed through an information-discarding (dropout) operation to prevent model overfitting, and finally convolved; the result of the convolution again undergoes batch normalization, ReLU activation and dropout, and the output of the branch is obtained through a final convolution. In the second branch, the data are directly down-sampled by one max-pooling layer so as to stay aligned with the data of the first branch. Finally, the outputs of the two branches are summed as the output of stage II;
finally, the data enter stage III, where after batch normalization, ReLU activation and a fully connected operation they are activated with the softmax activation function to obtain the result.
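The stage-II residual structure described above can be sketched minimally in NumPy (a single-channel sketch under stated simplifications: dropout omitted, and a stride-2 down-sampling on the second convolution assumed so the two branches stay aligned):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize to zero mean, unit variance (no learned scale/shift in this sketch)
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    return np.maximum(x, 0.0)

def conv1d_same(x, w):
    # Cross-correlation with 'same' zero padding
    k = len(w)
    pad = k // 2
    xp = np.pad(x, (pad, k - 1 - pad))
    return np.array([xp[i:i + k] @ w for i in range(len(x))])

def residual_block(x, w1, w2):
    # Branch 1: (BN -> ReLU -> conv) twice, down-sampling by 2 at the end
    h = conv1d_same(relu(batch_norm(x)), w1)
    h = conv1d_same(relu(batch_norm(h)), w2)[::2]
    # Branch 2: max-pool shortcut with the same sampling rate
    s = x[:len(x) // 2 * 2].reshape(-1, 2).max(axis=1)
    return h[:len(s)] + s  # the sum of the two branches is the block output
```

The shortcut branch keeps its length equal to the convolutional branch, which is the alignment role the max-pooling layer plays in the text.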
Further, the input of the model is the equipment's online monitoring data X = (x_1, x_2, ..., x_i, ..., x_m), and the output of the model is the set Y = (y_1, y_2, ..., y_i, ..., y_m) of diagnostic results y_i corresponding to each piece of monitoring data x_i; y_i can only take the value 0 or 1, where 0 indicates that the equipment is abnormal and 1 indicates that the equipment is normal.
Further, in the model training phase, the model loss function is:
$L = -\frac{1}{m}\sum_{i=1}^{m}\log p(y_i \mid x_i)$
wherein p (-) represents the model to map the diagnosis result of the ith piece of monitoring data into the correct result yiThe probability of (c).
Further, each residual structure contains 2 convolutional layers; each convolutional layer has convolution kernels of size 16, with 32k kernels in total, where k starts at 1 and increases by 1 every four residual structures. Every other residual structure down-samples its input with a sampling rate of 2, so the original input is ultimately down-sampled by a factor of 2^4.
Furthermore, there are 8 residual structures.
Further, when training the model, the weights of the convolutional layers are initialized and the Adam optimizer is used to optimize the model parameters; when the loss-function value of the model on the validation set stops decreasing, the learning rate is reduced to 1/10 of its current value. The optimal model as evaluated on the validation set is saved during optimization, finally yielding a converter transformer fault diagnosis model with 17 hidden convolutional layers and an output layer that is a fully connected layer using the softmax activation function.
Further, to make the model easier to optimize and to speed up network convergence, a shortcut-connection module similar to that in ResNet is adopted; shortcut connections between neural-network layers optimize training by allowing information to propagate well in very deep networks. When a residual structure down-samples its input, the corresponding shortcut-connection module also down-samples its input using a max-pooling operation with the same sampling rate.
Further, in the model evaluation stage, the comprehensive index F-Score, i.e., a weighted harmonic mean of precision and recall, is used as the evaluation function; the calculation formula is:
$F = \frac{(1+\alpha^2)\, P \times R}{\alpha^2\, P + R}$
α is used to adjust the relative weights of precision and recall (α > 1 gives more weight to recall; α < 1 gives more weight to precision); P is the precision and R is the recall, calculated as:
$P = \frac{T_P}{T_P + F_P}$

$R = \frac{T_P}{T_P + F_N}$
When calculating the index for each category, the category in question is treated as "positive" and all the other categories as "negative". For each category, T_P denotes the number of samples correctly classified as "positive", F_P the number of "negative" samples wrongly classified as "positive", and F_N the number of "positive" samples wrongly classified as "negative".
Further, during model training, α in the F-Score formula is set to 1.5.
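The evaluation metric above can be written out directly. With α = 1.5 as in the claim (α > 1), recall is weighted more heavily than precision:

```python
def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

def f_score(tp, fp, fn, alpha=1.5):
    # Weighted harmonic mean of precision and recall; alpha > 1 favours recall
    p, r = precision_recall(tp, fp, fn)
    return (1 + alpha**2) * p * r / (alpha**2 * p + r)
```

When precision is high but recall is low, a small α rewards the model and a large α penalizes it, which is why a fault-diagnosis setting (where missing a fault is costly) chooses α > 1.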
Further, when training is complete, an equipment fingerprint is constructed from the parameters of the model (in the model training stage, N DcNN models are trained separately on the monitoring data of N pieces of equipment, ensuring the specificity of the resulting models), and the equipment fingerprints are stored in matrix form to build a fingerprint database. In the actual fault-diagnosis stage, the orthogonality between the equipment fingerprint and the monitoring data of the current moment makes it possible to diagnose quickly and accurately whether a fault has occurred at that moment. In the fingerprint-updating stage, the update can be completed from the existing equipment fingerprint and a small amount of new data, preserving the fault-diagnosis performance of the model.
The DcNN model obtained by combining the DNN and the CNN is applied to the power grid equipment fault diagnosis scene, so that the model precision is further improved, the manual intervention process is reduced, and the power grid fault diagnosis efficiency is improved.
The invention modifies the structure of the traditional DcNN model and applies the residual network structure to the DcNN model, thereby improving the convergence speed of the model.
The method preprocesses the data with a batch normalization algorithm (Batch Normalization) and pre-activates the data with the ReLU activation function, further improving the generalization ability and convergence speed of the model while reducing human intervention.
the method is used for evaluating the model performance by using the F-Score evaluation index aiming at the characteristic that the number of normal data in the fault data of the power grid equipment is far larger than that of abnormal data.
The invention has the following beneficial effects. Applying the deep convolutional neural network model to power-grid equipment fault detection overcomes the drawbacks of machine learning algorithms based on statistics and probability theory, namely many parameters, a tedious tuning process, and performance dependent on data preprocessing and feature engineering. On the basis of a shallow artificial neural network, the data are expanded to a higher dimension, further improving the model's capacity to fit complex functions; at the same time, a residual network and a batch normalization algorithm are applied to the deep convolutional neural network, improving the model's convergence speed and generalization ability, so that the accuracy of power-grid equipment fault diagnosis is greatly improved compared with a shallow neural network. The invention also proposes the concept of a power-grid-equipment fault-feature fingerprint: exploiting the specificity of the fault-feature fingerprint, data conforming to the data-distribution function of a specific piece of equipment can be generated artificially, which alleviates the difficulty of training a model when the data are insufficient in quantity or poor in quality, and accumulates a data basis for subsequent equipment fault diagnosis.
Drawings
FIG. 1 is a diagram of a standard RNN unit in the prior art;
FIG. 2 is a structural diagram of a converter transformer fault diagnosis model (DcNN model) according to the present invention;
FIG. 3 is a flow chart of a diagnostic method of a converter transformer fault diagnostic model according to the present invention;
FIG. 4 is a diagram illustrating the result of labeling phase C monitoring data of 8212B equipment in an application example of the present invention;
FIG. 5 is a graph of the relationship between iteration times and model loss function values in an application example of the present invention;
FIG. 6 is a graph showing the relationship between the number of iterations and the F-Score value in an application example of the present invention;
FIG. 7 is a diagram showing the results of comparative experiments on the performance of DcNN and RNN in the application example of the present invention;
FIG. 8 is a diagram of the performance comparison experiment results of DcNN and CNN in the application example of the present invention.
Detailed Description
The invention is further described with reference to the drawings and the detailed description.
Examples
The embodiment provides a converter transformer fault diagnosis method based on a deep convolutional neural network.
Transformer fault diagnosis is essentially a binary classification problem. The input is the equipment's online monitoring data X = (x_1, x_2, ..., x_i, ..., x_m) and the output is the set Y = (y_1, y_2, ..., y_i, ..., y_m) of diagnostic results y_i corresponding to each piece of monitoring data x_i; y_i can only take the value 0 or 1 (0 indicates abnormal equipment, 1 indicates normal equipment). In the model training phase, the model loss function is:
$L = -\frac{1}{m}\sum_{i=1}^{m}\log p(y_i \mid x_i)$
wherein p (-) represents the model to map the diagnosis result of the ith piece of monitoring data into the correct result yiThe probability of (c). Before data is input into a network, the data is normalized, the network can conveniently learn more information, and 1 group of diagnosis labels are finally output. When training the model, initialize the weight of the convolution layer, use Adam optimizer to optimize the model parameters, when the loss function value of the model on the verification set stops decreasing, decrease the learning rate to current 1/10. The optimal model evaluated on the verification set is saved in the optimization process, a convolution structure with 17 layers of 1 hidden layer is finally obtained, the output layer is a DcNN model of a full connection layer using a softmax activation function, and the model structure is shown in FIG. 2.
To make the DcNN model easier to optimize and speed up network convergence, a shortcut-connection module similar to that in ResNet is adopted; shortcut connections between neural-network layers optimize training by allowing information to propagate well in very deep networks. The network consists of 8 residual blocks, each containing 2 convolutional layers; each convolutional layer has convolution kernels of size 16, with 32k kernels in total (where k starts at 1 and increases by 1 every 4 residual blocks). Every other residual block down-samples its input with a sampling rate of 2, so the original input is ultimately down-sampled by a factor of 2^4. When a residual block down-samples its input, the corresponding shortcut-connection module also down-samples its input using a max-pooling operation with the same sampling rate.
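The width and down-sampling schedule just described can be tabulated with a short helper. The "every other block down-samples" reading is an assumption made here to reconcile the 8 blocks with the overall 2^4 factor:

```python
def block_config(num_blocks=8, base_filters=32):
    cfg = []
    for b in range(num_blocks):
        k = b // 4 + 1                 # k increments every 4 residual blocks
        downsample = (b % 2 == 1)      # assumed: every other block subsamples by 2
        cfg.append({"filters": base_filters * k, "downsample": downsample})
    return cfg

cfg = block_config()
total_downsampling = 2 ** sum(c["downsample"] for c in cfg)  # 2^4 = 16
```

So the first four blocks use 32 filters, the last four use 64, and four of the eight blocks halve the sequence length.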
In stage II, before each convolutional-layer operation the data are batch-normalized, which further improves the convergence speed and simplifies parameter tuning, and the data are pre-activated with the ReLU activation function. This pre-activation structure gives the first and last layers of the network a marked effect on the final output of the model; applying an information-discarding (Dropout) layer after the convolutional layer and the ReLU activation function prevents the model from overfitting. Finally, after passing through the fully connected layer and the softmax activation function, the data yield the equipment diagnosis result for the corresponding moment.
In the model-evaluation phase, the usual evaluation function for a binary classification problem is the accuracy Acc = n_true / n_all, where n_true is the number of correctly predicted samples in the prediction set and n_all is the total size of the prediction set. For transformer equipment, however, failure is a low-probability event, and abnormal and normal data in the monitoring data are far from balanced (the ratio is roughly 37:1). When the positive and negative sample proportions are this uneven, evaluating a model by accuracy alone can produce large errors. Consider an extreme case: the prediction set contains 99 positive samples and 1 negative sample. A model that predicts every sample as positive achieves 99% accuracy; a model that correctly predicts the negative sample and 97 of the positive samples achieves only 98% accuracy, yet the latter is the better predictor, because a fault-diagnosis model should detect negative (fault) samples as accurately as possible. The invention therefore adopts the composite index F-Score, a weighted harmonic mean of precision and recall, as the evaluation function to avoid this problem. The calculation formula is
F-Score = (1 + α²)·P·R / (α²·P + R)
where α adjusts the relative weight of precision and recall (α > 1 weights recall more heavily; α < 1 weights precision more heavily), P is the precision and R is the recall, calculated respectively as:
P = TP / (TP + FP)
R = TP / (TP + FN)
When computing the metric for each category, that category is treated as "positive" and all other categories as "negative". For each class, TP is the number of samples correctly judged to belong to the "positive" class, FP is the number of "negative"-class samples wrongly judged "positive", and FN is the number of "positive"-class samples wrongly judged "negative".
Since the aim of the invention is to diagnose transformer faults, the model should identify fault points as accurately as possible (i.e., the model is allowed to sacrifice some precision to improve recall), so α in the F-Score formula is set to 1.5 during model training.
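The argument for F-Score over accuracy can be replayed numerically. The sketch below uses the counts from the extreme example in the text and treats the fault (negative) class as the class of interest; the variable names are illustrative:

```python
def precision_recall(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def f_score(p, r, alpha=1.5):
    """Weighted harmonic mean of precision and recall; alpha > 1
    weights recall more heavily, as in the text (alpha = 1.5)."""
    if p == 0.0 and r == 0.0:
        return 0.0
    return (1 + alpha ** 2) * p * r / (alpha ** 2 * p + r)

# Extreme case from the text: 99 positive samples, 1 negative (fault) sample.
# Model A predicts everything positive; Model B finds the fault but, in this
# illustration, also flags 2 healthy samples as faults.
acc_a = 99 / 100                                  # 0.99
acc_b = (97 + 1) / 100                            # 0.98
p_a, r_a = precision_recall(tp=0, fp=0, fn=1)     # A never predicts a fault
p_b, r_b = precision_recall(tp=1, fp=2, fn=0)     # B catches the real fault
```

Model A wins on accuracy but scores 0 on the fault class, while model B's F-Score is clearly higher, matching the text's conclusion that the second model is the better fault diagnoser.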
Owing to factors such as the special working environment of power equipment and the limitations of online monitoring technology, the collected online monitoring data suffer from low precision and large, inconsistent sampling intervals. The data volume is therefore insufficient, and the data lose their temporal character, degenerating into mutually independent points, which hampers subsequent research on equipment fault diagnosis. In addition, just as every person's fingerprints are unique, each piece of power equipment is unique: because of differing working environments and differing in-service times, the same change in dissolved-gas content can affect different devices differently, and the same data observed on two devices A and B may indicate normal behavior on A but abnormal behavior on B. In other words, every device has a unique device attribute, namely its device fingerprint.
Therefore, the device fingerprint is constructed from the model parameters at the end of DcNN training (in the training stage, N DcNN models are trained separately on the monitoring data of N devices to ensure the specificity of each resulting model), and the fingerprints are stored in matrix form to build a fingerprint database. Different device fingerprints reflect the different distribution functions of the monitoring data of devices of the same type in different working environments. Using this property, data conforming to a device's data-distribution function can be generated artificially, alleviating the low model accuracy caused by insufficient data volume and poor data quality and eliminating the negative effect of the limited sampling technology. In the actual fault-diagnosis stage, the orthogonality between the device fingerprint and the monitoring data at the current moment can be used to diagnose quickly and accurately whether a fault has occurred. In the fingerprint-update stage, the update can be completed from the existing device fingerprint and a small amount of new data, preserving the fault-diagnosis performance of the model.
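Purely as an illustration of the fingerprint-database idea (the patent does not specify the storage layout, the orthogonality test, or the update rule, so everything below is a hypothetical reading), one might sketch it as:

```python
class FingerprintDB:
    """Illustrative sketch only: one flattened parameter vector
    ("fingerprint") per device, stored in a dictionary. The dot-product
    orthogonality check and blended update are hypothetical, not the
    patented procedure."""

    def __init__(self):
        self.db = {}

    def register(self, device_id, params):
        self.db[device_id] = list(params)

    def dot(self, device_id, reading):
        """Inner product of the stored fingerprint with a new reading;
        a value near 0 would indicate orthogonality."""
        fp = self.db[device_id]
        return sum(a * b for a, b in zip(fp, reading))

    def update(self, device_id, new_params, lr=0.1):
        """Blend a small amount of new data into the existing fingerprint."""
        fp = self.db[device_id]
        self.db[device_id] = [(1 - lr) * a + lr * b
                              for a, b in zip(fp, new_params)]
```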
Application example
The DcNN model is an end-to-end model, so it can be used correctly as long as the input data format is consistent with the data format used by the invention.
Experimental data
The data used by the invention are the three-phase (A, B, C) online monitoring data of 8 extra-high-voltage transformers at Zhongzhou station, numbered 8111B, 8112B, 8121B, 8122B, 8211B, 8212B, 8221B and 8222B; the monitored data dimensions are shown in Table 1.
TABLE 1 on-line monitoring of data dimensionality
On-line monitoring index Unit/format
H2 μL/L
CH4 μL/L
C2H2 μL/L
C2H4 μL/L
C2H6 μL/L
CO μL/L
CO2 μL/L
O2 μL/L
N2 μL/L
Because the data have more than three dimensions, all monitoring indices cannot be displayed in one graph at the same time. The total hydrocarbon content of a representative device is therefore selected for visual display; the total hydrocarbon content is the sum of the contents of the 4 dissolved hydrocarbon gases in the device: methane, ethane, ethylene and acetylene.
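A trivial helper for the total-hydrocarbon figure described above; the dictionary key names are illustrative, matching the Table 1 gas labels:

```python
def total_hydrocarbons(sample):
    """Sum of the four dissolved hydrocarbon gases (uL/L):
    methane, ethane, ethylene and acetylene."""
    return sum(sample[g] for g in ("CH4", "C2H6", "C2H4", "C2H2"))
```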
To analyze the diagnostic effect of the device-fingerprint-based transformer fault-diagnosis model, the 8 extra-high-voltage transformer devices at Zhongzhou station from 2015 to 2018 were selected for the experiment. Each device has three phases (A, B, C), and each phase has on average 4800 pieces of dissolved-gas monitoring data. Taking device C at site 8212B as an example, the device has 3886 pieces of monitoring data from 28 October 2015 to 20 September 2018, of which 3783 are normal and 103 are abnormal, as shown in FIG. 4. The abnormal case data are taken as negative samples (199 pieces in total), and 50% of the device's normal data are randomly extracted as positive samples (1892 pieces in total) to form the training set of the DcNN model; 10% of the training set is randomly cut out as the validation set; the abnormal data of the device (103 pieces in total) and the remaining normal data not used in the training set (1891 pieces in total) are combined into the prediction set. To ensure the stability of the experimental results, the sample data are randomly shuffled in all experiments and 10-fold cross-validation is performed, with the mean of the 10 results taken as the final result.
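The split just described can be sketched as follows. The shuffling details and seed are assumptions, and the assertions below use round illustrative counts rather than the device's actual 3886 records:

```python
import random

def split_dataset(normal, abnormal, seed=0):
    """Split per the text: all abnormal data plus a random 50% of the
    normal data form the training pool; 10% of that pool is held out as
    the validation set; the remaining normal data plus the abnormal data
    form the prediction set (the text reuses the abnormal samples there)."""
    rng = random.Random(seed)
    normal = normal[:]
    rng.shuffle(normal)
    half = len(normal) // 2
    train_pool = abnormal + normal[:half]
    rng.shuffle(train_pool)
    n_val = len(train_pool) // 10
    val, train = train_pool[:n_val], train_pool[n_val:]
    predict = abnormal + normal[half:]
    return train, val, predict
```

For 10-fold cross-validation, this split would be repeated 10 times with different seeds and the metrics averaged, as the text describes.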
Analysis of Experimental results
1. Impact of iteration number and information discarding layer on fault diagnosis
The number of training iterations determines both model performance and training time. To keep the characteristic fingerprint of the high-voltage equipment up to date, the model's training time cannot be too long, so the iteration count is an important index in the experiments. Meanwhile, the dropout rate between layers affects the model's generalization ability and helps prevent overfitting. The optimal number of iterations and whether the model needs an information-discarding layer are therefore investigated first.
The invention trains 2 models simultaneously, DcNN_dropout and DcNN_no_dropout, the latter having no information-discarding layer in the residual-block structure of FIG. 2. FIGS. 5 and 6 show how the loss-function values and F-score values of DcNN_dropout and DcNN_no_dropout change on the training and validation sets as the number of iterations increases. In FIGS. 5 and 6, dropout_layer_on_train is the performance of DcNN_dropout on the training set and dropout_layer_on_validation its performance on the validation set; no_dropout_layer_on_train is the performance of DcNN_no_dropout on the training set and no_dropout_layer_on_validation its performance on the validation set.
Analysis of FIGS. 5 and 6 shows that when the number of iterations exceeds 32, both the loss-function value and the F-score stabilize, indicating that further iterations contribute little to the fault-detection ability of the trained model; the optimal number of iterations for this device is therefore set to 32. FIG. 5 shows that the loss-function values of DcNN_dropout on both the training and validation sets are smaller than those of DcNN_no_dropout. FIG. 6 shows that, at convergence, the F-score of DcNN_dropout is better than that of DcNN_no_dropout. The experiments show that the residual-block structure with an information-discarding layer is more suitable for fault diagnosis of this equipment: because the data scale is large and the model has many layers, introducing an information-loss rate through the discarding layer reduces the propagation of redundant information, improving both the training speed and the generalization ability of the model.
2. Comparison of DcNN and RNN fault diagnosis performance
An RNN incorporates information from neighboring time steps into learning at the current time and excels at processing sequence data. For the time-series monitoring data in the context of the invention, an RNN that can learn the interactions between monitoring data is one of the natural candidate models. To verify whether DcNN outperforms RNN, a fault-diagnosis performance comparison between DcNN and RNN was carried out; the results are shown in FIG. 7. In FIG. 7, DcNN_on_train is the performance of DcNN on the training set and DcNN_on_validation its performance on the validation set; RNN_on_train is the performance of RNN on the training set and RNN_on_validation its performance on the validation set.
As FIG. 7 shows, DcNN performs better than RNN. The reason is that in the scenario of the invention the collected monitoring data are too sparse: statistics show an average of 4800 pieces of monitoring data per device from October 2015 to September 2018, i.e., about 4 pieces per day, with almost no correlation between the data points. Using an RNN therefore forcibly fuses data from the preceding and following moments into the learning of the current data, adding noise, so the model fails to converge for a long time and has poor fault-monitoring ability. DcNN, by contrast, automatically learns the data-distribution characteristics of the equipment under fault through its multiple convolutional layers, avoiding the problems the RNN suffers from insufficient monitoring data. In conclusion, DcNN has the better fault-diagnosis effect on the transformer.
3. Comparison of DcNN and CNN fault diagnosis performance
The difference between the CNN and DcNN network structures lies in the number of layers: the DcNN network is much deeper than the CNN. FIG. 8 shows the performance comparison. In FIG. 8, DcNN_on_train is the performance of DcNN on the training set and DcNN_on_validation its performance on the validation set; CNN_on_train and CNN_on_validation are the corresponding curves for CNN. As FIG. 8 shows, DcNN outperforms the conventional CNN: the CNN is a shallow structure with few layers, so its ability to represent complex functions is limited, which constrains its generalization ability and extensibility, whereas DcNN can identify latent patterns in the raw data, can approximate arbitrarily complex functions, and therefore performs better.
4. Comparison of DcNN with XGBoost and the three-ratio method for fault diagnosis
XGBoost is a tree-boosting algorithm based on gradient boosting; it trains efficiently and learns classification problems well: of the 29 winning solutions in Kaggle competitions in 2015, 17 used XGBoost. To fully verify the performance of the DcNN model in transformer fault diagnosis, a comparison experiment of DcNN, XGBoost and the three-ratio method was set up; the results are summarized in Table 2.
TABLE 2 comparison of DcNN with XGboost, three ratio method for fault diagnosis
Model (method) P R F-Score
DcNN 0.831 0.920 0.890
XGBoost 0.819 0.859 0.847
Three ratio method 0.804 0.825 0.818
As Table 2 shows, the fault-diagnosis performance of DcNN is superior to both XGBoost and the three-ratio method. The XGBoost algorithm requires data preprocessing and feature engineering as well as manual parameter tuning, a cumbersome process, and in most cases the feature engineering caps the attainable model performance. The three-ratio method is not applicable when the dissolved gas in the oil does not exceed the alarm value, and the low accuracy and completeness of the online monitoring data increase its misdiagnosis rate for power transformers.

Claims (10)

1. A converter transformer fault diagnosis method based on a deep convolutional neural network is characterized in that,
combining the deep neural network with the convolutional neural network to obtain a converter transformer fault diagnosis model based on the deep convolutional neural network, and performing fault diagnosis by using the converter transformer fault diagnosis model;
the diagnosis process of the converter transformer fault diagnosis model is as follows:
after the data are input, stage I is entered, in which a convolution operation, batch normalization and the nonlinear operation of a ReLU activation function are performed;
then stage II is entered, the core stage of the model, namely the deep convolutional network, which comprises a plurality of sequentially stacked residual structures; within a residual structure the data pass through two branches: in the first branch, the data are first batch-normalized, pre-activated with a ReLU activation function, then an information-discarding operation is applied to prevent model overfitting, and finally a convolution operation is performed; the result of the convolution again undergoes batch normalization, the ReLU activation function and the information-discarding operation, and the output of the branch is obtained through a final convolution operation; in the second branch, the data are directly downsampled through a max-pooling layer so as to align with the data of the first branch; finally, the outputs of the two branches are summed to serve as the output of stage II;
and finally the data enter stage III, passing through batch normalization, a ReLU activation function and a fully connected operation, and are activated using a softmax activation function to obtain the result.
2. The method for diagnosing the fault of the converter transformer based on the deep convolutional neural network as claimed in claim 1, wherein the input of the model is the online monitoring data of the equipment, X = (x_1, x_2, ..., x_i, ..., x_m), and the output of the model is the set Y = (y_1, y_2, ..., y_i, ..., y_m) of diagnosis results y_i corresponding to each piece of monitoring data x_i, where y_i can only take the value 0 or 1, 0 indicating that the equipment is abnormal and 1 indicating that the equipment is normal.
3. The converter transformer fault diagnosis method based on the deep convolutional neural network as claimed in claim 2, wherein in the model training stage, the model loss function is:
L = -(1/m) Σ_{i=1}^{m} log p(y_i | x_i)
wherein p (-) represents the model to map the diagnosis result of the ith piece of monitoring data into the correct result yiThe probability of (c).
4. The converter transformer fault diagnosis method based on the deep convolutional neural network as claimed in claim 1 or 2, wherein each residual structure comprises 2 convolutional layers; each convolutional layer has convolution kernels of size 16, with 32k kernels in total, where k starts at 1 and increases by 1 after every four residual structures; every other residual structure downsamples its input by a factor of 2, so the original input is ultimately downsampled by a factor of 2^4.
5. The method for diagnosing the fault of the converter transformer based on the deep convolutional neural network as claimed in claim 1 or 2, wherein the number of the residual error structures is 8.
6. The method for diagnosing the fault of the converter transformer based on the deep convolutional neural network as claimed in claim 3, wherein when training the model, the weights of the convolutional layers are initialized and an Adam optimizer is used to optimize the model parameters; when the loss-function value of the model on the validation set stops decreasing, the learning rate is reduced to 1/10 of its current value; the optimal model evaluated on the validation set is saved during optimization, finally yielding a converter transformer fault-diagnosis model with a convolutional structure of 17 hidden layers and an output layer that is a fully connected layer using a softmax activation function.
7. The method for diagnosing the fault of the converter transformer based on the deep convolutional neural network as claimed in claim 4, wherein shortcut-connection modules in a mode similar to ResNet are adopted to make the model easier to optimize and to improve the convergence speed of the network; when a residual structure downsamples its input, the corresponding shortcut-connection module also downsamples its input using a max-pooling operation with the same sampling rate.
8. The method for diagnosing the fault of the converter transformer based on the deep convolutional neural network as claimed in claim 4, wherein in the model-evaluation stage the composite index F-Score, a weighted harmonic mean of precision and recall, is adopted as the evaluation function, calculated as:
F-Score = (1 + α²)·P·R / (α²·P + R)
where α adjusts the relative weight of precision and recall, P is the precision and R is the recall, calculated respectively as:
P = TP / (TP + FP)
R = TP / (TP + FN)
when computing the metric for each category, that category is treated as "positive" and all other categories as "negative"; for each class, TP is the number of samples correctly judged to belong to the "positive" class, FP is the number of "negative"-class samples wrongly judged "positive", and FN is the number of "positive"-class samples wrongly judged "negative".
9. The method for diagnosing the fault of the converter transformer based on the deep convolutional neural network as claimed in claim 8, wherein α in the F-Score formula is assigned as 1.5 in the model training process.
10. The converter transformer fault diagnosis method based on the deep convolutional neural network as claimed in claim 4, wherein an equipment fingerprint is constructed based on parameters in a model when training is completed, and the equipment fingerprint is stored in a matrix form to construct a fingerprint database; in the actual fault diagnosis stage, whether a fault occurs at the current moment can be quickly and accurately diagnosed by utilizing the orthogonality of the device fingerprint and the monitoring data at the current moment; in the stage of updating the equipment fingerprint, the updating can be completed based on the existing equipment fingerprint and a small amount of new data, and the performance of model fault diagnosis is ensured.
CN201911120136.5A 2019-11-15 2019-11-15 Converter transformer fault diagnosis method based on deep convolutional neural network Pending CN110929847A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911120136.5A CN110929847A (en) 2019-11-15 2019-11-15 Converter transformer fault diagnosis method based on deep convolutional neural network


Publications (1)

Publication Number Publication Date
CN110929847A true CN110929847A (en) 2020-03-27

Family

ID=69853164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911120136.5A Pending CN110929847A (en) 2019-11-15 2019-11-15 Converter transformer fault diagnosis method based on deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN110929847A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111505424A (en) * 2020-05-06 2020-08-07 哈尔滨工业大学 Large experimental device power equipment fault diagnosis method based on deep convolutional neural network
CN111583592A (en) * 2020-05-06 2020-08-25 哈尔滨工业大学 Experimental environment safety early warning method based on multidimensional convolution neural network
CN111695289A (en) * 2020-05-13 2020-09-22 中国东方电气集团有限公司 Fault diagnosis method and platform of full-power converter
CN111796180A (en) * 2020-06-23 2020-10-20 广西电网有限责任公司电力科学研究院 Automatic identification method and device for mechanical fault of high-voltage switch
CN111931851A (en) * 2020-08-11 2020-11-13 辽宁工程技术大学 Fan blade icing fault diagnosis method based on one-dimensional residual error neural network
CN112163619A (en) * 2020-09-27 2021-01-01 北华大学 Transformer fault diagnosis method based on two-dimensional tensor
CN112329914A (en) * 2020-10-26 2021-02-05 华翔翔能科技股份有限公司 Fault diagnosis method and device for buried transformer substation and electronic equipment
CN112446326A (en) * 2020-11-26 2021-03-05 中国核动力研究设计院 Canned motor pump fault mode identification method and system based on deep rewinding and accumulating network
CN113361637A (en) * 2021-06-30 2021-09-07 杭州东方通信软件技术有限公司 Potential safety hazard identification method and device for base station room
CN113486965A (en) * 2021-07-14 2021-10-08 西南交通大学 Training method for abnormity identification model of vehicle network electric coupling data
CN113624466A (en) * 2021-07-08 2021-11-09 中南民族大学 Steam turbine rotor fault diagnosis method, device, equipment and storage medium
CN113822771A (en) * 2021-07-21 2021-12-21 广西电网有限责任公司 Low false detection rate electricity stealing detection method based on deep learning
CN113889198A (en) * 2021-09-24 2022-01-04 国网宁夏电力有限公司电力科学研究院 Transformer fault diagnosis method and equipment based on oil chromatogram time-frequency domain information and residual error attention network
CN114091549A (en) * 2021-09-28 2022-02-25 国网江苏省电力有限公司苏州供电分公司 Equipment fault diagnosis method based on deep residual error network
CN114615010A (en) * 2022-01-19 2022-06-10 上海电力大学 Design method of edge server-side intrusion prevention system based on deep learning
CN115294411A (en) * 2022-10-08 2022-11-04 国网浙江省电力有限公司 Power grid power transmission and transformation image data processing method based on neural network
CN116467662A (en) * 2023-03-24 2023-07-21 江苏邦鼎科技有限公司 Granulator fault identification method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108942409A (en) * 2018-08-26 2018-12-07 西北工业大学 The modeling and monitoring method of tool abrasion based on residual error convolutional neural networks
CN109512423A (en) * 2018-12-06 2019-03-26 杭州电子科技大学 A kind of myocardial ischemia Risk Stratification Methods based on determining study and deep learning
CN110163234A (en) * 2018-10-10 2019-08-23 腾讯科技(深圳)有限公司 A kind of model training method, device and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王峰等: "基于深度卷积神经网络的变压器故障诊断方法", 《广东电力》 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583592A (en) * 2020-05-06 2020-08-25 哈尔滨工业大学 Experimental environment safety early warning method based on multidimensional convolution neural network
CN111505424A (en) * 2020-05-06 2020-08-07 哈尔滨工业大学 Large experimental device power equipment fault diagnosis method based on deep convolutional neural network
CN111695289B (en) * 2020-05-13 2023-04-28 中国东方电气集团有限公司 Fault diagnosis method and platform for full-power converter
CN111695289A (en) * 2020-05-13 2020-09-22 中国东方电气集团有限公司 Fault diagnosis method and platform of full-power converter
CN111796180A (en) * 2020-06-23 2020-10-20 广西电网有限责任公司电力科学研究院 Automatic identification method and device for mechanical fault of high-voltage switch
CN111931851A (en) * 2020-08-11 2020-11-13 辽宁工程技术大学 Fan blade icing fault diagnosis method based on one-dimensional residual error neural network
CN112163619A (en) * 2020-09-27 2021-01-01 北华大学 Transformer fault diagnosis method based on two-dimensional tensor
CN112329914A (en) * 2020-10-26 2021-02-05 华翔翔能科技股份有限公司 Fault diagnosis method and device for buried transformer substation and electronic equipment
CN112329914B (en) * 2020-10-26 2024-02-02 华翔翔能科技股份有限公司 Fault diagnosis method and device for buried transformer substation and electronic equipment
CN112446326A (en) * 2020-11-26 2021-03-05 中国核动力研究设计院 Canned motor pump fault mode identification method and system based on deep rewinding and accumulating network
CN112446326B (en) * 2020-11-26 2022-04-01 中国核动力研究设计院 Canned motor pump fault mode identification method and system based on deep rewinding and accumulating network
CN113361637A (en) * 2021-06-30 2021-09-07 杭州东方通信软件技术有限公司 Potential safety hazard identification method and device for base station room
CN113624466A (en) * 2021-07-08 2021-11-09 中南民族大学 Steam turbine rotor fault diagnosis method, device, equipment and storage medium
CN113624466B (en) * 2021-07-08 2023-10-03 中南民族大学 Method, device, equipment and storage medium for diagnosing turbine rotor faults
CN113486965A (en) * 2021-07-14 2021-10-08 西南交通大学 Training method for abnormity identification model of vehicle network electric coupling data
CN113822771A (en) * 2021-07-21 2021-12-21 广西电网有限责任公司 Low false detection rate electricity stealing detection method based on deep learning
CN113889198A (en) * 2021-09-24 2022-01-04 国网宁夏电力有限公司电力科学研究院 Transformer fault diagnosis method and equipment based on oil chromatogram time-frequency domain information and residual error attention network
CN114091549A (en) * 2021-09-28 2022-02-25 国网江苏省电力有限公司苏州供电分公司 Equipment fault diagnosis method based on deep residual error network
CN114615010A (en) * 2022-01-19 2022-06-10 上海电力大学 Design method of edge server-side intrusion prevention system based on deep learning
CN114615010B (en) * 2022-01-19 2023-12-15 上海电力大学 Edge server-side intrusion prevention system design method based on deep learning
CN115294411B (en) * 2022-10-08 2022-12-30 国网浙江省电力有限公司 Power grid power transmission and transformation image data processing method based on neural network
CN115294411A (en) * 2022-10-08 2022-11-04 国网浙江省电力有限公司 Power grid power transmission and transformation image data processing method based on neural network
CN116467662A (en) * 2023-03-24 2023-07-21 江苏邦鼎科技有限公司 Granulator fault identification method and system
CN116467662B (en) * 2023-03-24 2023-10-13 江苏邦鼎科技有限公司 Granulator fault identification method and system

Similar Documents

Publication Publication Date Title
CN110929847A (en) Converter transformer fault diagnosis method based on deep convolutional neural network
CN113496262B (en) Data-driven active power distribution network abnormal state sensing method and system
CN109408389B (en) Code defect detection method and device based on deep learning
CN105930901B (en) A kind of Diagnosis Method of Transformer Faults based on RBPNN
CN110542819B (en) Transformer fault type diagnosis method based on semi-supervised DBNC
CN109142946A (en) Transformer fault detection method based on ant group algorithm optimization random forest
CN112557034B (en) Bearing fault diagnosis method based on PCA _ CNNS
CN114120041B (en) Small sample classification method based on double-countermeasure variable self-encoder
CN115270965A (en) Power distribution network line fault prediction method and device
CN111401599A (en) Water level prediction method based on similarity search and L STM neural network
CN112147432A (en) BiLSTM module based on attention mechanism, transformer state diagnosis method and system
CN112289391B (en) Anode aluminum foil performance prediction system based on machine learning
CN113191429A (en) Power transformer bushing fault diagnosis method and device
CN114925612A (en) Transformer fault diagnosis method for optimizing hybrid kernel extreme learning machine based on sparrow search algorithm
CN116842337A (en) Transformer fault diagnosis method based on LightGBM (gallium nitride based) optimal characteristics and COA-CNN (chip on board) model
CN112756759A (en) Spot welding robot workstation fault judgment method
CN114169091A (en) Method for establishing prediction model of residual life of engineering mechanical part and prediction method
CN117235565A (en) Transformer fault diagnosis model construction method and device
CN113379116A (en) Cluster and convolutional neural network-based line loss prediction method for transformer area
CN113884807B (en) Power distribution network fault prediction method based on random forest and multi-layer architecture clustering
CN114021758A (en) Operation and maintenance personnel intelligent recommendation method and device based on fusion of gradient lifting decision tree and logistic regression
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN116663414A (en) Fault diagnosis method and system for power transformer
CN116400168A (en) Power grid fault diagnosis method and system based on depth feature clustering
CN108898157B (en) Classification method for radar chart representation of numerical data based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200327