CN110163332B - Transformer fault diagnosis method - Google Patents

Transformer fault diagnosis method

Info

Publication number
CN110163332B
CN110163332B (application CN201910519783.7A)
Authority
CN
China
Prior art keywords
layer
neural network
neuron
value
representing
Prior art date
Legal status
Active
Application number
CN201910519783.7A
Other languages
Chinese (zh)
Other versions
CN110163332A (en)
Inventor
蒋辉 (Jiang Hui)
马胤刚 (Ma Yingang)
王巍 (Wang Wei)
孙鲜明 (Sun Xianming)
Current Assignee
Shenyang Seic Information Technology Co ltd
Original Assignee
Shenyang Seic Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenyang Seic Information Technology Co ltd filed Critical Shenyang Seic Information Technology Co ltd
Priority to CN201910519783.7A priority Critical patent/CN110163332B/en
Publication of CN110163332A publication Critical patent/CN110163332A/en
Application granted granted Critical
Publication of CN110163332B publication Critical patent/CN110163332B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Testing Electric Properties And Detecting Electric Faults (AREA)

Abstract

The invention discloses a transformer fault diagnosis method. By extracting 12 transformer fault diagnosis state quantities, the method jointly considers the thermal faults, electrical faults and aging degree of the transformer, predicting faults from a broader set of indicators and therefore more accurately. Dimension reduction removes the correlation among the parameters, and judging faults from parameters stripped of redundant information is more accurate. The weights are determined by training each layer as a single-layer neural network rather than assigned at random, which prevents vanishing gradients in the network. Compared with other intelligent methods, the method is objective and its predictions are more accurate.

Description

Transformer fault diagnosis method
Technical Field
The invention relates to the field of transformer fault diagnosis, and particularly provides a transformer fault diagnosis method.
Background
At present, transformer fault diagnosis methods mainly include ultrasonic fault detection, partial discharge detection, and analysis of gases dissolved in oil. Because dissolved-gas analysis is not affected by the strong electromagnetic environment of the transformer, intelligent algorithms built on it (such as the BP (back propagation) neural network, Bayesian algorithms and the SVM (support vector machine)) are widely used.
Therefore, there is an urgent need for a new transformer fault diagnosis method that reflects the fault degree of the transformer from more aspects, removes the correlation between the state quantities, deeply mines the nature of the state quantities, and accurately predicts the fault degree of the transformer.
Disclosure of Invention
In view of the above, the present invention aims to provide a transformer fault diagnosis method that addresses the problems of the prior art, namely that the detected state quantities are too simple, the existing intelligent algorithms cannot mine the data deeply, their gradients vanish during training, and they depend strongly on subjective choices.
The technical scheme provided by the invention is as follows: a transformer fault diagnosis method comprises the following steps:
s1: dividing the historical data of the transformer into a training set and a test set, wherein each historical record comprises a 12-dimensional transformer fault diagnosis state quantity: CH4, C2H6, C2H4, C2H2, CO, CO2, H2, breakdown voltage, dielectric loss coefficient, acid value, furfural and water content;
s2: training a first layer of neural network to obtain a weight and an offset of the updated first layer of neural network, wherein an input layer of the first layer of neural network comprises 12 neurons which respectively correspond to 12-dimensional transformer fault diagnosis state quantities, a hidden layer comprises 9 neurons, and an output layer comprises 12 neurons;
s3: training a second-layer neural network to obtain the updated weight and bias of the second-layer neural network, wherein an input layer of the second-layer neural network is a hidden layer of the first-layer neural network, the hidden layer of the second-layer neural network comprises 6 neurons, and an output layer comprises 9 neurons;
s4: training a third-layer neural network to obtain a weight and an offset of the updated third-layer neural network, wherein an input layer of the third-layer neural network is a hidden layer of the second-layer neural network, the hidden layer of the third-layer neural network comprises 3 neurons, an output layer comprises 6 neurons, and the 3 neurons of the hidden layer of the third-layer neural network represent the influence degrees of electrical faults, thermal faults and aging degrees on transformer faults respectively;
s5: taking the hidden layer of the third-layer neural network as the input layer of a fourth-layer neural network, wherein the output layer of the fourth-layer network comprises 4 neurons, the value of each neuron is calculated by formula (6), and the 4 neurons correspond respectively to the 4 states of the transformer: normal, first-order degradation, second-order degradation and third-order degradation;

$$y_g = \frac{\exp\left(\sum_{f=1}^{3} w_{fg}\, x_f + b_g\right)}{\sum_{g'=1}^{4} \exp\left(\sum_{f=1}^{3} w_{fg'}\, x_f + b_{g'}\right)} \qquad (6)$$

In the formula: y_g represents the value of the g-th neuron of the fourth-layer network's output layer, i.e. the probability that the fault category of the transformer sample is g; w_fg represents the weight between the input layer and the output layer of the fourth-layer network, initialized to a random number between 0 and 1; b_g is the bias of the g-th neuron of the fourth-layer network's output layer, initialized to a random number between 0 and 1; x_f represents the value of the f-th neuron of the fourth-layer network's input layer;
s6: establishing an integral network, wherein a first layer of neural network input layer is used as an input layer of the integral network, a hidden layer of the first layer of neural network is used as a first hidden layer of the integral network, a hidden layer of a second layer of neural network is used as a second hidden layer of the integral network, a hidden layer of a third layer of neural network is used as a third hidden layer of the integral network, and an output layer of a fourth layer of neural network is used as an output layer of the integral network;
s7: fine-tuning the weight and the offset value of the whole network by using the sample with the label in the training set to obtain a transformer fault diagnosis model;
s8: verifying the transformer fault diagnosis model obtained in the S7 by using the test set sample, and ensuring the accuracy of the transformer fault diagnosis model;
s9: and processing the sampled data received in real time by using the transformer fault diagnosis model to obtain a transformer fault diagnosis result.
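Read together, steps s2 through s6 describe greedy layer-wise pretraining of a stacked autoencoder topped with a 4-way softmax classifier. As a compact reference only, a sketch of the layer plan in Python; the constant names are illustrative and not from the patent:

```python
# Layer plan implied by s2-s6; the sizes come from the text, the names are ours.
AUTOENCODERS = [(12, 9), (9, 6), (6, 3)]  # (input size, hidden size) per pretrained layer
SOFTMAX_HEAD = (3, 4)                     # 3 learned features -> 4 transformer states
STATES = ["normal", "first-order degradation",
          "second-order degradation", "third-order degradation"]
```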
Preferably, the method for training the first-layer neural network in S2 is as follows:
s21: calculating the value of each neuron in the hidden layer of the first-layer neural network by formula (1)

$$h_i = f\left(\sum_{j=1}^{12} w_{ji}\, x_j + p_i\right) \qquad (1)$$

In the formula: h_i represents the value of the i-th neuron of the first-layer network's hidden layer; f(·) is the activation function; w_ji represents the weight between the j-th input neuron and the i-th hidden neuron, initialized to a random number between 0 and 1; p_i is the bias of the i-th hidden neuron, initialized to a random number between 0 and 1; x_j represents the value of the j-th input neuron, i.e. the j-th dimension of the transformer fault diagnosis state quantity;

s22: calculating the value of each neuron in the output layer of the first-layer neural network by formula (2)

$$y_k = f\left(\sum_{i=1}^{9} w'_{ik}\, h_i + q_k\right) \qquad (2)$$

In the formula: y_k represents the value of the k-th neuron of the first-layer network's output layer; w'_ik represents the weight between the i-th hidden neuron and the k-th output neuron, initialized to a random number between 0 and 1; q_k represents the bias of the k-th output neuron, initialized to a random number between 0 and 1; h_i represents the value of the i-th hidden neuron;
s23: constructing an error function J and updating the weights between the input layer and the hidden layer of the first-layer neural network, and the biases of the hidden layer, through formulas (4) and (5);

wherein the error function J is shown in formula (3),

$$J\left(w_{ji}, w'_{ik}, p_i, q_k\right) = \frac{1}{2N} \sum_{x \in s} \sum_{n=1}^{12} \left(x_n - y_n\right)^2 \qquad (3)$$

In the formula: J represents the error function; N represents the number of unlabeled samples in the training set; s represents the set of unlabeled sample data in the training set; x represents a data point in the set s; x_n represents the value of the n-th dimension of the data; y_n represents the value of the n-th dimension of the output after the network computation; w_ji represents the weight between the j-th input neuron and the i-th hidden neuron; w'_ik represents the weight between the i-th hidden neuron and the k-th output neuron; p_i is the bias of the i-th hidden neuron; q_k represents the bias of the k-th output neuron;

$$w_{ji} \leftarrow w_{ji} - \alpha \frac{\partial J}{\partial w_{ji}} \qquad (4)$$

In the formula: w_ji is the weight between the j-th input neuron and the i-th hidden neuron; α represents the learning rate; J represents the error function;

$$p_i \leftarrow p_i - \alpha \frac{\partial J}{\partial p_i} \qquad (5)$$

In the formula: p_i is the bias value of the i-th hidden neuron; α is the learning rate; J represents the error function;
s24: the weights of the hidden layer and the output layer of the first layer neural network and the bias of the output layer are updated in the same manner as S23.
Further preferably, the value of α in formulas (4) and (5) ranges from 0.05 to 0.2.
More preferably, α in formulas (4) and (5) takes the value 0.1.
Further preferably, in S7, the method for fine-tuning the weights and bias values of the entire network by using the samples with labels in the training set is as follows:
constructing the total loss function Q of the neural network as shown in formula (7), and then fine-tuning the weights and bias values of each layer of the network to obtain the fine-tuned weights and biases, thereby obtaining the transformer fault diagnosis model

$$Q = -\frac{1}{c} \sum_{i=1}^{c} \sum_{g=1}^{4} I\left(y^{(i)} = g\right) \log y_g \qquad (7)$$

In the formula: I(y^{(i)} = g) is the indicator function; y_g represents the value of the g-th neuron of the overall network's output layer; c represents the number of labeled samples in the training set.
According to the transformer fault diagnosis method provided by the invention, extracting 12 transformer fault diagnosis state quantities jointly covers the thermal faults, electrical faults and aging degree of the transformer, so that faults are predicted from a broader set of indicators and therefore more accurately; dimension reduction removes the correlation among the parameters, and judging faults from parameters stripped of redundant information is more accurate; the weights are determined by training each layer as a single-layer neural network rather than assigned at random, which prevents vanishing gradients in the network; compared with other intelligent methods, the method is objective and its predictions are more accurate.
Detailed Description
The invention will be further explained with reference to specific embodiments, without limiting the invention.
The invention provides a transformer fault diagnosis method, which comprises the following steps:
s1: dividing the historical data of the transformer into a training set and a test set, wherein each historical record comprises a 12-dimensional transformer fault diagnosis state quantity: CH4, C2H6, C2H4, C2H2, CO, CO2, H2, breakdown voltage, dielectric loss coefficient, acid value, furfural and water content;
These 12 state quantities strongly influence the electrical fault degree, thermal fault degree and aging degree of the transformer. In this embodiment, the selected training set contains 800 transformer fault samples and the test set contains 200, the 800 training samples comprising 200 labeled samples and 600 unlabeled samples.
S2: training a first layer of neural network to obtain a weight and an offset of the updated first layer of neural network, wherein an input layer of the first layer of neural network comprises 12 neurons which respectively correspond to 12-dimensional transformer fault diagnosis state quantities, a hidden layer comprises 9 neurons, and an output layer comprises 12 neurons;
the specific training method comprises the following steps:
s21: calculating the value of each neuron in the hidden layer of the first-layer neural network by formula (1)

$$h_i = f\left(\sum_{j=1}^{12} w_{ji}\, x_j + p_i\right) \qquad (1)$$

In the formula: h_i represents the value of the i-th neuron of the first-layer network's hidden layer; f(·) is the activation function; w_ji represents the weight between the j-th input neuron and the i-th hidden neuron, initialized to a random number between 0 and 1; p_i is the bias of the i-th hidden neuron, initialized to a random number between 0 and 1; x_j represents the value of the j-th input neuron, i.e. the j-th dimension of the transformer fault diagnosis state quantity;

s22: calculating the value of each neuron in the output layer of the first-layer neural network by formula (2)

$$y_k = f\left(\sum_{i=1}^{9} w'_{ik}\, h_i + q_k\right) \qquad (2)$$

In the formula: y_k represents the value of the k-th neuron of the first-layer network's output layer; w'_ik represents the weight between the i-th hidden neuron and the k-th output neuron, initialized to a random number between 0 and 1; q_k represents the bias of the k-th output neuron, initialized to a random number between 0 and 1; h_i represents the value of the i-th hidden neuron;
s23: constructing an error function J and updating the weights between the input layer and the hidden layer of the first-layer neural network, and the biases of the hidden layer, through formulas (4) and (5);

wherein the error function J is shown in formula (3),

$$J\left(w_{ji}, w'_{ik}, p_i, q_k\right) = \frac{1}{2N} \sum_{x \in s} \sum_{n=1}^{12} \left(x_n - y_n\right)^2 \qquad (3)$$

In the formula: J represents the error function; N represents the number of unlabeled samples in the training set; s represents the set of unlabeled sample data in the training set; x represents a data point in the set s; x_n represents the value of the n-th dimension of the data; y_n represents the value of the n-th dimension of the output after the network computation; w_ji represents the weight between the j-th input neuron and the i-th hidden neuron; w'_ik represents the weight between the i-th hidden neuron and the k-th output neuron; p_i is the bias of the i-th hidden neuron; q_k represents the bias of the k-th output neuron;

$$w_{ji} \leftarrow w_{ji} - \alpha \frac{\partial J}{\partial w_{ji}} \qquad (4)$$

In the formula: w_ji is the weight between the j-th input neuron and the i-th hidden neuron; α represents the learning rate, with a value range of 0.05 to 0.2, preferably 0.1; J represents the error function;

$$p_i \leftarrow p_i - \alpha \frac{\partial J}{\partial p_i} \qquad (5)$$

In the formula: p_i is the bias value of the i-th hidden neuron; α is the learning rate, with a value range of 0.05 to 0.2, preferably 0.1; J represents the error function;

s24: updating the weights between the hidden layer and the output layer of the first-layer neural network and the biases of the output layer in the same way as s23;
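In code, s21 to s24 amount to training one autoencoder by gradient descent on the reconstruction error J. A minimal numpy sketch follows, under the assumption (not stated in the patent, whose equation images are unavailable) that the activation f is the logistic sigmoid; all names are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, alpha=0.1, epochs=200, seed=0):
    """Pretrain one layer on unlabeled data X of shape (N, n_in) (steps s21-s24)."""
    rng = np.random.default_rng(seed)
    N, n_in = X.shape
    W1 = rng.random((n_in, n_hidden))  # w_ji, random in [0, 1) as in s21
    p = rng.random(n_hidden)           # hidden biases p_i
    W2 = rng.random((n_hidden, n_in))  # w'_ik, hidden -> output
    q = rng.random(n_in)               # output biases q_k
    for _ in range(epochs):
        H = sigmoid(X @ W1 + p)        # formula (1)
        Y = sigmoid(H @ W2 + q)        # formula (2)
        # J = 1/(2N) * sum over samples and dimensions of (x_n - y_n)^2, formula (3)
        dY = (Y - X) * Y * (1.0 - Y) / N  # dJ w.r.t. output pre-activations
        dH = (dY @ W2.T) * H * (1.0 - H)  # backpropagated to hidden pre-activations
        W2 -= alpha * (H.T @ dY)          # gradient steps: formulas (4) and (5)
        q -= alpha * dY.sum(axis=0)
        W1 -= alpha * (X.T @ dH)
        p -= alpha * dH.sum(axis=0)
    return W1, p
```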
s3: training a second-layer neural network to obtain the updated weight and bias of the second-layer neural network, wherein an input layer of the second-layer neural network is a hidden layer of the first-layer neural network, the hidden layer of the second-layer neural network comprises 6 neurons, an output layer of the second-layer neural network comprises 9 neurons, and the method for training the second-layer neural network is the same as the method for training the first-layer neural network;
s4: training a third-layer neural network to obtain its updated weights and biases, wherein the input layer of the third-layer network is the hidden layer of the second-layer network, its hidden layer comprises 3 neurons and its output layer comprises 6 neurons; the 3 hidden neurons represent respectively the degrees of influence of electrical faults, thermal faults and aging on transformer faults, and because they are linearly independent they predict the fault degree of the transformer better; the method for training the third-layer network is the same as that for the first-layer network;
s5: taking the hidden layer of the third-layer neural network as the input layer of a fourth-layer neural network, wherein the output layer of the fourth-layer network comprises 4 neurons, the value of each neuron is calculated by formula (6), and the 4 neurons correspond respectively to the 4 states of the transformer: normal, first-order degradation, second-order degradation and third-order degradation;

$$y_g = \frac{\exp\left(\sum_{f=1}^{3} w_{fg}\, x_f + b_g\right)}{\sum_{g'=1}^{4} \exp\left(\sum_{f=1}^{3} w_{fg'}\, x_f + b_{g'}\right)} \qquad (6)$$

In the formula: y_g represents the value of the g-th neuron of the fourth-layer network's output layer, i.e. the probability that the fault category of the transformer sample is g; w_fg represents the weight between the input layer and the output layer of the fourth-layer network, initialized to a random number between 0 and 1; b_g is the bias of the g-th neuron of the fourth-layer network's output layer, initialized to a random number between 0 and 1; x_f represents the value of the f-th neuron of the fourth-layer network's input layer;
s6: establishing an integral network, wherein a first layer of neural network input layer is used as an input layer of the integral network, a hidden layer of the first layer of neural network is used as a first hidden layer of the integral network, a hidden layer of a second layer of neural network is used as a second hidden layer of the integral network, a hidden layer of a third layer of neural network is used as a third hidden layer of the integral network, and an output layer of a fourth layer of neural network is used as an output layer of the integral network;
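The assembled forward pass of s5 and s6 can be sketched as below; the softmax form of formula (6) is inferred from the output being a class probability, and the hypothetical assembly comments reuse train_autoencoder from the previous sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift rows for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(X, params):
    """Forward pass of the whole network of s6: 12 -> 9 -> 6 -> 3 -> 4."""
    (W1, p1), (W2, p2), (W3, p3), (Wg, bg) = params
    h1 = sigmoid(X @ W1 + p1)     # first hidden layer (pretrained in s2)
    h2 = sigmoid(h1 @ W2 + p2)    # second hidden layer (s3)
    h3 = sigmoid(h2 @ W3 + p3)    # electrical / thermal / aging features (s4)
    return softmax(h3 @ Wg + bg)  # formula (6): probabilities of the 4 states

# Hypothetical assembly using train_autoencoder from the previous sketch:
# W1, p1 = train_autoencoder(X_unlabeled, 9)
# h1 = sigmoid(X_unlabeled @ W1 + p1)
# W2, p2 = train_autoencoder(h1, 6)
# W3, p3 = train_autoencoder(sigmoid(h1 @ W2 + p2), 3)
# rng = np.random.default_rng(0)
# Wg, bg = rng.random((3, 4)), rng.random(4)  # random init of the s5 softmax head
# params = [(W1, p1), (W2, p2), (W3, p3), (Wg, bg)]
```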
s7: utilizing a sample with a label in a training set to carry out fine adjustment on the weight and the offset value of the whole network to obtain a transformer fault diagnosis model, wherein the fine adjustment method comprises the following steps:
constructing the total loss function Q of the neural network as shown in formula (7), and then fine-tuning the weights and bias values of each layer of the network to obtain the fine-tuned weights and biases, thereby obtaining the transformer fault diagnosis model

$$Q = -\frac{1}{c} \sum_{i=1}^{c} \sum_{g=1}^{4} I\left(y^{(i)} = g\right) \log y_g \qquad (7)$$

In the formula: I(y^{(i)} = g) is the indicator function; y_g represents the value of the g-th neuron of the overall network's output layer; c represents the number of labeled samples in the training set;

wherein the weights and bias values of the whole network are updated from the total loss function, in a manner similar to s2, to obtain the fine-tuned weights and biases. When fine-tuning the weights between the input layer and the first hidden layer of the whole network, and the biases of the first hidden layer, the function J is replaced by the function Q, w_ij denotes the weights between the input layer and the first hidden layer, p_i denotes the biases of the first hidden layer, and the updates follow formulas (4) and (5); when fine-tuning the weights between the first and second hidden layers, and the biases of the second hidden layer, J is replaced by Q, w_ij denotes the weights between the first and second hidden layers, p_i denotes the biases of the second hidden layer, and the updates follow formulas (4) and (5); when fine-tuning the weights between the second and third hidden layers, and the biases of the third hidden layer, J is replaced by Q, w_ij denotes the weights between the second and third hidden layers, p_i denotes the biases of the third hidden layer, and the updates follow formulas (4) and (5); when fine-tuning the weights between the third hidden layer and the output layer, and the biases of the output layer, J is replaced by Q, w_ij denotes the weights between the third hidden layer and the output layer, p_i denotes the biases of the output layer, and the updates follow formulas (4) and (5). The index ranges of i and j change with the number of neurons in each layer; once the weights and biases of the whole network have been updated, the transformer fault diagnosis model is obtained;
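Concretely, this fine-tuning is ordinary backpropagation of Q through the whole stack. A one-step sketch under the same sigmoid/softmax assumptions as above, reusing sigmoid, softmax and params from the previous sketches:

```python
import numpy as np

def finetune_step(X, y, params, alpha=0.1):
    """One gradient step of s7 on labeled samples; y holds class indices 0..3."""
    (W1, p1), (W2, p2), (W3, p3), (Wg, bg) = params
    h1 = sigmoid(X @ W1 + p1)
    h2 = sigmoid(h1 @ W2 + p2)
    h3 = sigmoid(h2 @ W3 + p3)
    probs = softmax(h3 @ Wg + bg)
    c = X.shape[0]
    T = np.eye(4)[y]  # one-hot rows encode the indicator I(y_i = g)
    # Q = -(1/c) * sum_i sum_g I(y_i = g) * log probs[i, g], formula (7)
    d4 = (probs - T) / c                # gradient at the softmax pre-activations
    d3 = (d4 @ Wg.T) * h3 * (1.0 - h3)
    d2 = (d3 @ W3.T) * h2 * (1.0 - h2)
    d1 = (d2 @ W2.T) * h1 * (1.0 - h1)
    # Formulas (4) and (5) with J replaced by Q, applied layer by layer (in place):
    Wg -= alpha * (h3.T @ d4); bg -= alpha * d4.sum(axis=0)
    W3 -= alpha * (h2.T @ d3); p3 -= alpha * d3.sum(axis=0)
    W2 -= alpha * (h1.T @ d2); p2 -= alpha * d2.sum(axis=0)
    W1 -= alpha * (X.T @ d1);  p1 -= alpha * d1.sum(axis=0)

# e.g.: for _ in range(500): finetune_step(X_labeled, y_labeled, params)
```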
s8: verifying the transformer fault diagnosis model obtained in the S7 by using the test set sample, and ensuring the accuracy of the transformer fault diagnosis model;
s9: and processing the sampled data received in real time by using the transformer fault diagnosis model to obtain a transformer fault diagnosis result.
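A hypothetical end-to-end usage of s8 and s9, tying together the names introduced in the sketches above (X_test, y_test, params, forward, STATES are ours, not the patent's):

```python
import numpy as np

# s8: verify on the held-out test set from the s1 sketch.
pred = forward(X_test, params).argmax(axis=1)
print("test accuracy:", float((pred == y_test).mean()))

# s9: diagnose a freshly sampled 12-dimensional state vector
# (a random stand-in here; in practice, the measured gas and oil quantities).
x_live = np.random.default_rng(1).random((1, 12))
print("diagnosis:", STATES[int(forward(x_live, params).argmax())])
```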
The transformer fault diagnosis method extracts 12 transformer fault diagnosis state quantities and thereby jointly considers the thermal faults, electrical faults and aging degree of the transformer, so that faults are predicted from a broader set of indicators and therefore more accurately; dimension reduction removes the correlation among the parameters, and judging faults from parameters stripped of redundant information is more accurate; the weights are determined by training each layer as a single-layer neural network rather than assigned at random, which prevents vanishing gradients in the network; compared with other intelligent methods, the method is objective and its predictions are more accurate.
The embodiments of the present invention are described in a progressive manner, each emphasizing its differences from the others; for the parts they share, the embodiments may be referred to one another.
While the embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.

Claims (3)

1. A transformer fault diagnosis method is characterized by comprising the following steps:
s1: dividing the historical data of the transformer into a training set and a test set, wherein each historical record comprises a 12-dimensional transformer fault diagnosis state quantity: CH4, C2H6, C2H4, C2H2, CO, CO2, H2, breakdown voltage, dielectric loss coefficient, acid value, furfural and water content;
s2: training a first layer of neural network to obtain a weight and an offset of the updated first layer of neural network, wherein an input layer of the first layer of neural network comprises 12 neurons which respectively correspond to 12-dimensional transformer fault diagnosis state quantities, a hidden layer comprises 9 neurons, and an output layer comprises 12 neurons;
s3: training a second-layer neural network to obtain the updated weight and bias of the second-layer neural network, wherein an input layer of the second-layer neural network is a hidden layer of the first-layer neural network, the hidden layer of the second-layer neural network comprises 6 neurons, and an output layer comprises 9 neurons;
s4: training a third-layer neural network to obtain a weight and an offset of the updated third-layer neural network, wherein an input layer of the third-layer neural network is a hidden layer of the second-layer neural network, the hidden layer of the third-layer neural network comprises 3 neurons, an output layer comprises 6 neurons, and the 3 neurons of the hidden layer of the third-layer neural network represent the influence degrees of electrical faults, thermal faults and aging degrees on transformer faults respectively;
s5: taking the hidden layer of the third-layer neural network as the input layer of a fourth-layer neural network, wherein the output layer of the fourth-layer network comprises 4 neurons, the value of each neuron is calculated by formula (6), and the 4 neurons correspond respectively to the 4 states of the transformer: normal, first-order degradation, second-order degradation and third-order degradation;

$$y_g = \frac{\exp\left(\sum_{f=1}^{3} w_{fg}\, x_f + b_g\right)}{\sum_{g'=1}^{4} \exp\left(\sum_{f=1}^{3} w_{fg'}\, x_f + b_{g'}\right)} \qquad (6)$$

In the formula: y_g represents the value of the g-th neuron of the fourth-layer network's output layer, i.e. the probability that the fault category of the transformer sample is g; w_fg represents the weight between the input layer and the output layer of the fourth-layer network, initialized to a random number between 0 and 1; b_g is the bias of the g-th neuron of the fourth-layer network's output layer, initialized to a random number between 0 and 1; x_f represents the value of the f-th neuron of the fourth-layer network's input layer;
s6: establishing an integral network, wherein a first layer of neural network input layer is used as an input layer of the integral network, a hidden layer of the first layer of neural network is used as a first hidden layer of the integral network, a hidden layer of a second layer of neural network is used as a second hidden layer of the integral network, a hidden layer of a third layer of neural network is used as a third hidden layer of the integral network, and an output layer of a fourth layer of neural network is used as an output layer of the integral network;
s7: fine-tuning the weight and the offset value of the whole network by using the sample with the label in the training set to obtain a transformer fault diagnosis model;
s8: verifying the transformer fault diagnosis model obtained in the S7 by using the test set sample, and ensuring the accuracy of the transformer fault diagnosis model;
s9: processing the sampled data received in real time by using a transformer fault diagnosis model to obtain a transformer fault diagnosis result;
the method for training the first-layer neural network in S2 is as follows:
s21: calculating the value of each neuron in the hidden layer of the first layer neural network by formula (1)
$$h_i = f\left(\sum_{j=1}^{12} w_{ji}\, x_j + p_i\right) \qquad (1)$$

In the formula: h_i represents the value of the i-th neuron of the first-layer network's hidden layer; f(·) is the activation function; w_ji represents the weight between the j-th input neuron and the i-th hidden neuron, initialized to a random number between 0 and 1; p_i is the bias of the i-th hidden neuron, initialized to a random number between 0 and 1; x_j represents the value of the j-th input neuron, i.e. the j-th dimension of the transformer fault diagnosis state quantity;

s22: calculating the value of each neuron in the output layer of the first-layer neural network by formula (2)

$$y_k = f\left(\sum_{i=1}^{9} w'_{ik}\, h_i + q_k\right) \qquad (2)$$

In the formula: y_k represents the value of the k-th neuron of the first-layer network's output layer; w'_ik represents the weight between the i-th hidden neuron and the k-th output neuron, initialized to a random number between 0 and 1; q_k represents the bias of the k-th output neuron, initialized to a random number between 0 and 1; h_i represents the value of the i-th hidden neuron;
s23: constructing an error function J, and updating the weight values of the input layer and the hidden layer of the first layer of neural network and the bias value of the hidden layer through a formula (4) and a formula (5);
wherein the error function J is shown in formula (3),
$$J\left(w_{ji}, w'_{ik}, p_i, q_k\right) = \frac{1}{2N} \sum_{x \in s} \sum_{n=1}^{12} \left(x_n - y_n\right)^2 \qquad (3)$$

In the formula: J represents the error function; N represents the number of unlabeled samples in the training set; s represents the set of unlabeled sample data in the training set; x represents a data point in the set s; x_n represents the value of the n-th dimension of the data; y_n represents the value of the n-th dimension of the output after the network computation; w_ji represents the weight between the j-th input neuron and the i-th hidden neuron; w'_ik represents the weight between the i-th hidden neuron and the k-th output neuron; p_i is the bias of the i-th hidden neuron; q_k represents the bias of the k-th output neuron;

$$w_{ji} \leftarrow w_{ji} - \alpha \frac{\partial J}{\partial w_{ji}} \qquad (4)$$

In the formula: w_ji is the weight between the j-th input neuron and the i-th hidden neuron; α represents the learning rate; J represents the error function;

$$p_i \leftarrow p_i - \alpha \frac{\partial J}{\partial p_i} \qquad (5)$$

In the formula: p_i is the bias value of the i-th hidden neuron; α is the learning rate; J represents the error function;
s24: updating the weights between the hidden layer and the output layer of the first-layer neural network and the biases of the output layer in the same way as s23;
in S7, the method for fine-tuning the weight and offset of the entire network by using the samples with labels in the training set is as follows:
constructing a total loss function Q of the neural network as shown in a formula (7), and then finely adjusting the weight and the offset value of each layer of the neural network to obtain the finely adjusted weight and offset value, thereby obtaining a transformer fault diagnosis model
$$Q = -\frac{1}{c} \sum_{i=1}^{c} \sum_{g=1}^{4} I\left(y^{(i)} = g\right) \log y_g \qquad (7)$$

In the formula: I(y^{(i)} = g) is the indicator function; y_g represents the value of the g-th neuron of the overall network's output layer; c represents the number of labeled samples in the training set.
2. The transformer fault diagnosis method according to claim 1, characterized in that: the value of α in formulas (4) and (5) ranges from 0.05 to 0.2.
3. The transformer fault diagnosis method according to claim 2, characterized in that: α in formulas (4) and (5) takes the value 0.1.
CN201910519783.7A 2019-06-17 2019-06-17 Transformer fault diagnosis method Active CN110163332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910519783.7A CN110163332B (en) 2019-06-17 2019-06-17 Transformer fault diagnosis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910519783.7A CN110163332B (en) 2019-06-17 2019-06-17 Transformer fault diagnosis method

Publications (2)

Publication Number Publication Date
CN110163332A (en) 2019-08-23
CN110163332B (en) 2021-08-06

Family

ID=67625789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910519783.7A Active CN110163332B (en) 2019-06-17 2019-06-17 Transformer fault diagnosis method

Country Status (1)

Country Link
CN (1) CN110163332B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111678679A (en) * 2020-05-06 2020-09-18 内蒙古电力(集团)有限责任公司电力调度控制分公司 Circuit breaker fault diagnosis method based on PCA-BPNN
CN113516310B (en) * 2021-07-12 2022-04-19 内蒙古电力(集团)有限责任公司内蒙古电力科学研究院分公司 Transformer fault early warning method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576061A (en) * 2013-10-17 2014-02-12 国家电网公司 Method for discharge fault diagnosis of transformer
CN103914735A (en) * 2014-04-17 2014-07-09 北京泰乐德信息技术有限公司 Failure recognition method and system based on neural network self-learning
CN109632309A (en) * 2019-01-17 2019-04-16 燕山大学 Based on the rolling bearing fault intelligent diagnosing method for improving S-transformation and deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3117512A1 (en) * 2014-05-12 2017-01-18 Siemens Aktiengesellschaft Fault level estimation method for power converters

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576061A (en) * 2013-10-17 2014-02-12 国家电网公司 Method for discharge fault diagnosis of transformer
CN103914735A (en) * 2014-04-17 2014-07-09 北京泰乐德信息技术有限公司 Failure recognition method and system based on neural network self-learning
CN109632309A (en) * 2019-01-17 2019-04-16 燕山大学 Based on the rolling bearing fault intelligent diagnosing method for improving S-transformation and deep learning

Also Published As

Publication number Publication date
CN110163332A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN110083951B (en) Solid insulation life prediction method based on relevant operation data of transformer
CN115081592B (en) Highway low-visibility prediction method based on genetic algorithm and feedforward neural network
CN110163332B (en) Transformer fault diagnosis method
CN110334865B (en) Power equipment fault rate prediction method and system based on convolutional neural network
CN112115636B (en) Advanced prediction method and system for insulation aging life of power cable
CN110794308B (en) Method and device for predicting train battery capacity
Liu et al. Combined forecasting method of dissolved gases concentration and its application in condition-based maintenance
Li et al. A k-nearest neighbor locally weighted regression method for short-term traffic flow forecasting
CN112215279B (en) Power grid fault diagnosis method based on immune RBF neural network
CN114266289A (en) Complex equipment health state assessment method
CN111860971B (en) Method and device for predicting residual life of turnout switch machine
CN110941902A (en) Lightning stroke fault early warning method and system for power transmission line
CN114595883B (en) Individualized dynamic prediction method for residual life of oil immersed transformer based on meta learning
CN113459867A (en) Electric vehicle charging process fault early warning method based on adaptive deep confidence network
CN113782113B (en) Method for identifying gas fault in transformer oil based on deep residual error network
CN114818817A (en) Weak fault recognition system and method for capacitive voltage transformer
CN113791351B (en) Lithium battery life prediction method based on transfer learning and difference probability distribution
CN111680712A (en) Transformer oil temperature prediction method, device and system based on similar moments in the day
CN113298296A (en) Method for predicting day-ahead load probability of power transmission substation from bottom to top
CN117574835A (en) Method, equipment and storage medium for simulating Nand false LLR distribution
CN117077052A (en) Dry-type transformer abnormality detection method based on working condition identification
CN111967593A (en) Method and system for processing abnormal data based on modeling
CN114444798A (en) Contact network operation and maintenance method and device based on space-time distribution technology
CN110222606B (en) Early failure prediction method of electronic system based on tree search extreme learning machine
CN113673766B (en) Method for predicting gas content in oil of oil-filled electrical equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Jiang Hui

Inventor after: Ma Yingang

Inventor after: Wang Wei

Inventor after: Sun Xianming

Inventor before: Ma Yingang

Inventor before: Jiang Hui

Inventor before: Wang Wei

Inventor before: Sun Xianming

GR01 Patent grant