CN110969262A - Transformer fault diagnosis method - Google Patents
- Publication number: CN110969262A (application CN201911220444.5A)
- Authority: CN (China)
- Prior art keywords: kernel; learning machine; fault diagnosis; particle; transformer
- Legal status: Pending (an assumption by Google, not a legal conclusion)
Classifications
- G06N20/00 — Machine learning
- G01R31/00 — Arrangements for testing electric properties; arrangements for locating electric faults
- G06N3/006 — Artificial life based on simulated virtual individual or collective life forms, e.g. particle swarm optimisation [PSO]
Abstract
The invention relates to a transformer fault diagnosis method comprising the following steps: S1, obtain sample data of dissolved-gas concentrations in transformer oil together with the corresponding fault conclusions, and preprocess the data to generate a training sample set and a test sample set; S2, establish a prediction model of the kernel extreme learning machine (KELM) from the generated training sample set; S3, optimize the kernel function parameter and penalty coefficient of the KELM with the crisscross optimization algorithm during model training; and S4, input the test samples into the trained KELM to obtain the transformer fault diagnosis result. The method addresses the difficulties of encoding transformer fault data and of selecting KELM parameters, avoids the local-optimum problem of the traditional BP neural network, and can be applied to scientific research and engineering in the transformer field, with fast recognition, a high recognition rate, and greatly improved diagnosis accuracy.
Description
Technical Field
The invention relates to the technical field of transformer fault diagnosis, in particular to a transformer fault diagnosis method.
Background
The power transformer is the most important transmission and transformation equipment in a power system and also one of its most accident-prone components; its running state directly affects the safety and stability of system operation. Ensuring the safe operation of transformers has therefore received wide attention worldwide. Performing periodic preventive maintenance, monitoring the actual operating condition of high-voltage equipment in real time, detecting and diagnosing latent faults or defects, improving the diagnosis level, carrying out targeted maintenance, and predicting faults early so as to avoid serious accidents are all of practical significance. In addition, much domestic and foreign experience shows that fault diagnosis yields clear economic benefits: surveys in Japan and the United Kingdom indicate that diagnosis technology saves far more in annual maintenance cost than it costs to apply. Timely and effective fault diagnosis during operation therefore makes it possible to judge the state of a power transformer and to keep it running safely and reliably over the long term; whether measured by importance or by economic benefit, it is significant for the safe operation of the power system.
Traditional diagnosis methods, summarized from long-term research and transformer fault diagnosis practice, judge the fault type directly or through simple ratio calculations; they mainly include the characteristic-gas method, the coded-ratio method, and the non-coded-ratio method, but in practical use they expose defects such as incomplete coding and overly absolute coding boundaries. In recent years, artificial intelligence methods such as the BP neural network have been widely applied to transformer fault diagnosis, but the BP algorithm uses gradient descent, so training is slow, it easily falls into local minima, and it is very sensitive to the learning rate, leading to a low transformer fault recognition rate.
Disclosure of Invention
To solve the problems of low speed, overly absolute results, and a low fault recognition rate in the prior art, the invention provides a transformer fault diagnosis method based on a kernel extreme learning machine (KELM) optimized by the crisscross optimization algorithm. It addresses the difficulties of encoding transformer fault data and of selecting KELM parameters, avoids the local-optimum problem of the traditional BP neural network, and can be applied to scientific research and engineering in the transformer field, with fast recognition, a high recognition rate, and greatly improved diagnosis accuracy.
In order to solve the technical problems, the invention provides the following technical scheme:
A transformer fault diagnosis method comprises the following steps:
S1, obtain sample data of dissolved-gas concentrations in transformer oil together with the corresponding fault conclusions, and preprocess the data to generate a training sample set and a test sample set;
S2, establish a prediction model of the kernel extreme learning machine from the generated training sample set;
S3, optimize the kernel function parameter and penalty coefficient of the kernel extreme learning machine with the crisscross optimization algorithm during model training;
and S4, input the test samples into the trained kernel extreme learning machine to obtain the transformer fault diagnosis result.
Further, in step S1, the characteristic gases considered for the dissolved gas in oil in the sample data include methane (CH4), ethane (C2H6), ethylene (C2H4), acetylene (C2H2), hydrogen (H2), carbon dioxide (CO2), and carbon monoxide (CO); using many characteristic gases gives high precision.
Further, in step S1, the fault types of the transformer are classified into the following six states according to the characteristic-gas concentrations: high-temperature overheating (T2), high-energy discharge (D2), low-energy discharge (D1), medium/low-temperature overheating (T1), partial discharge (PD), and normal condition (NC). The corresponding output codes are 100000, 010000, 001000, 000100, 000010, and 000001, respectively, which gives higher precision.
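The six-state output coding above can be sketched as a simple lookup table (a hypothetical Python illustration; the table and function names are not part of the patent):

```python
# Hypothetical sketch of the six-state fault coding described in the patent.
FAULT_CODES = {
    "T2": [1, 0, 0, 0, 0, 0],  # high-temperature overheating
    "D2": [0, 1, 0, 0, 0, 0],  # high-energy discharge
    "D1": [0, 0, 1, 0, 0, 0],  # low-energy discharge
    "T1": [0, 0, 0, 1, 0, 0],  # medium/low-temperature overheating
    "PD": [0, 0, 0, 0, 1, 0],  # partial discharge
    "NC": [0, 0, 0, 0, 0, 1],  # normal condition
}

def encode_fault(label):
    """Map a fault label to its one-hot output code."""
    return FAULT_CODES[label]
```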
Further, in step S1, the sample data are divided into a training sample set and a test sample set. In the training sample set Tr_n, each sample takes the concentrations of the seven characteristic gases above as input, denoted x_n = (x_n1, x_n2, …, x_nm), where m is the number of inputs of the prediction model, and takes the corresponding fault type as output, denoted y_n, where l is determined by the number of outputs of the prediction model and n indexes the nth sample in the sample set; this improves diagnosis precision.
Further, m = 7 and l = 1, which makes the result more accurate.
Further, in step S1, the test sample set Te_n is selected in the same way as the training sample set Tr_n, which reduces the amount of calculation.
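A minimal sketch of the S1 preprocessing, assuming min-max normalization and a random train/test split (the patent states only that both sets are selected the same way; the split ratio, seed, and function names here are illustrative assumptions):

```python
import numpy as np

def preprocess(samples, labels, train_ratio=0.8, seed=0):
    """Min-max normalise 7-gas concentration vectors and split into
    training and test sets. The 80/20 ratio and random split are
    assumptions; the patent does not specify the selection scheme."""
    X = np.asarray(samples, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Scale each gas concentration into [0, 1], guarding constant columns.
    Xn = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Xn))
    cut = int(train_ratio * len(Xn))
    y = np.asarray(labels)
    return (Xn[idx[:cut]], y[idx[:cut]]), (Xn[idx[cut:]], y[idx[cut:]])
```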
Further, in step S2, the specific steps for establishing the prediction model of the kernel extreme learning machine are as follows:
S2.1, the regression function of the extreme learning machine and the connection weight between the hidden layer and the output layer are:
f(x) = h(x)β = h(x)H^T (I/C + HH^T)^(-1) T
where x is the sample input, f(x) is the network output, h(x) and H are the hidden-layer feature mapping matrix (a random mapping), β is the connection weight between the hidden layer and the output layer, calculated according to generalized-inverse matrix theory as β = H^T (I/C + HH^T)^(-1) T, I is the identity matrix, C is the penalty coefficient, and T is the vector of sample target values;
S2.2, the kernel matrix of the kernel extreme learning machine is defined as:
Ω_ELM = HH^T, with element Ω_ELM(i, j) = h(x_i)·h(x_j) = K(x_i, x_j)
where x_i and x_j are sample input vectors, i and j are positive integers in the range 1 to N, and K(x_i, x_j) is the kernel function. The kernel function of the kernel extreme learning machine is chosen as the radial basis (RBF) kernel:
K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²))
where ||x_i − x_j|| is the Euclidean norm between samples and σ is the kernel function parameter;
The output of the kernel extreme learning machine, with the connection weight between the hidden layer and the output layer substituted in, is then:
f(x) = [K(x, x_1), …, K(x, x_N)] (I/C + Ω_ELM)^(-1) T
where N is the number of sample input vectors; this makes the result more accurate.
Further, in step S2, the kernel extreme learning machine has two variables to optimize: the penalty coefficient C and the kernel function parameter σ, which makes the result more accurate.
Furthermore, the value ranges of the penalty coefficient C and the kernel function parameter σ are [0.01, 1000] and [0.001, 100], respectively, which makes the result more accurate.
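The KELM construction of S2.1 and S2.2 admits a short closed-form implementation. The sketch below assumes the standard KELM solution f(x) = [K(x, x_1), …, K(x, x_N)](I/C + Ω)⁻¹T with the RBF kernel; the class and function names are illustrative, not the patented implementation itself:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)) for all pairs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class KELM:
    """Minimal kernel extreme learning machine: alpha = (I/C + Omega)^-1 T."""
    def __init__(self, C=100.0, sigma=1.0):
        self.C, self.sigma = C, sigma

    def fit(self, X, T):
        self.X = np.asarray(X, float)
        omega = rbf_kernel(self.X, self.X, self.sigma)   # kernel matrix
        n = len(self.X)
        # Closed-form solution; no iterative training is needed.
        self.alpha = np.linalg.solve(np.eye(n) / self.C + omega,
                                     np.asarray(T, float))
        return self

    def predict(self, Xq):
        return rbf_kernel(np.asarray(Xq, float), self.X, self.sigma) @ self.alpha
```

With a large penalty coefficient C the model interpolates the training targets closely, which is why C must be tuned against overfitting.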
Further, in step S3, the specific steps for optimizing the penalty coefficient and kernel function parameter of the kernel extreme learning machine with the crisscross optimization algorithm are as follows:
S3.1, initialize parameters: set the population size M of the crisscross algorithm to 20, the maximum number of iterations Tmax to 100, and the vertical crossover probability Pv to 0.8;
S3.2, randomly generate a group of particles as the initial C and σ of the kernel extreme learning machine, and establish the diagnosis model of the kernel extreme learning machine optimized by the crisscross algorithm:
F_i = [C_i, σ_i], i = 1, 2, …, M
where C is the penalty coefficient, σ is the kernel function parameter, and M is the population size, giving M particles in total;
Set the iteration count t to 1, convert each initial population particle of the crisscross algorithm into a penalty coefficient and a kernel function parameter of the kernel extreme learning machine, perform model training, and calculate the training error, which serves as the particle's fitness value:
fitness = (1/T) Σ_{t=1}^{T} (p_t − p̂_t)²
where p_t and p̂_t are the actual fault-type output and the target fault type, respectively, and T is the number of training samples;
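The particle fitness evaluation can be sketched as below, assuming the mean-squared training-error form given above (the exact formula is not fully legible in the source, so this form is an assumption):

```python
import numpy as np

def fitness(actual, target):
    """Training error used as particle fitness (mean-squared-error form
    is an assumption reconstructed from the text). Lower is better."""
    a, t = np.asarray(actual, float), np.asarray(target, float)
    return float(((a - t) ** 2).mean())
```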
S3.3, over the t iterations in which the particles search the target space, F_i represents the position of each particle in the solution space;
S3.3.1, randomly pair all particles in the population, giving M/2 pairs; for each pair, perform horizontal crossover according to:
MShc(i, d) = e1 × F(i, d) + (1 − e1) × F(j, d) + f1 × (F(i, d) − F(j, d))
MShc(j, d) = e2 × F(j, d) + (1 − e2) × F(i, d) + f2 × (F(j, d) − F(i, d))
i, j ∈ N(1, M); d ∈ N(1, D)
where e1 and e2 are random numbers in [0, 1], f1 and f2 are random numbers in [−1, 1], M is the population size, D is the variable dimension, F(i, d) and F(j, d) are the d-th dimensions of parent particles F(i) and F(j), and MShc(i, d) and MShc(j, d) are the d-th dimensions of the offspring generated by their horizontal crossover;
The horizontal crossover results are stored in the moderation-solution matrix MShc; the fitness value of each offspring particle is calculated and compared with that of its parent, and the particle with the smaller fitness value is retained in F;
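The horizontal crossover of S3.3.1 can be sketched as follows (a hypothetical NumPy illustration of the MShc update above; pairing particles by a random permutation is an assumption):

```python
import numpy as np

def horizontal_crossover(F, rng=None):
    """One horizontal-crossover pass over population F (M x D), following
    MShc(i,d) = e1*F(i,d) + (1-e1)*F(j,d) + f1*(F(i,d) - F(j,d))."""
    rng = rng or np.random.default_rng()
    M, D = F.shape
    MShc = F.copy()
    pairs = rng.permutation(M)          # random pairing of all particles
    for k in range(0, M - 1, 2):
        i, j = pairs[k], pairs[k + 1]
        e1, e2 = rng.random(D), rng.random(D)             # in [0, 1]
        f1, f2 = rng.uniform(-1, 1, D), rng.uniform(-1, 1, D)  # in [-1, 1]
        MShc[i] = e1 * F[i] + (1 - e1) * F[j] + f1 * (F[i] - F[j])
        MShc[j] = e2 * F[j] + (1 - e2) * F[i] + f2 * (F[j] - F[i])
    return MShc
```

Offspring would then be compared with their parents on fitness, keeping the better of each pair, as the text describes.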
S3.3.2, normalize each dimension of the particles obtained by horizontal crossover, then randomly pair all dimensions of the particles without repetition, giving D/2 pairs; for each pair of dimensions generate a random number rand, and if rand < Pv, perform vertical crossover on that pair according to:
MSvc(i, d1) = e · F(i, d1) + (1 − e) · F(i, d2)
i ∈ N(1, M); d1, d2 ∈ N(1, D)
where MSvc(i, d1) is the offspring generated by vertical crossover of dimensions d1 and d2 of parent particle F(i), and e is a random number in [0, 1];
The vertical crossover results are stored in the moderation-solution matrix MSvc; after inverse normalization of the vertical crossover results, the fitness values of the particles in the moderation-solution matrix are calculated and compared with those of the parent particles, and the particles with the better fitness values are retained in F;
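The vertical crossover of S3.3.2 can be sketched similarly (a hypothetical NumPy illustration of the MSvc update; it operates on the normalized particles, and pairing dimensions by a random permutation is an assumption):

```python
import numpy as np

def vertical_crossover(F, Pv=0.8, rng=None):
    """One vertical-crossover pass: with probability Pv, dimension d1 of
    each particle is updated as MSvc(i,d1) = e*F(i,d1) + (1-e)*F(i,d2),
    a convex combination of the particle's own two paired dimensions."""
    rng = rng or np.random.default_rng()
    M, D = F.shape
    MSvc = F.copy()
    dims = rng.permutation(D)           # pair dimensions without repetition
    for k in range(0, D - 1, 2):
        d1, d2 = dims[k], dims[k + 1]
        if rng.random() < Pv:
            e = rng.random(M)           # one random e per particle
            MSvc[:, d1] = e * F[:, d1] + (1 - e) * F[:, d2]
    return MSvc
```

Because the update is a convex combination, each new coordinate stays within the range spanned by the particle's own paired dimensions, which helps dormant dimensions escape local optima.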
S3.3.3, after the particle updates are finished, calculate the fitness values of the particles at their updated positions and record the corresponding best individual Fbest;
S3.3.4, increase the iteration count t by 1; if t < Tmax, return to step S3.3.1; otherwise the optimization ends, and Fbest gives the optimal penalty coefficient and kernel function parameter of the kernel extreme learning machine. The whole process is efficient and simple.
Compared with the prior art, the invention has the following beneficial effects:
the invention relates to a transformer fault diagnosis method for optimizing a nuclear limit learning machine based on a criss-cross algorithm.
Drawings
To illustrate the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described are only some embodiments of the invention, and those skilled in the art can obtain other drawings from them without inventive labor.
Fig. 1 is a flowchart of the specific steps of optimizing the penalty coefficient and kernel function parameter of the kernel extreme learning machine with the crisscross optimization algorithm in the transformer fault diagnosis method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the invention is described below.
a transformer fault diagnosis method comprises the following steps:
S1, obtain sample data of dissolved-gas concentrations in transformer oil together with the corresponding fault conclusions, and preprocess the data to generate a training sample set and a test sample set;
S2, establish a prediction model of the kernel extreme learning machine from the generated training sample set;
S3, optimize the kernel function parameter and penalty coefficient of the kernel extreme learning machine with the crisscross optimization algorithm during model training;
and S4, input the test samples into the trained kernel extreme learning machine to obtain the transformer fault diagnosis result.
In this embodiment, in step S1, the characteristic gases considered for the dissolved gas in oil in the sample data include methane (CH4), ethane (C2H6), ethylene (C2H4), acetylene (C2H2), hydrogen (H2), carbon dioxide (CO2), and carbon monoxide (CO); using many characteristic gases gives high precision.
In this embodiment, in step S1, the fault types of the transformer are classified into the following six states according to the characteristic-gas concentrations: high-temperature overheating (T2), high-energy discharge (D2), low-energy discharge (D1), medium/low-temperature overheating (T1), partial discharge (PD), and normal condition (NC). The corresponding output codes are 100000, 010000, 001000, 000100, 000010, and 000001, respectively, which gives higher precision.
In this embodiment, in step S1, the sample data are divided into a training sample set and a test sample set. In the training sample set Tr_n, each sample takes the concentrations of the seven characteristic gases above as input, denoted x_n = (x_n1, x_n2, …, x_nm), where m is the number of inputs of the prediction model, and takes the corresponding fault type as output, denoted y_n, where l is determined by the number of outputs of the prediction model and n indexes the nth sample in the sample set; this improves diagnosis precision.
In the present embodiment, m = 7 and l = 1, which makes the result more accurate.
In the present embodiment, in step S1, the test sample set Te_n is selected in the same way as the training sample set Tr_n, which reduces the amount of calculation.
In this embodiment, in step S2, the specific steps for establishing the prediction model of the kernel extreme learning machine are as follows:
S2.1, the regression function of the extreme learning machine and the connection weight between the hidden layer and the output layer are:
f(x) = h(x)β = h(x)H^T (I/C + HH^T)^(-1) T
where x is the sample input, f(x) is the network output, h(x) and H are the hidden-layer feature mapping matrix (a random mapping), β is the connection weight between the hidden layer and the output layer, calculated according to generalized-inverse matrix theory as β = H^T (I/C + HH^T)^(-1) T, I is the identity matrix, C is the penalty coefficient, and T is the vector of sample target values;
S2.2, the kernel matrix of the kernel extreme learning machine is defined as:
Ω_ELM = HH^T, with element Ω_ELM(i, j) = h(x_i)·h(x_j) = K(x_i, x_j)
where x_i and x_j are sample input vectors, i and j are positive integers in the range 1 to N, and K(x_i, x_j) is the kernel function. The kernel function of the kernel extreme learning machine is chosen as the radial basis (RBF) kernel:
K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²))
where ||x_i − x_j|| is the Euclidean norm between samples and σ is the kernel function parameter;
The output of the kernel extreme learning machine, with the connection weight between the hidden layer and the output layer substituted in, is then:
f(x) = [K(x, x_1), …, K(x, x_N)] (I/C + Ω_ELM)^(-1) T
where N is the number of sample input vectors; this makes the result more accurate.
In this embodiment, in step S2, the kernel extreme learning machine has two variables to optimize: the penalty coefficient C and the kernel function parameter σ, which makes the result more accurate.
In this embodiment, the value ranges of the penalty coefficient C and the kernel function parameter σ are [0.01, 1000] and [0.001, 100], respectively, which makes the result more accurate.
As shown in fig. 1, in step S3, the specific steps for optimizing the penalty coefficient and kernel function parameter of the kernel extreme learning machine with the crisscross optimization algorithm are as follows:
S3.1, initialize parameters: set the population size M of the crisscross algorithm to 20, the maximum number of iterations Tmax to 100, and the vertical crossover probability Pv to 0.8;
S3.2, randomly generate a group of particles as the initial C and σ of the kernel extreme learning machine, and establish the diagnosis model of the kernel extreme learning machine optimized by the crisscross algorithm:
F_i = [C_i, σ_i], i = 1, 2, …, M
where C is the penalty coefficient, σ is the kernel function parameter, and M is the population size, giving M particles in total;
Set the iteration count t to 1, convert each initial population particle of the crisscross algorithm into a penalty coefficient and a kernel function parameter of the kernel extreme learning machine, perform model training, and calculate the training error, which serves as the particle's fitness value:
fitness = (1/T) Σ_{t=1}^{T} (p_t − p̂_t)²
where p_t and p̂_t are the actual fault-type output and the target fault type, respectively, and T is the number of training samples;
S3.3, over the t iterations in which the particles search the target space, F_i represents the position of each particle in the solution space;
S3.3.1, randomly pair all particles in the population, giving M/2 pairs; for each pair, perform horizontal crossover according to:
MShc(i, d) = e1 × F(i, d) + (1 − e1) × F(j, d) + f1 × (F(i, d) − F(j, d))
MShc(j, d) = e2 × F(j, d) + (1 − e2) × F(i, d) + f2 × (F(j, d) − F(i, d))
i, j ∈ N(1, M); d ∈ N(1, D)
where e1 and e2 are random numbers in [0, 1], f1 and f2 are random numbers in [−1, 1], M is the population size, D is the variable dimension, F(i, d) and F(j, d) are the d-th dimensions of parent particles F(i) and F(j), and MShc(i, d) and MShc(j, d) are the d-th dimensions of the offspring generated by their horizontal crossover;
The horizontal crossover results are stored in the moderation-solution matrix MShc; the fitness value of each offspring particle is calculated and compared with that of its parent, and the particle with the smaller fitness value is retained in F;
S3.3.2, normalize each dimension of the particles obtained by horizontal crossover, then randomly pair all dimensions of the particles without repetition, giving D/2 pairs; for each pair of dimensions generate a random number rand, and if rand < Pv, perform vertical crossover on that pair according to:
MSvc(i, d1) = e · F(i, d1) + (1 − e) · F(i, d2)
i ∈ N(1, M); d1, d2 ∈ N(1, D)
where MSvc(i, d1) is the offspring generated by vertical crossover of dimensions d1 and d2 of parent particle F(i), and e is a random number in [0, 1];
The vertical crossover results are stored in the moderation-solution matrix MSvc; after inverse normalization of the vertical crossover results, the fitness values of the particles in the moderation-solution matrix are calculated and compared with those of the parent particles, and the particles with the better fitness values are retained in F;
S3.3.3, after the particle updates are finished, calculate the fitness values of the particles at their updated positions and record the corresponding best individual Fbest;
S3.3.4, increase the iteration count t by 1; if t < Tmax, return to step S3.3.1; otherwise the optimization ends, and Fbest gives the optimal penalty coefficient and kernel function parameter of the kernel extreme learning machine. The whole process is efficient and simple.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structural or process transformations made using the contents of this specification, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A transformer fault diagnosis method is characterized by comprising the following steps:
S1, obtain sample data of dissolved-gas concentrations in transformer oil together with the corresponding fault conclusions, and preprocess the data to generate a training sample set and a test sample set;
S2, establish a prediction model of the kernel extreme learning machine from the generated training sample set;
S3, optimize the kernel function parameter and penalty coefficient of the kernel extreme learning machine with the crisscross optimization algorithm during model training;
and S4, input the test samples into the trained kernel extreme learning machine to obtain the transformer fault diagnosis result.
2. The transformer fault diagnosis method according to claim 1, wherein in step S1 the characteristic gases considered for the dissolved gas in oil in the sample data include methane (CH4), ethane (C2H6), ethylene (C2H4), acetylene (C2H2), hydrogen (H2), carbon dioxide (CO2), and carbon monoxide (CO).
3. The transformer fault diagnosis method according to claim 2, wherein in step S1 the fault types of the transformer are classified into the following six states according to the characteristic-gas concentrations: high-temperature overheating (T2), high-energy discharge (D2), low-energy discharge (D1), medium/low-temperature overheating (T1), partial discharge (PD), and normal condition (NC); the corresponding output codes are 100000, 010000, 001000, 000100, 000010, and 000001, respectively.
4. The transformer fault diagnosis method according to claim 3, wherein in step S1 the sample data are divided into a training sample set and a test sample set; in the training sample set Tr_n, each sample takes the concentrations of the seven characteristic gases above as input, denoted x_n = (x_n1, x_n2, …, x_nm), where m is the number of inputs of the prediction model, and takes the corresponding fault type as output, denoted y_n, where l is determined by the number of outputs of the prediction model and n indexes the nth sample in the sample set.
5. The transformer fault diagnosis method according to claim 4, wherein m = 7 and l = 1.
6. The transformer fault diagnosis method according to claim 5, wherein in step S1 the test sample set Te_n is selected in the same way as the training sample set Tr_n.
7. The transformer fault diagnosis method according to claim 6, wherein in step S2 the specific steps for establishing the prediction model of the kernel extreme learning machine are as follows:
S2.1, the regression function of the extreme learning machine and the connection weight between the hidden layer and the output layer are:
f(x) = h(x)β = h(x)H^T (I/C + HH^T)^(-1) T
where x is the sample input, f(x) is the network output, h(x) and H are the hidden-layer feature mapping matrix (a random mapping), β is the connection weight between the hidden layer and the output layer, calculated according to generalized-inverse matrix theory as β = H^T (I/C + HH^T)^(-1) T, I is the identity matrix, C is the penalty coefficient, and T is the vector of sample target values;
S2.2, the kernel matrix of the kernel extreme learning machine is defined as:

Ω_ELM = HH^T,  Ω_ELM(i, j) = h(x_i)·h(x_j) = K(x_i, x_j)

where x_i and x_j are sample input vectors, i and j are positive integers in the range 1 to N, and K(x_i, x_j) is the kernel function of the kernel extreme learning machine, chosen here as the radial basis function (RBF) kernel, whose expression is:
K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2σ^2))
where ||x_i - x_j|| is the Euclidean norm between samples and σ is the kernel function parameter;
then, substituting the connection weight between the hidden layer and the output layer, the output of the kernel extreme learning machine is:

f(x) = [K(x, x_1), …, K(x, x_N)] (I/C + Ω_ELM)^(-1) T

where N is the number of sample input vectors.
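The kernel form of S2.2 (RBF kernel matrix plus a regularized solve) can be sketched as follows; function names and array shapes are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)) for all pairs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kelm_train(X, T, C, sigma):
    """Dual coefficients alpha = (I/C + Omega_ELM)^(-1) T."""
    omega = rbf_kernel(X, X, sigma)               # kernel matrix Omega_ELM = HH^T
    alpha = np.linalg.solve(np.eye(len(X)) / C + omega, T)
    return alpha

def kelm_predict(Xnew, X, alpha, sigma):
    """f(x) = [K(x, x_1), ..., K(x, x_N)] alpha."""
    return rbf_kernel(Xnew, X, sigma) @ alpha
```

With a large penalty coefficient C the model nearly interpolates the training targets, which is why C and σ must be tuned jointly (step S3).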
8. The transformer fault diagnosis method according to claim 7, wherein in step S2, the kernel extreme learning machine has 2 variables to be optimized: the penalty coefficient C and the kernel function parameter σ.
9. The transformer fault diagnosis method according to claim 8, wherein the penalty coefficient C and the kernel function parameter σ take values in the ranges [0.01, 1000] and [0.001, 100], respectively.
10. The transformer fault diagnosis method according to claim 9, wherein in step S3, the specific steps of optimizing the penalty coefficient and kernel function parameter of the kernel extreme learning machine with the crisscross optimization algorithm are as follows:
S3.1, initialize the parameters: the population size M of the crisscross algorithm is set to 20, the maximum number of iterations T_max to 100, and the vertical crossover probability P_v to 0.8;
S3.2, randomly generate a group of particles as the initial C and σ of the kernel extreme learning machine, and establish the diagnosis model of the kernel extreme learning machine optimized by the crisscross algorithm:

F_i = [C_i, σ_i], i = 1, 2, …, M

where C is the penalty coefficient, σ is the kernel function parameter, and M is the population size, i.e. there are M particles in total;
set the iteration count t to 1, convert each initial population particle of the crisscross algorithm into a penalty coefficient and a kernel function parameter of the kernel extreme learning machine, perform model training, and calculate the training error, i.e. the fitness value of the particle, according to:

fitness = (1/T) Σ_{t=1}^{T} (p_t - p̂_t)^2

where p_t and p̂_t are the actual and target fault-type outputs, respectively, and T is the number of training samples;
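The training-error fitness above (reconstructed here as a mean squared error, since the original formula image is lost) can be sketched as:

```python
import numpy as np

def fitness(actual, target):
    """Particle fitness: mean squared training error between the actual
    fault-type outputs p_t and the target outputs."""
    actual = np.asarray(actual, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean((actual - target) ** 2))
```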
S3.3, the particles search the target space over t iterations, with F_i representing the position of each particle in the solution space;
S3.3.1, randomly pair all particles in the population without repetition, giving M/2 pairs; for each pair, perform horizontal crossover according to:
MS_hc(i, d) = e_1 × F(i, d) + (1 - e_1) × F(j, d) + f_1 × (F(i, d) - F(j, d))
MS_hc(j, d) = e_2 × F(j, d) + (1 - e_2) × F(i, d) + f_2 × (F(j, d) - F(i, d))
i, j ∈ N(1, M); d ∈ N(1, D)
where e_1, e_2 are random numbers in [0, 1], f_1, f_2 are random numbers in [-1, 1], M is the population size, D is the variable dimension, F(i, d) and F(j, d) are the d-th dimensions of parent particles F(i) and F(j), respectively, and MS_hc(i, d), MS_hc(j, d) are the d-th-dimension offspring generated by horizontal crossover of F(i, d) and F(j, d);
the horizontal crossover results are stored in the moderation solution matrix MS_hc; the fitness value of each offspring particle is calculated and compared with that of its parent particle, and the particle with the smaller fitness value is retained in F;
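The horizontal-crossover step S3.3.1 can be sketched as below; the function name and the elitist comparison via a caller-supplied `fit` function are illustrative assumptions:

```python
import numpy as np

def horizontal_crossover(F, fit, rng=None):
    """One horizontal-crossover pass: randomly pair the M particles,
    blend each dimension per the MS_hc formulas, and keep an offspring
    only if its fitness (smaller is better) improves on its parent."""
    rng = rng or np.random.default_rng(0)
    M, D = F.shape
    idx = rng.permutation(M)                  # random non-repeating pairing
    out = F.copy()
    for k in range(0, M - 1, 2):
        i, j = idx[k], idx[k + 1]
        e1, e2 = rng.random(D), rng.random(D)            # e in [0, 1]
        f1, f2 = rng.uniform(-1, 1, D), rng.uniform(-1, 1, D)  # f in [-1, 1]
        ci = e1 * F[i] + (1 - e1) * F[j] + f1 * (F[i] - F[j])
        cj = e2 * F[j] + (1 - e2) * F[i] + f2 * (F[j] - F[i])
        if fit(ci) < fit(F[i]):
            out[i] = ci
        if fit(cj) < fit(F[j]):
            out[j] = cj
    return out
```

Because an offspring replaces its parent only when its fitness is smaller, the best fitness in the population can never get worse across a pass.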
S3.3.2, normalize each dimension of the particles obtained by horizontal crossover, then randomly pair all dimensions of the particles without repetition, giving D/2 pairs; for any pair of dimensions, generate a random number rand, and if rand < P_v, perform vertical crossover on the pair according to:
MS_vc(i, d_1) = e·F(i, d_1) + (1 - e)·F(i, d_2)
i ∈ N(1, M); d_1, d_2 ∈ N(1, D)
where MS_vc(i, d_1) is the offspring generated by vertical crossover of the d_1-th and d_2-th dimensions of parent particle F(i), and e is a random number in [0, 1];
the vertical crossover results are stored in the moderation solution matrix MS_vc; after inverse normalization of the vertical crossover results, the fitness values of the particles in the moderation solution matrix are calculated and compared with those of the parent particles, and the particles with the better fitness values are retained in F;
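The vertical-crossover step S3.3.2, including the normalize/de-normalize round trip, can be sketched as below; names and the min-max normalization scheme are assumptions:

```python
import numpy as np

def vertical_crossover(F, fit, Pv=0.8, rng=None):
    """One vertical-crossover pass: min-max normalize each dimension, pair
    the D dimensions without repetition, and with probability Pv blend
    dimension d1 with d2; offspring are de-normalized before the elitist
    comparison with their parents."""
    rng = rng or np.random.default_rng(1)
    M, D = F.shape
    lo, hi = F.min(0), F.max(0)
    span = np.where(hi > lo, hi - lo, 1.0)
    Z = (F - lo) / span                      # normalized particles
    dims = rng.permutation(D)                # random non-repeating dim pairs
    out = F.copy()
    for k in range(0, D - 1, 2):
        d1, d2 = dims[k], dims[k + 1]
        if rng.random() < Pv:                # rand < P_v
            e = rng.random(M)
            z1 = e * Z[:, d1] + (1 - e) * Z[:, d2]
            cand = out.copy()
            cand[:, d1] = z1 * span[d1] + lo[d1]   # inverse normalization
            for i in range(M):
                if fit(cand[i]) < fit(out[i]):
                    out[i] = cand[i]
    return out
```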
S3.3.3, after the particle update is finished, calculate the fitness values of the particles at their updated positions and record the corresponding best individual F_best;
S3.3.4, increase the iteration count t by 1; if t < T_max, return to step S3.3.1; otherwise the optimization ends, and F_best gives the optimal penalty coefficient and kernel function parameter of the kernel extreme learning machine.
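Putting steps S3.1 through S3.3.4 together, a compact self-contained sketch of the crisscross loop follows. The sphere function stands in for the KELM training-error fitness, the bounds follow claim 9, and all names are assumptions rather than the patent's code:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit(p):
    # Surrogate fitness (sphere); in the claimed method this would train a
    # kernel extreme learning machine with (C, sigma) and return its error.
    return float(np.sum(p ** 2))

M, D, Tmax, Pv = 20, 2, 50, 0.8
lo = np.array([0.01, 0.001])               # lower bounds for C and sigma
hi = np.array([1000.0, 100.0])             # upper bounds (claim 9)
F = lo + rng.random((M, D)) * (hi - lo)    # S3.2: random initial particles
f0 = min(fit(p) for p in F)                # best initial fitness

for t in range(Tmax):
    # S3.3.1: horizontal crossover with elitist competition
    idx = rng.permutation(M)
    for k in range(0, M - 1, 2):
        i, j = idx[k], idx[k + 1]
        e1, e2 = rng.random(D), rng.random(D)
        f1, f2 = rng.uniform(-1, 1, D), rng.uniform(-1, 1, D)
        ci = e1 * F[i] + (1 - e1) * F[j] + f1 * (F[i] - F[j])
        cj = e2 * F[j] + (1 - e2) * F[i] + f2 * (F[j] - F[i])
        if fit(ci) < fit(F[i]): F[i] = ci
        if fit(cj) < fit(F[j]): F[j] = cj
    # S3.3.2: vertical crossover on normalized dimensions
    base = F.min(0)
    span = np.maximum(F.max(0) - base, 1e-12)
    Z = (F - base) / span
    dims = rng.permutation(D)
    for k in range(0, D - 1, 2):
        d1, d2 = dims[k], dims[k + 1]
        if rng.random() < Pv:
            e = rng.random(M)
            cand = F.copy()
            cand[:, d1] = (e * Z[:, d1] + (1 - e) * Z[:, d2]) * span[d1] + base[d1]
            for i in range(M):
                if fit(cand[i]) < fit(F[i]): F[i] = cand[i]

best = F[np.argmin([fit(p) for p in F])]   # S3.3.3/S3.3.4: F_best
C_opt, sigma_opt = best
```

Since both crossover phases replace a particle only when its fitness improves, the best fitness in the population decreases monotonically until T_max is reached.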
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911220444.5A CN110969262A (en) | 2019-12-03 | 2019-12-03 | Transformer fault diagnosis method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110969262A true CN110969262A (en) | 2020-04-07 |
Family
ID=70032716
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110969262A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111598150A (en) * | 2020-05-12 | 2020-08-28 | 国网四川省电力公司电力科学研究院 | Transformer fault diagnosis method considering operation state grade |
CN111695611A (en) * | 2020-05-27 | 2020-09-22 | 电子科技大学 | Bee colony optimization kernel extreme learning and sparse representation mechanical fault identification method |
CN112114214A (en) * | 2020-09-08 | 2020-12-22 | 贵州电网有限责任公司 | Transformer fault diagnosis method |
CN112561129A (en) * | 2020-11-27 | 2021-03-26 | 广东电网有限责任公司肇庆供电局 | First-aid repair material allocation method based on distribution line fault information |
CN112748372A (en) * | 2020-12-21 | 2021-05-04 | 湘潭大学 | Transformer fault diagnosis method of artificial bee colony optimization extreme learning machine |
CN112766140A (en) * | 2021-01-15 | 2021-05-07 | 云南电网有限责任公司电力科学研究院 | Transformer fault identification method based on kernel function extreme learning machine |
CN113341347A (en) * | 2021-06-02 | 2021-09-03 | 云南大学 | Dynamic fault detection method for distribution transformer based on AOELM |
CN113469257A (en) * | 2021-07-07 | 2021-10-01 | 云南大学 | Distribution transformer fault detection method and system |
CN113506252A (en) * | 2021-06-29 | 2021-10-15 | 国家电网有限公司 | Transformer bushing typical defect type identification method based on t-SNE and nuclear extreme learning machine |
CN115598470A (en) * | 2022-09-05 | 2023-01-13 | 国网江苏省电力有限公司无锡供电分公司(Cn) | Arc active early warning method and system based on multispectral frequency band |
CN117390520A (en) * | 2023-12-08 | 2024-01-12 | 惠州市宝惠电子科技有限公司 | Transformer state monitoring method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107516150A (en) * | 2017-08-25 | 2017-12-26 | 广东工业大学 | A kind of Forecasting Methodology of short-term wind-electricity power, apparatus and system |
CN108229581A (en) * | 2018-01-31 | 2018-06-29 | 西安工程大学 | Based on the Diagnosis Method of Transformer Faults for improving more classification AdaBoost |
CN109214460A (en) * | 2018-09-21 | 2019-01-15 | 西华大学 | Method for diagnosing fault of power transformer based on Relative Transformation Yu nuclear entropy constituent analysis |
Non-Patent Citations (2)
Title |
---|
Zhang Liwei: "Research on fault diagnosis methods for oil-immersed power transformers" *
Dong Zhen et al.: "Short-term wind power prediction based on hybrid-algorithm optimization" *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110969262A (en) | Transformer fault diagnosis method | |
CN110929763B (en) | Multi-source data fusion-based mechanical fault diagnosis method for medium-voltage vacuum circuit breaker | |
CN110542819B (en) | Transformer fault type diagnosis method based on semi-supervised DBNC | |
CN112328588B (en) | Industrial fault diagnosis unbalanced time sequence data expansion method | |
CN115563563A (en) | Fault diagnosis method and device based on transformer oil chromatographic analysis | |
CN110879373B (en) | Oil-immersed transformer fault diagnosis method with neural network and decision fusion | |
CN112147432A (en) | BiLSTM module based on attention mechanism, transformer state diagnosis method and system | |
CN115687115B (en) | Automatic testing method and system for mobile application program | |
CN116562114A (en) | Power transformer fault diagnosis method based on graph convolution neural network | |
CN105574589A (en) | Transformer oil chromatogram fault diagnosis method based on ecological niche genetic algorithm | |
CN114184861A (en) | Fault diagnosis method for oil-immersed transformer | |
CN110689068A (en) | Transformer fault type diagnosis method based on semi-supervised SVM | |
CN111612078A (en) | Transformer fault sample enhancement method based on condition variation automatic encoder | |
CN117113166A (en) | Industrial boiler fault detection method based on improved integrated learning | |
CN115018512A (en) | Electricity stealing detection method and device based on Transformer neural network | |
CN116562121A (en) | XGBoost and FocalLoss combined cable aging state assessment method | |
CN115345222A (en) | Fault classification method based on TimeGAN model | |
CN112686404B (en) | Power distribution network fault first-aid repair-based collaborative optimization method | |
CN112380763A (en) | System and method for analyzing reliability of in-pile component based on data mining | |
CN110348489B (en) | Transformer partial discharge mode identification method based on self-coding network | |
CN116992362A (en) | Transformer fault characterization feature quantity screening method and device based on Xia Puli value | |
CN111709495A (en) | Transformer fault diagnosis method based on NBC model | |
Yin et al. | Deep learning based transformer fault diagnosis method | |
CN115828185A (en) | Fault diagnosis method for oil immersed transformer | |
CN113507389B (en) | Power grid key node identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200407 |