CN110348489B - Transformer partial discharge mode identification method based on self-coding network - Google Patents
- Publication number: CN110348489B (application CN201910532414.1A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01R31/62—Testing of transformers (G01R31/50—Testing of electric apparatus, lines, cables or components for short-circuits, continuity, leakage current or incorrect line connections)
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting (G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space)
- G06F18/24—Classification techniques (G06F18/00—Pattern recognition)
- H02J13/0013
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
Abstract
The invention discloses a transformer partial discharge mode identification method based on a self-coding network, which comprises the following steps: step 1, processing the collected partial discharge signal data; step 2, selecting a self-coding network as the network model; step 3, training the weights of the self-coding network with a simulated annealing-brainstorming hybrid optimization algorithm; step 4, optimizing the number of hidden layers and hidden layer nodes of the self-coding network with a brainstorming optimization algorithm, thereby obtaining a trained network; step 5, inputting the data to be recognized into the trained network and classifying them with the trained network; step 6, calculating the recognition rate on the data to be recognized. The method of the invention achieves better recognition accuracy, saves time and labor, and improves the generality of the network.
Description
Technical Field
The invention belongs to the technical field of power equipment monitoring, and relates to a transformer partial discharge mode identification method based on a self-coding network.
Background
Electrical equipment is a major component of the power system, and its failure can cause large losses. With the development of science and technology, the functions of the power system have become more complex and more highly automated. As the functions and performance of electrical equipment improve, the factors influencing it multiply, and the possibility of malfunction increases accordingly. A single faulty component can trigger a chain reaction that prevents the power system from operating properly, so the power system must ensure the proper functioning of its electrical equipment.
The transformer is one of the key pieces of equipment in the power system, performing electric energy conversion and distribution. It is expensive to manufacture and structurally complex, and its failure disrupts the operation of the power system, inconveniencing people's lives and causing huge economic losses; the normal and safe operation of the transformer is therefore essential to the normal operation of the power system. Transformer failure has many causes, such as human damage, environmental influences, and losses within the transformer itself; among these, the transformer's own loss, i.e. insulation deterioration caused by long-term operation, is the main cause of transformer failure.
Partial discharge seriously damages the insulation performance of the transformer, mainly as follows: charged particles impact the molecular structure, degrading or even destroying the insulation; the insulation temperature rises sharply because of the large amount of heat generated by the particles on impact; a large quantity of oxides is produced during discharge, and these oxides react chemically with water to form nitric acid, corroding the insulation; and Schottky injection causes breakdown of the oil, degrading its heat-dissipation performance in a long, slow process. Partial discharge is thus both a cause and a manifestation of transformer insulation degradation. However, different types of partial discharge cause different insulation damage, so pattern recognition of partial discharge, which allows the discharge type to be judged quickly and accurately, is of vital significance to the normal operation of the transformer.
The mass data acquired by existing monitoring systems has brought partial discharge monitoring into the big-data era, and identifying partial discharge signals with traditional manual feature extraction and shallow neural networks is very difficult or even impossible; researching and applying advanced theories and methods that can extract features from partial discharge data and identify them accurately has therefore become a new problem facing transformer partial discharge.
Disclosure of Invention
The invention aims to provide a partial discharge mode identification method based on a self-coding network that is free of the limitation of existing research, which applies only experimental data for pattern recognition, and instead performs partial discharge mode identification while making full use of complex on-site partial discharge data, so that the method is better suited to current engineering practice with its mass data.
The technical scheme of the invention is that a transformer partial discharge mode identification method based on a self-coding network is implemented according to the following steps:
Step 1, processing the original data with a nonlinear filtering method to eliminate random interference signals; determining the type and classification of the test data and validation data for the network, i.e. the data are expressed as labelled pairs {(x^(1), X^(1)), ..., (x^(m), X^(m))} or as unlabelled data x^(1), x^(2), ..., x^(m), where m is the number of data, the i-th datum is x^(i), and its label is X^(i) ∈ {1, 2, ..., k}, with k the number of categories; determining the feature number, i.e. the dimension, of the data;
step 2, selecting a self-coding network as a network model,
2.1) because the stack self-coding machine does not have the classification characteristic, the stack self-coding machine is combined with a classifier to construct a new self-coding network;
2.2) determining an objective function of the self-coding network;
step 3, training the weight of the self-coding network by using a simulated annealing-brainstorming hybrid optimization algorithm, wherein the specific process is as follows:
3.1) setting parameters of simulated annealing-brainstorming hybrid optimization algorithm
The algorithm parameters mainly comprise: the initial number of individuals NP, the maximum number of iterations KImax, the probability parameters P1, P2, P3, P4, the number of clusters n_c, and the initial temperature t0;
3.2) generating NP randomly distributed hidden layer numbers, hidden layer nodes and weights that satisfy the constraints, according to the initialization formulas for the hidden layer number, hidden layer nodes and weights;
3.3) generating and updating NP weight values;
and 4, optimizing the number of hidden layers and hidden layer nodes of the self-coding network by using a brainstorming optimization algorithm, wherein the specific process is as follows:
4.1) generating and updating NP new hidden layer numbers and hidden layer nodes;
4.2) carrying out iterative search optimization, and outputting the optimal number of hidden layers, hidden layer nodes and corresponding weights when the set convergence precision or the maximum iteration number is reached, thereby obtaining a trained network;
step 5, inputting the data to be recognized into the trained network, and classifying the data to be recognized by utilizing the trained network;
step 6, calculating the recognition rate of the data to be recognized:

recognition rate = a / s   (15)

In formula (15), a is the number of correctly classified data among the data to be identified and s is the total number of data to be identified, so the identification rate of the partial discharge signal data can be calculated directly.
The invention has the advantages that the collected partial discharge data is subjected to feature extraction and classification so as to effectively solve the problem of partial discharge mode identification, and the method specifically comprises the following steps:
1) due to the complexity of mass data on the partial discharge site, the self-coding network is utilized to train the partial discharge data, and the identification precision of partial discharge is improved.
2) In the training process of the self-coding network, the number of hidden layers of the network, hidden layer nodes and corresponding weights are trained by using a simulated annealing-brainstorming hybrid optimization algorithm, so that the recognition accuracy is optimal, and the universality of the network is improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a simplified diagram of a self-encoding network architecture in accordance with the present invention;
FIG. 3 is a schematic diagram of a single layer codec of the present invention;
FIG. 4 is a flow chart of weight training by the simulated annealing-brainstorming hybrid optimization algorithm in the present invention;
FIG. 5 is a flow chart of a K-means clustering method in the present invention;
FIG. 6 is a flow chart of selecting weights in the present invention;
fig. 7 is a flow chart of the optimization of the number of hidden layers and hidden layer nodes by the brainstorming optimization algorithm in the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a transformer partial discharge mode identification method based on a self-coding network, which mainly comprises the steps of establishing a self-coding network model and solving network parameters by using a simulated annealing-brain storm hybrid optimization algorithm, as shown in figure 1, and is specifically implemented according to the following steps:
the collected massive partial discharge original data contains noise and other interferences, and preprocessing is needed before analysis and application to ensure stability and reliability of the data. In the step, a nonlinear filtering method is adopted to process the original data, and the random signal of interference is eliminated.
Determine the type and classification of the test data and validation data for the network, i.e. the data are expressed as labelled pairs {(x^(1), X^(1)), ..., (x^(m), X^(m))} or as unlabelled data x^(1), x^(2), ..., x^(m), where m is the number of data, the i-th datum is x^(i), and its label is X^(i) ∈ {1, 2, ..., k}, with k the number of categories; determine the feature number, i.e. the dimension, of the data;
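The patent does not name a specific nonlinear filter for this preprocessing step; a moving median filter is one common nonlinear choice for suppressing impulsive random interference in a sampled discharge signal. A minimal illustrative sketch (not the patent's implementation):

```python
def median_filter(signal, window=3):
    """Simple 1-D median filter: replaces each sample with the median of
    its neighbourhood, suppressing isolated impulsive interference while
    preserving the baseline. Illustrative only; the patent does not
    specify which nonlinear filter is used."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        win = sorted(signal[lo:hi])
        out.append(win[len(win) // 2])
    return out

# An isolated spike at index 2 is removed while the baseline survives.
clean = median_filter([1.0, 1.0, 9.0, 1.0, 1.0])
```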
step 2, selecting a self-coding network as a network model,
2.1) because the stacked autoencoder itself has no classification capability, it is combined with a classifier to construct a new self-coding network; the classifier adopted is a softmax classifier, as shown in FIG. 2;
2.2) determining an objective function of the self-encoding network,
As shown in fig. 3, during training the network model first maps the input data x into the hidden layer to obtain the hidden layer feature y; this part is called the encoder. The next layer of the network then maps y to the output layer to obtain the output data z; this part is called the decoder. The two parts are expressed mathematically as:

y = Sf(W1·x + b), z = Sg(W2·y + d)   (1)

In formula (1), W1 is the weight matrix between the input layer and the hidden layer, W2 is the weight matrix between the hidden layer and the output layer, b is the bias vector of the hidden layer, d is the bias vector of the output layer, and Sf, Sg are sigmoid functions as in formula (2), namely:

Sf(t) = Sg(t) = 1 / (1 + e^(-t))   (2)

The weight matrix W1 between the input layer and the hidden layer is taken as the transpose W′2 of the weight matrix between the hidden layer and the output layer, thus reducing the number of parameters, namely:

W1 = W′2 = W   (3)

The self-coding network parameters therefore reduce to three: the weight W, the bias vector b of the hidden layer, and the bias vector d of the output layer.

The training goal is to minimize the difference between the output and the input, i.e.:

c(x, z) = ||x − z||²   (4)

In formula (4), z is adjusted through W, b, d for a given x, and c(x, z) is the training target for each training sample; the total training target is then:

C(x, z) = (1/m) Σ_{i=1..m} c(x^(i), z^(i))   (5)

In formula (5), C(x, z) is the total training target and m is the number of training samples.
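The encode-decode pass of formulas (1) through (4), with the tied weight of formula (3), can be sketched in pure Python; the layer sizes and numeric values below are arbitrary illustrations, not parameters from the patent:

```python
import math

def sigmoid(v):
    # the Sf = Sg sigmoid of formula (2)
    return 1.0 / (1.0 + math.exp(-v))

def encode(x, W, b):
    # encoder half of formula (1): y = Sf(W x + b); W is a q x n nested list
    return [sigmoid(sum(W[i][j] * x[j] for j in range(len(x))) + b[i])
            for i in range(len(W))]

def decode(y, W, d):
    # decoder half with tied weights (formula (3)): z = Sg(W' y + d)
    n = len(W[0])
    return [sigmoid(sum(W[i][j] * y[i] for i in range(len(W))) + d[j])
            for j in range(n)]

def reconstruction_error(x, z):
    # per-sample training target of formula (4): c(x, z) = ||x - z||^2
    return sum((xi - zi) ** 2 for xi, zi in zip(x, z))

# Toy pass: 2 inputs, 1 hidden unit (values arbitrary).
x = [0.0, 0.0]
W, b, d = [[0.5, -0.3]], [0.0], [0.0, 0.0]
z = decode(encode(x, W, b), W, d)
```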
the classifier in the step adopts a softmax classifier, and when the training sample set is { (x)(1),X(1)),...,(x(m),X(m)) Where m is the number of data and the ith data is x(i)The label is X(i)E {1, 2.., k }, where k is the number of classes, the assumption of softmax regression is:
wherein a vector h is assumedθ(x(i)) Each element p (X) of(i)=j|x(i)(ii) a θ) represents a sample x(i)Probability belonging to class j, sum of elements of vector being 1, theta1,θ2,...,θkAll classifier parameter vectors are written in the form of a matrix:
the cost function of the softmax classifier is defined as:
in the formula (8), m is the number of data; 1 {. cndot } represents an indication function, when an expression in parentheses is true, the indication function value is 1, otherwise, the indication function value is 0; the calculation formula behind the plus sign is a weight attenuation term, which is used for solving the numerical problem caused by parameter redundancy, wherein lambda is a weight attenuation coefficient,
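The cost of formula (8) can be sketched in pure Python as below; the 0-based labels and toy parameter values are illustrative conveniences (the patent indexes classes 1..k):

```python
import math

def softmax(scores):
    # the normalized exponentials of formula (6)
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_cost(samples, labels, theta, lam):
    """J(theta) of formula (8): average negative log-likelihood of the
    correct class plus a weight-decay term (lambda/2) * sum(theta^2).
    theta is a list of k parameter vectors; labels are 0-based here."""
    m = len(samples)
    nll = 0.0
    for x, j in zip(samples, labels):
        scores = [sum(t_i * x_i for t_i, x_i in zip(t, x)) for t in theta]
        nll -= math.log(softmax(scores)[j])
    decay = 0.5 * lam * sum(t_i ** 2 for t in theta for t_i in t)
    return nll / m + decay
```

With all-zero parameters the class probabilities are uniform, so the cost reduces to log k.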
In order to improve the classification recognition rate, the evaluation function is redesigned, so the objective function of the self-coding network is reset as:

E = ηC(x, z) + βJ(θ)   (9)

In formula (9), η is the coefficient of the stacked autoencoder's training target, β is the coefficient of the softmax classifier's cost function, C(x, z) is the total training target of the stacked autoencoder (formula (5)), and J(θ) is the cost function of the softmax classifier;
step 3, training the weight of the self-coding network by using a simulated annealing-brainstorming hybrid optimization algorithm,
On the basis of the brainstorming optimization algorithm, the algorithm is improved into the simulated annealing-brainstorming hybrid optimization algorithm, which is then used to train the weights of the self-coding network, improving the recognition capability of the network.
Referring to fig. 4, the simulated annealing-brainstorming hybrid optimization algorithm trains the weight of the self-coding network, and the specific process is as follows:
3.1) setting parameters of a simulated annealing-brain storm hybrid optimization algorithm,
The algorithm parameters mainly comprise: the initial number of individuals NP, the maximum number of iterations KImax, the probability parameters P1, P2, P3, P4, the number of clusters n_c, and the initial temperature t0;
3.2) generating NP randomly distributed hidden layer numbers, hidden layer nodes and weights that satisfy the constraints, according to the following initialization formulas:

3.2.a) initializing the hidden layer number and hidden layer nodes:

Li = randint(1, max_L), Ni = randint(1, max_N), Ri = [Li, Ni]   (10)

In formula (10), Li is the i-th hidden layer number and Ni the i-th hidden node number, with max_L = 10 and max_N = 300; Ri is the row vector consisting of the i-th hidden layer number and hidden node number; randint() returns a random integer within the prescribed range.

3.2.b) initializing the corresponding weights according to the hidden layer number and hidden layer nodes:

r_i^j = rand()   (11)

In formula (11), n is the number of input layer neurons and q the number of hidden layer neurons of a layer; the scales of n, q, W, b, d for each layer are shown in Table 1, and by formula (1) each layer's Wi has scale q × n, bi scale q × 1, and di scale n × 1; Wi, bi, di are generated randomly with decimal coding; ri is the i-th weight, r_i^j is the j-th solution of the i-th weight, and rand() is a random number in (0, 1).

TABLE 1 Initial scale of the weights of each layer
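The initialization of steps 3.2.a) and 3.2.b) can be sketched as follows; the layer shapes (q × n for W, q for b, n for d) follow the mapping of formula (1), and the uniform (0, 1) draw stands in for the decimal coding of formula (11):

```python
import random

def init_structure(max_l=10, max_n=300):
    # formula (10): random hidden layer count L_i and hidden node count N_i
    return random.randint(1, max_l), random.randint(1, max_n)

def init_weights(n, q):
    """Formula (11) sketch: random decimal-coded initialisation for one
    layer with n inputs and q hidden units. W is q x n, b has q entries,
    d has n entries (shapes inferred from formula (1))."""
    W = [[random.random() for _ in range(n)] for _ in range(q)]
    b = [random.random() for _ in range(q)]
    d = [random.random() for _ in range(n)]
    return W, b, d
```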
3.3) generating and updating NP weights, and the specific process is as follows:
3.3.a) dividing NP weights into n _ c classes in a self-coding network target function space by utilizing a K-means clustering algorithm, wherein the step of K-means clustering is shown in figure 5;
3.3.b) selecting the weight value,
as shown in FIG. 6, a random value is generated in (0, 1); if the value is less than the probability parameter P1, a cluster center is randomly selected with the probability parameter P2 to realize the weight update, with the specific process: generate a random value in (0, 1); if it is less than the probability parameter P3, select a cluster center and add a random value to generate a new weight; otherwise, randomly select a weight from that cluster and add a random value to generate a new weight;

if the value is greater than the probability parameter P1, two clusters are randomly selected to generate a new weight, with the update process: generate a random value in (0, 1); if it is less than the probability parameter P4, combine the two cluster centers and add a random value to generate a new weight; otherwise, select one random weight from each of the two clusters and add a random value to generate a new weight;
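The cluster-based selection of step 3.3.b) can be sketched as below. The patent says two cluster centers (or two random members) are "combined"; the element-wise average used here is one common interpretation in brainstorming-style algorithms and is an assumption, as is the uniform choice of clusters:

```python
import random

def select_base_weight(clusters, p1, p3, p4):
    """Pick the base vector for generating a new weight, following the
    one-cluster / two-cluster branches of step 3.3.b).
    clusters: list of (center, members) pairs, where center is a vector
    and members is a list of vectors. Illustrative sketch only."""
    if random.random() < p1:
        # one-cluster branch: center with probability p3, else a member
        center, members = random.choice(clusters)
        return center if random.random() < p3 else random.choice(members)
    # two-cluster branch: combine centers with probability p4, else members
    c1, c2 = random.sample(clusters, 2)
    if random.random() < p4:
        return [(a + b) / 2 for a, b in zip(c1[0], c2[0])]
    w1, w2 = random.choice(c1[1]), random.choice(c2[1])
    return [(a + b) / 2 for a, b in zip(w1, w2)]
```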
3.3.c) performing variation operation on the weight value,
The calculation formula of the weight mutation operation is:

r_new^d = r_selected^d + ξ · N(μ, σ)   (12)

In formula (12), r_new^d is the d-th dimension of the weight after mutation; r_selected^d is the d-th dimension of the weight used for updating; ξ is the step-size coefficient used when generating a new weight; N(μ, σ) is a Gaussian random function with mean μ and variance σ;

ξ = logsig((0.5 · KImax − iter) / k) · random()   (13)

In formula (13), logsig() is the logarithmic sigmoid function; KImax is the maximum number of iterations; iter is the current iteration number; k changes the slope of the logsig() function; random() is a random number in (0, 1);
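Formulas (12) and (13) can be sketched directly; the step size ξ is close to a full random step early in the search and decays toward zero as iter approaches KImax (the default k = 20 below is an assumed value, not taken from the patent):

```python
import math
import random

def logsig(x):
    # logarithmic sigmoid, 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def step_size(ki_max, it, k=20.0):
    # formula (13): xi = logsig((0.5*KImax - iter)/k) * rand()
    # large early in the run, shrinking toward 0 late in the run
    return logsig((0.5 * ki_max - it) / k) * random.random()

def mutate(weight, ki_max, it, mu=0.0, sigma=1.0, k=20.0):
    # formula (12): r_new^d = r_selected^d + xi * N(mu, sigma), per dimension
    xi = step_size(ki_max, it, k)
    return [w + xi * random.gauss(mu, sigma) for w in weight]
```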
3.3.d) updating the weight value,
solving an objective function value corresponding to the generated new weight by using an objective function formula (9) of the self-coding network, evaluating the weights before and after the variation, and keeping the optimal weight;
3.3.e) if the Metropolis acceptance criterion is satisfied, i.e. if exp(−(E(r_new) − E(r_j)) / t_a) > rand(), where E(r_j) is the objective function value of the weight r_j and t_a is the temperature at the a-th annealing step, turn to step 3.3.h); otherwise, turn to step 3.3.f);
3.3.f) generate a new weight by r_j = r_new + rand(1, Dim), where Dim is the dimension of the weight and r_new is the weight after the mutation operation;
3.3.h) update t_a and set a = a + 1;
3.3.i) outputting the optimal weight when the maximum iteration times is reached; otherwise, turning to step 3.3. a);
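The simulated annealing part of steps 3.3.e) and 3.3.h) can be sketched as below; the geometric cooling factor is an assumed value, since the patent does not state the update rule for t_a:

```python
import math
import random

def metropolis_accept(e_new, e_old, temperature):
    """Metropolis acceptance rule of step 3.3.e): an improvement is
    always accepted; a worse weight is accepted with probability
    exp(-(e_new - e_old) / t_a), which shrinks as the temperature falls."""
    if e_new <= e_old:
        return True
    return random.random() < math.exp(-(e_new - e_old) / temperature)

def cool(t, alpha=0.95):
    # step 3.3.h) sketch: geometric cooling t_{a+1} = alpha * t_a
    # (alpha = 0.95 is an assumed schedule, not stated in the patent)
    return alpha * t
```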
step 4, optimizing the number of hidden layers and hidden layer nodes of the self-coding network by using a brainstorming optimization algorithm,
In step 3 the simulated annealing-brainstorming hybrid optimization algorithm trains the network weights, while in step 4 the brainstorming optimization algorithm optimizes the number of hidden layers and hidden layer nodes of the network, so the two algorithms do not conflict. In the network, the weights can only be determined once the number of hidden layers and hidden layer nodes is fixed; whenever the hidden layer number or hidden layer nodes change, the weights change correspondingly, and evaluating the objective function requires the weights. Therefore, in this process, the hidden layer number and hidden layer nodes are initialized first, then the weights; the optimal weights for the given hidden layer number and hidden layer nodes are obtained by training with the simulated annealing-brainstorming hybrid optimization algorithm. The hidden layer number and hidden layer nodes are then optimized, i.e. the optimal hidden layer number and hidden layer nodes are obtained with the brainstorming optimization algorithm, after which the optimal weights under that hidden layer number and those hidden layer nodes are obtained by training with the simulated annealing-brainstorming hybrid optimization algorithm.
This step optimizes the hidden layer number and hidden layer nodes of the network based on the brainstorming optimization algorithm to obtain the optimal hidden layer number and hidden layer nodes and their corresponding optimal weights, the flow chart is shown in fig. 7, and the specific process is as follows:
4.1) generating and updating NP new hidden layer numbers and hidden layer nodes, and the specific process comprises the following steps:
4.1.a) dividing NP hidden layer numbers and hidden layer nodes into 2 classes by using a K-means clustering algorithm;
4.1.b) selecting hidden layer number and hidden layer node,
generate a random value in (0, 1); if it is less than the probability parameter P1, a cluster center is randomly selected with the probability parameter P2 to realize the update of the hidden layer number and hidden layer nodes, with the specific process: generate a random value in (0, 1); if it is less than the probability parameter P3, select a cluster center and add a random value to generate a new hidden layer number and hidden layer nodes; otherwise, randomly select an individual from that cluster and add a random value to generate a new hidden layer number and hidden layer nodes;

if the value is greater than the probability parameter P1, two clusters are randomly selected to generate a new hidden layer number and hidden layer nodes, with the update process: generate a random value in (0, 1); if it is less than the probability parameter P4, combine the two cluster centers and add a random value to generate a new hidden layer number and hidden layer nodes; otherwise, select one random hidden layer number and hidden layer nodes from each of the two clusters and add a random value to generate new ones. The probability parameters P1, P2, P3, P4 in this step are the same as those used for weight selection in step 3.3.b);
4.1.c) carrying out mutation operation on the hidden layer number and the hidden layer nodes,
The mutation formula for the hidden layer number and hidden layer nodes is:

R_new^d = R_selected^d + ξ · N(μ, σ)   (14)

In formula (14), R_new^d is the d-th dimension of the hidden layer number and hidden layer nodes after mutation; R_selected^d is the d-th dimension of the hidden layer number and hidden layer nodes used for updating; ξ is the step-size coefficient used when generating a new hidden layer number and hidden layer nodes, calculated in the same way as formula (13);
4.1.d) initializing the number of hidden layers and the corresponding weight of the hidden layer node, and initializing the weight by using the step 3.2. b);
4.1.e) updating the number of hidden layers and the corresponding weight of the hidden layer node, and updating the weight by using the step 3.3);
4.1.f) evaluating hidden layer numbers and hidden layer nodes before and after mutation according to a self-coding network objective function, and reserving the hidden layer numbers and hidden layer nodes with high recognition rate;
4.2) carrying out iterative search optimization, and outputting the optimal number of hidden layers, hidden layer nodes and corresponding weights when the set convergence precision or the maximum iteration number is reached, thereby obtaining a trained network;
step 5, inputting the data to be recognized into the trained network, and classifying the data to be recognized by utilizing the trained network;
step 6, calculating the recognition rate of the data to be recognized:

recognition rate = a / s   (15)

In formula (15), a is the number of correctly classified data among the data to be identified and s is the total number of data to be identified, so the identification rate of the partial discharge signal data can be calculated directly.
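Formula (15) amounts to counting the correctly classified samples:

```python
def recognition_rate(predicted, actual):
    """Formula (15): rate = a / s, where a is the number of correctly
    classified samples and s is the total number of samples."""
    a = sum(1 for p, t in zip(predicted, actual) if p == t)
    return a / len(actual)

# Three of four predictions match the true labels, so the rate is 0.75.
rate = recognition_rate([1, 2, 2, 1], [1, 2, 1, 1])
```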
Examples
The implementation of the method of the present invention is described here using partial discharge data collected from a field transformer. According to the insulation structure inside the transformer, partial discharge is mainly classified into suspension discharge, needle-plate discharge, air-gap discharge, and creeping discharge.
In step 2, the self-coding network model is selected for pattern recognition of the partial discharge data: the stacked autoencoder and the softmax classifier are combined into a self-coding network, and the objective function of the network, formula (9), is determined.
And 3, training the network weight by using a simulated annealing-brainstorming hybrid optimization algorithm.
The algorithm-related parameters are set as: population size NP = 30; number of clusters n_c = 2; probability parameters P1 = 0.2, P2 = 0.8, P3 = 0.4, P4 = 0.5; maximum number of iterations 50; maximum number of weight-training iterations 2000; η = 0.00002; β = 20.
An initial hidden-layer number, hidden-layer nodes and corresponding weights are generated and set according to the principle of step 3.2) of the detailed description.
The weights are updated and trained according to step 3.3) of the detailed description to obtain the optimal weights for a given hidden-layer number and hidden-layer nodes.
The hidden-layer number and hidden-layer nodes are updated and optimized according to step 4) of the detailed description to obtain the optimal hidden-layer number, hidden-layer nodes and corresponding weights, thereby determining the parameters of the network.
Step 4, the test data are fed into the trained network for pattern recognition.
To verify performance more clearly, the original self-coding network (AE), the self-coding network optimized by the brain storm optimization algorithm (BSO-AE) and the self-coding network optimized by the simulated annealing-brain storm hybrid optimization algorithm (SABSO-AE) are compared with BSO-AE1 and SABSO-AE1, the variants of BSO-AE and SABSO-AE whose evaluation function consists only of the error function of the stacked autoencoder.
As can be seen from Tables 2 and 3, the recognition rate of AE on both the training data and the test data is higher than that of BSO-AE and BSO-AE1; the recognition rate of SABSO-AE1 is lower than that of AE on the training data but higher on the test data; and the recognition rate of SABSO-AE on both the training and test data is better than that of all other methods. SABSO-AE therefore recognizes the partial discharge data better and improves the recognition rate, indicating that the method provided by the present invention is effective.
TABLE 2 run results of training data on different evaluation functions
TABLE 3 run results of test data on different evaluation functions
The method removes the limitation of prior research, which performed pattern recognition only on laboratory data: it carries out partial discharge pattern recognition while making full use of complex on-site partial discharge data, and is therefore better suited to current engineering practice with massive data volumes. The parameters of the self-coding network are adjusted intelligently by the simulated annealing-brain storm hybrid optimization algorithm. Compared with other algorithms, the method achieves better recognition accuracy, saves time and labor, and improves the generality of the network.
Claims (3)
1. A transformer partial discharge mode identification method based on a self-coding network is characterized by comprising the following steps:
step 1, processing the collected partial discharge signal data,
The original data are processed with a nonlinear filtering method to eliminate random interference signals. Test and verification data for the network, together with their classification types, are determined; that is, the data are represented as labeled data {(x^(1), X^(1)), ..., (x^(m), X^(m))} or unlabeled data {x^(1), x^(2), ..., x^(m)}, where m is the number of data items, the i-th data item is x^(i), and its label is X^(i) ∈ {1, 2, ..., k}, with k the number of categories. The feature number of the data, i.e. its dimension, is also determined;
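As an illustrative sketch of the preprocessing in step 1 — the patent only names "a nonlinear filtering method", so the sliding-median filter below is an assumed example of such a method:

```python
def median_filter(signal, window=5):
    """Sliding-median filter: one common nonlinear method for removing
    random impulsive interference from a sampled signal."""
    half = window // 2
    filtered = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        neighborhood = sorted(signal[lo:hi])
        filtered.append(neighborhood[len(neighborhood) // 2])
    return filtered
```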
step 2, selecting a self-coding network as a network model,
2.1) since the stacked autoencoder itself has no classification capability, it is combined with a classifier to construct a new self-coding network;
2.2) determining an objective function of the self-coding network;
step 3, training the weight of the self-coding network by using a simulated annealing-brainstorming hybrid optimization algorithm, wherein the specific process is as follows:
3.1) setting parameters of a simulated annealing-brain storm hybrid optimization algorithm,
The algorithm parameters mainly comprise: the initial number of individuals NP, the maximum number of iterations KI_max, the probability parameters P1, P2, P3 and P4, the number of clusters n_c, and the initial temperature t0;
3.2) generating NP randomly distributed hidden-layer numbers, hidden-layer nodes and weights that satisfy the constraint conditions, according to the initialization formulas for the hidden-layer number, hidden-layer nodes and weights;
the initialized hidden layer number, hidden layer nodes and weight formula are as follows:
3.2.a) initializing the number of hidden layers and hidden-layer nodes,
In formula (10), L_i denotes the i-th hidden-layer number, N_i denotes the i-th number of hidden-layer nodes, max_L = 10, max_N = 300, R_i denotes the row vector composed of the i-th hidden-layer number and hidden-node number, and randint() denotes a random integer within a specified range;
3.2.b) initializing corresponding weight according to the hidden layer number and the hidden layer node,
In formula (11), n is the number of input-layer neurons and q is the number of hidden-layer neurons; the scales of n, q, W, b and d for each layer are shown in Table 1. W_i, b_i and d_i are generated randomly using decimal coding; r_i is the i-th weight; r_i^j is the j-th solution of the i-th weight; rand() is a random number in (0, 1);
TABLE 1 initialize weight scale of each layer
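The initialization of steps 3.2.a)-3.2.b) can be sketched as follows (illustrative Python; the bounds max_L = 10 and max_N = 300 are from the text, while the layer shapes and names are assumptions):

```python
import random

MAX_L, MAX_N = 10, 300  # bounds stated in formula (10)

def init_individual(n_inputs):
    """Generate one individual: a hidden-layer count L, per-layer node
    counts, and decimal-coded weights W, biases b (hidden) and d (output)
    for every layer, as in steps 3.2.a)-3.2.b)."""
    L = random.randint(1, MAX_L)
    nodes = [random.randint(1, MAX_N) for _ in range(L)]
    layers = []
    prev = n_inputs
    for q in nodes:
        W = [[random.random() for _ in range(prev)] for _ in range(q)]  # q x prev
        b = [random.random() for _ in range(q)]     # hidden-layer bias
        d = [random.random() for _ in range(prev)]  # decoder (output) bias
        layers.append((W, b, d))
        prev = q
    return L, nodes, layers
```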
3.3) generating and updating NP weight values;
the specific process is as follows:
3.3.a) dividing NP weights into n _ c classes in a self-coding network target function space by utilizing a K-means clustering algorithm;
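Step 3.3.a) relies on K-means clustering; a minimal illustrative sketch (the Euclidean distance measure and all names are assumptions) is:

```python
import random

def kmeans(points, n_c=2, iters=20):
    """Minimal K-means used in step 3.3.a) to split the NP individuals
    into n_c classes; points are plain coordinate lists."""
    centers = random.sample(points, n_c)
    for _ in range(iters):
        clusters = [[] for _ in range(n_c)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            j = min(range(n_c),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute centers as the mean of each cluster (keep old if empty)
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters
```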
3.3.b) selecting the weight value,
A random value in (0, 1) is generated; if the value is less than the probability parameter P1, a cluster center is selected randomly with the probability parameter P2 to realize the weight update, as follows: generate a random value in (0, 1); if it is less than the probability parameter P3, select a cluster center and add a random value to generate a new weight; otherwise, randomly select a weight from the cluster and add a random value to generate a new weight;
If the value is greater than the probability parameter P1, two clusters are selected randomly to generate a new weight, as follows: generate a random value in (0, 1); if it is less than the probability parameter P4, merge the two cluster centers and add a random value to generate a new weight; otherwise, select one random weight from each of the two clusters and add a random value to generate a new weight;
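The selection and combination logic of step 3.3.b) can be sketched as follows (illustrative; the handling of P2 and the element-wise averaging of merged centers are simplifying assumptions):

```python
import random

def generate_new_weight(clusters, centers, P1, P3, P4, xi):
    """One brain-storm update: pick a base individual from one cluster
    (its center with probability P3, otherwise a random member) or from
    a merge of two clusters (controlled by P4), then perturb it as in
    formula (12).  Merging by element-wise mean is an assumption."""
    if random.random() < P1:                          # use a single cluster
        c = random.randrange(len(centers))
        if random.random() < P3:
            base = centers[c]                         # cluster center
        else:
            base = random.choice(clusters[c])         # random member
    else:                                             # combine two clusters
        c1, c2 = random.sample(range(len(centers)), 2)
        if random.random() < P4:
            base = [(u + v) / 2 for u, v in zip(centers[c1], centers[c2])]
        else:
            m1, m2 = random.choice(clusters[c1]), random.choice(clusters[c2])
            base = [(u + v) / 2 for u, v in zip(m1, m2)]
    return [x + xi * random.gauss(0, 1) for x in base]
```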
3.3.c) carrying out variation operation on the weight, wherein the calculation formula of the weight variation operation is as follows:
r_new^d = r_select^d + ξ · N(μ, σ) (12)
In formula (12), r_new^d denotes the d-th dimension of the weight after mutation; r_select^d denotes the d-th dimension of the weight selected for updating; ξ denotes the weight coefficient used when generating a new weight; N(μ, σ) denotes a Gaussian random function with mean μ and variance σ;
ξ = logsig((0.5 · KI_max − iter) / k) · random() (13)
In formula (13), logsig() denotes the logarithmic sigmoid function; KI_max denotes the maximum number of iterations; iter denotes the current iteration number; k denotes the slope of the logsig() function; random() denotes a random number in (0, 1);
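Assuming the standard brain-storm form of formula (13), the step-size coefficient ξ can be sketched as:

```python
import math
import random

def logsig(x):
    """Logarithmic sigmoid: 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def step_coefficient(iter_cur, ki_max, k=20.0):
    """Weight coefficient xi of formula (13):
    xi = logsig((0.5 * KI_max - iter) / k) * random().
    Large early in the search (exploration), near zero late
    (exploitation); the slope k = 20 is illustrative."""
    return logsig((0.5 * ki_max - iter_cur) / k) * random.random()
```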
3.3.d) updating the weight value,
solving an objective function value corresponding to the generated new weight by using an objective function formula (9) of the self-coding network, evaluating the weights before and after the variation, and keeping the optimal weight;
3.3.e) if the Metropolis acceptance criterion exp(−(E(r'_new) − E(r_new))/t_a) ≥ random(0, 1) is satisfied, where E(r'_new) is the objective function value of r'_new and t_a is the temperature at the a-th step, accept the new weight and go to step 3.3.h); otherwise, go to step 3.3.f);
3.3.f) generating a new weight by r'_new = r_new + rand(1, Dim), where Dim is the dimension of the weight and r_new is the weight after the mutation operation;
3.3.h) updating t_a and setting a = a + 1;
3.3.i) when the maximum number of iterations is reached, outputting the optimal weight; otherwise, going to step 3.3.a);
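The simulated-annealing part of steps 3.3.e)-3.3.h) can be sketched as follows (the geometric cooling schedule is an assumption, since the patent does not fix the update of t_a):

```python
import math
import random

def metropolis_accept(e_old, e_new, t):
    """Metropolis criterion of step 3.3.e): always accept an improved
    weight; accept a worse one with probability exp(-(e_new - e_old)/t)."""
    if e_new <= e_old:
        return True
    return random.random() < math.exp(-(e_new - e_old) / t)

def cool(t, alpha=0.95):
    """Temperature update for step 3.3.h); a geometric schedule is an
    assumed choice, since the patent does not specify the update rule."""
    return alpha * t
```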
and 4, optimizing the number of hidden layers and hidden layer nodes of the self-coding network by using a brainstorming optimization algorithm, wherein the specific process is as follows:
4.1) generating and updating NP new hidden layer numbers and hidden layer nodes;
4.2) carrying out iterative search optimization, outputting the optimal hidden layer number and hidden layer nodes when the set convergence precision or the maximum iteration number is reached, and obtaining the optimal hidden layer number and the optimal weight corresponding to the hidden layer nodes by utilizing the step 3.3), thereby obtaining a trained network;
step 5, inputting the data to be recognized into the trained network, and classifying the data to be recognized by utilizing the trained network;
step 6, calculating the recognition rate of the data to be recognized:
recognition rate = a / s × 100% (15)
In formula (15), a is the number of correctly classified samples among the data to be identified and s is the total number of data to be identified, so the recognition rate of the partial discharge signal data can be computed directly.
2. The transformer partial discharge pattern recognition method based on self-coding network as claimed in claim 1, wherein in the step 2.2),
In the training process, the network model first maps the input data x into the hidden layer to obtain the hidden-layer feature y; this part is called the encoder. Then the next network layer maps y to the output layer to obtain the output data z; this part is called the decoder. These two parts are mathematically represented as:
In formula (1), W_1 is the weight matrix between the input layer and the hidden layer, W_2 is the weight matrix between the hidden layer and the output layer, b is the bias vector of the hidden layer, d is the bias vector of the output layer, and S_f and S_g are sigmoid functions as shown in formula (2), namely:
The weight matrix W_1 between the input layer and the hidden layer is taken to be the transpose W_2' of the weight matrix between the hidden layer and the output layer, thereby reducing the number of parameters, i.e.:
W_1 = W_2' = W (3)
Therefore, the number of self-coding network parameters is reduced to three, namely the weight W, the bias vector b of the hidden layer, and the bias vector d of the output layer;
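The tied-weight encoder/decoder of formulas (1)-(3) can be sketched as follows (illustrative NumPy sketch; names are not from the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def autoencoder_pass(x, W, b, d):
    """Formulas (1)-(3) with tied weights: encoder y = S_f(W x + b),
    decoder z = S_g(W^T y + d), so only W, b and d remain as parameters."""
    y = sigmoid(W @ x + b)    # encoder, formula (1)
    z = sigmoid(W.T @ y + d)  # decoder with W2 = W1^T, formula (3)
    return y, z
```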
the training goal is to minimize the difference between the output and the input, i.e.:
In equation (4), given x, z is adjusted through W, b and d; c(x, z) is the training target for each training sample, so the total training target is:
in equation (5), C (x, z) is the total training target, m is the number of training samples,
The classifier in this step adopts a softmax classifier. Given the training sample set {(x^(1), X^(1)), ..., (x^(m), X^(m))}, where m is the number of data items, the i-th data item is x^(i), and its label is X^(i) ∈ {1, 2, ..., k} with k the number of classes, the hypothesis of softmax regression is:
Each element p(X^(i) = j | x^(i); θ) of the hypothesis vector h_θ(x^(i)) represents the probability that sample x^(i) belongs to class j, and the elements of the vector sum to 1. θ_1, θ_2, ..., θ_k are the classifier parameter vectors, collectively represented by θ and written in matrix form:
the cost function of the softmax classifier is defined as:
In formula (8), m is the number of data items; 1{·} denotes the indicator function, whose value is 1 when the expression in braces is true and 0 otherwise; the term after the plus sign is the weight-decay term, where λ is the weight-decay coefficient;
the target function of the self-coding network is reset as follows:
E = ηC(x, z) + βJ(θ) (9)
In formula (9), η is the coefficient of the objective function of the stacked autoencoder, β is the coefficient of the cost function of the softmax classifier, C(x, z) is the overall training target of the stacked autoencoder, and J(θ) is the cost function of the softmax classifier.
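The softmax cost of formula (8) and the combined objective of formula (9) can be sketched as follows (illustrative; the numerically stabilized softmax is an implementation detail not in the patent):

```python
import numpy as np

def softmax_cost(theta, X, labels, lam):
    """Cost J(theta) of formula (8): mean negative log-likelihood of the
    true classes plus the weight-decay term (lam / 2) * ||theta||^2.
    theta: k x n parameters, X: m x n samples, labels in {0, ..., k-1}."""
    scores = X @ theta.T                                 # m x k class scores
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(scores)
    p = p / p.sum(axis=1, keepdims=True)                 # hypothesis of formula (7)
    m = X.shape[0]
    nll = -np.log(p[np.arange(m), labels]).mean()
    return nll + 0.5 * lam * np.sum(theta ** 2)

def network_objective(C_ae, J_softmax, eta, beta):
    """Combined self-coding network objective of formula (9):
    E = eta * C + beta * J."""
    return eta * C_ae + beta * J_softmax
```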
3. The transformer partial discharge pattern recognition method based on the self-coding network as claimed in claim 1, wherein in the step 4.1), the specific process is as follows:
4.1.a) dividing NP hidden layer numbers and hidden layer nodes into 2 classes by using a K-means clustering algorithm;
4.1.b) selecting hidden layer number and hidden layer node,
A random value in (0, 1) is generated; if the random value is less than the probability parameter P1, a cluster center is selected randomly with the probability parameter P2 to realize the update of the hidden-layer number and nodes, as follows: generate a random value in (0, 1); if it is less than the probability parameter P3, select a cluster center and add a random value to generate a new hidden-layer number and nodes; otherwise, randomly select an individual from the cluster and add a random value to generate a new hidden-layer number and nodes;
If the value is greater than the probability parameter P1, two clusters are selected randomly to generate a new hidden-layer number and nodes, as follows: generate a random value in (0, 1); if the random value is less than the probability parameter P4, merge the two cluster centers and add a random value to generate a new hidden-layer number and nodes; otherwise, select one random hidden-layer number and node set from each of the two clusters and add a random value to generate a new hidden-layer number and nodes;
4.1.c) carrying out mutation operation on the hidden layer number and the hidden layer nodes,
the hidden layer number and hidden layer node variation formula is as follows:
R_new^d = R_select^d + ξ · N(μ, σ) (14)
In formula (14), R_new^d denotes the d-th dimension of the mutated hidden-layer number and node vector; R_select^d denotes the d-th dimension of the hidden-layer number and node vector selected for updating; ξ denotes the weight coefficient when generating a new hidden-layer number and nodes, calculated in the same way as formula (13);
4.1.d) initializing the number of hidden layers and the corresponding weight of the hidden layer node, and initializing the weight by using the step 3.2. b);
4.1.e) updating the number of hidden layers and the corresponding weight of the hidden layer node, and updating the weight by using the step 3.3);
4.1.f) evaluating the hidden layer number and the hidden layer nodes before and after the mutation according to the self-coding network objective function, and reserving the hidden layer number and the hidden layer nodes with high recognition rate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910532414.1A CN110348489B (en) | 2019-06-19 | 2019-06-19 | Transformer partial discharge mode identification method based on self-coding network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110348489A CN110348489A (en) | 2019-10-18 |
CN110348489B true CN110348489B (en) | 2021-04-06 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111142001B (en) * | 2020-01-10 | 2022-04-22 | 三峡大学 | Transformer multi-source partial discharge mode identification method based on parallel characteristic domain |
CN112327219B (en) * | 2020-10-29 | 2024-03-12 | 国网福建省电力有限公司南平供电公司 | Distribution transformer fault diagnosis method with automatic feature mining and parameter automatic optimizing functions |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104037757A (en) * | 2014-05-20 | 2014-09-10 | 西安理工大学 | Brainstorming-based thermal power plant economic environment scheduling method |
CN106503689A (en) * | 2016-11-14 | 2017-03-15 | 哈尔滨理工大学 | Neutral net local discharge signal mode identification method based on particle cluster algorithm |
CN108040324A (en) * | 2017-11-16 | 2018-05-15 | 南方科技大学 | A kind of localization method and alignment system of survival capsule robot |
CN108399105A (en) * | 2018-02-27 | 2018-08-14 | 天津大学 | A kind of Method for HW/SW partitioning based on improvement brainstorming algorithm |
CN108573225A (en) * | 2018-03-30 | 2018-09-25 | 国网天津市电力公司电力科学研究院 | A kind of local discharge signal mode identification method and system |
CN108694473A (en) * | 2018-06-15 | 2018-10-23 | 常州瑞信电子科技有限公司 | Building energy consumption prediction technique based on RBF neural |
CN108957251A (en) * | 2018-05-18 | 2018-12-07 | 深圳供电局有限公司 | A kind of cable connector Partial Discharge Pattern Recognition Method |
CN109102012A (en) * | 2018-07-30 | 2018-12-28 | 上海交通大学 | A kind of defect identification method and system of local discharge signal |
CN109388858A (en) * | 2018-09-17 | 2019-02-26 | 西安航空电子科技有限公司 | Nonlinear transducer bearing calibration based on brainstorming optimization algorithm |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156245B (en) * | 2011-03-11 | 2016-08-03 | 太原理工大学 | A kind of mine high-voltage cable on-line fault diagnosis and method for early warning |
CN103323755A (en) * | 2013-06-17 | 2013-09-25 | 广东电网公司电力科学研究院 | Method and system for recognition of GIS ultrahigh frequency partial discharge signal |
CN105334436B (en) * | 2015-10-30 | 2018-08-10 | 山东电力研究院 | Crosslinked cable Partial Discharge Pattern Recognition Method based on SOM-BP combination neural nets |
CN109188211B (en) * | 2018-07-30 | 2021-02-05 | 上海交通大学 | High-voltage equipment insulation fault diagnosis method and system |
CN109375116B (en) * | 2018-08-09 | 2021-12-14 | 上海国际汽车城(集团)有限公司 | Battery system abnormal battery identification method based on self-encoder |
Non-Patent Citations (2)
Title |
---|
"An Improved Brain Storm Optimization with Differential Evolution Strategy for Applications of ANNs";Zijian Cao et al.;《Mathematical Problems in Engineering》;20151231;第1-18页 * |
"Hybrid brain storm optimisation and simulated annealing algorithm for continuous optimisation problems";Zhengxuan Jia et al.;《International Journal of Bio-Inspired Computation》;20161231;第8卷(第2期);第109-121页 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||