CN110348489B - Transformer partial discharge mode identification method based on self-coding network

Transformer partial discharge mode identification method based on self-coding network

Info

Publication number
CN110348489B
Authority
CN
China
Prior art keywords
hidden layer
weight
data
self
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910532414.1A
Other languages
Chinese (zh)
Other versions
CN110348489A (en)
Inventor
吴亚丽
王鑫睿
李国婷
付玉龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201910532414.1A priority Critical patent/CN110348489B/en
Publication of CN110348489A publication Critical patent/CN110348489A/en
Application granted granted Critical
Publication of CN110348489B publication Critical patent/CN110348489B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/50 Testing of electric apparatus, lines, cables or components for short-circuits, continuity, leakage current or incorrect line connections
    • G01R31/62 Testing of transformers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • H02J13/0013
    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00 Circuit arrangements for ac mains or ac distribution networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Power Engineering (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a transformer partial discharge mode identification method based on a self-coding network, which comprises the following steps: step 1, processing the collected partial discharge signal data; step 2, selecting a self-coding network as the network model; step 3, training the weights of the self-coding network with a simulated annealing-brainstorming hybrid optimization algorithm; step 4, optimizing the number of hidden layers and hidden layer nodes of the self-coding network with a brainstorming optimization algorithm, thereby obtaining a trained network; step 5, inputting the data to be recognized into the trained network and classifying them with the trained network; and step 6, calculating the identification rate on the data to be identified. The method of the invention achieves better identification precision, saves time and labor, and improves the generality of the network.

Description

Transformer partial discharge mode identification method based on self-coding network
Technical Field
The invention belongs to the technical field of power equipment monitoring, and relates to a transformer partial discharge mode identification method based on a self-coding network.
Background
Electrical equipment is a major component of the electric power system, and its failure can cause heavy losses. With the development of science and technology, the functions of power systems have become more complex and more highly automated. As the functions and performance of electrical equipment improve, the factors influencing it multiply, and so does the possibility of malfunction. A single faulty component can trigger a chain reaction that prevents the power system from operating properly, so the power system must ensure the proper functioning of its electrical equipment.
The transformer is one of the key pieces of equipment in the power system: it converts and distributes electric energy, is expensive to manufacture, and has a complex structure. A transformer failure disrupts the operation of the power system, inconveniencing people's lives and causing huge economic losses, so the normal and safe operation of transformers plays an important role in the normal operation of the power system. Transformer failures have many causes, such as man-made damage, environmental influences, and deterioration of the transformer itself; deterioration of the transformer itself, i.e., insulation degradation caused by long-term operation, is the main cause of transformer failure.
Partial discharge seriously damages the insulation performance of the transformer, mainly as follows: charged particles strike the molecular structure, degrading or even destroying the insulation; the insulation temperature rises sharply because of the large amount of heat the particles generate on impact; large quantities of oxides are produced during discharge, and these react chemically with moisture to form nitric acid, corroding the insulation; and Schottky injection causes oil breakdown, which degrades its heat-dissipation performance, a long and slow process. Partial discharge is thus both a cause and a manifestation of transformer insulation degradation. Different types of partial discharge, however, cause different insulation damage, so pattern recognition of partial discharge, which allows the discharge type to be judged quickly and accurately, is of vital significance to the normal operation of the transformer.
The massive amounts of data acquired by existing monitoring systems have brought partial discharge monitoring into the big-data era, and identifying partial discharge signals with traditional manual feature extraction and shallow neural networks has become very difficult or even impossible. Researching and applying advanced theories and methods that can extract features from partial discharge data and identify them accurately has therefore become a new problem facing transformer partial discharge work.
Disclosure of Invention
The invention aims to provide a partial discharge mode identification method based on a self-coding network, which breaks free of the limitation of existing research that performs pattern recognition only on experimental data and instead performs partial discharge mode identification while making full use of complex field partial discharge data, so that the method is better suited to current engineering practice with its massive data volumes.
The technical scheme of the invention is that a transformer partial discharge mode identification method based on a self-coding network is implemented according to the following steps:
step 1, processing the collected partial discharge signal data
Processing the original data with a nonlinear filtering method to eliminate random interference signals; determining the type of the network's test data and validation data and their classification, i.e. the data are expressed as labeled data {(x^(1), X^(1)), ..., (x^(m), X^(m))} or unlabeled data x^(1), x^(2), ..., x^(m), where m is the number of data, the i-th datum is x^(i), and its label is X^(i) ∈ {1, 2, ..., k}, where k is the number of classes; determining the feature number, i.e. the dimension, of the data;
step 2, selecting a self-coding network as a network model,
2.1) because the stacked autoencoder does not have classification capability, it is combined with a classifier to construct a new self-coding network;
2.2) determining an objective function of the self-coding network;
step 3, training the weight of the self-coding network by using a simulated annealing-brainstorming hybrid optimization algorithm, wherein the specific process is as follows:
3.1) setting parameters of simulated annealing-brainstorming hybrid optimization algorithm
The algorithm parameters mainly comprise: the initial number of individuals NP, the maximum number of iterations KI_max, the probability parameters P1, P2, P3 and P4, the number of clusters n_c, and the initial temperature t0;
3.2) generating, according to the initialization formulas for the hidden layer number, hidden layer nodes and weights, NP randomly distributed hidden layer numbers, hidden layer node numbers and weights that satisfy the constraint conditions;
3.3) generating and updating NP weight values;
and 4, optimizing the number of hidden layers and hidden layer nodes of the self-coding network by using a brainstorming optimization algorithm, wherein the specific process is as follows:
4.1) generating and updating NP new hidden layer numbers and hidden layer nodes;
4.2) carrying out iterative search optimization, and outputting the optimal number of hidden layers, hidden layer nodes and corresponding weights when the set convergence precision or the maximum iteration number is reached, thereby obtaining a trained network;
step 5, inputting the data to be recognized into the trained network, and classifying the data to be recognized by utilizing the trained network;
step 6, calculating the recognition rate of the data to be recognized
recognition rate = (a / s) × 100%   (15)

In formula (15), a is the number of correctly classified data to be identified and s is the total number of data to be identified, so that the recognition rate of the partial discharge signal data can be calculated directly.
The invention has the advantage that the collected partial discharge data are subjected to feature extraction and classification so as to effectively solve the partial discharge mode identification problem, specifically:
1) due to the complexity of mass data on the partial discharge site, the self-coding network is utilized to train the partial discharge data, and the identification precision of partial discharge is improved.
2) In the training process of the self-coding network, the number of hidden layers of the network, hidden layer nodes and corresponding weights are trained by using a simulated annealing-brainstorming hybrid optimization algorithm, so that the recognition accuracy is optimal, and the universality of the network is improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a simplified diagram of a self-encoding network architecture in accordance with the present invention;
FIG. 3 is a schematic diagram of a single layer codec of the present invention;
FIG. 4 is a flow chart of simulated annealing-brainstorming hybrid optimization algorithm versus weight training in the present invention;
FIG. 5 is a flow chart of a K-means clustering method in the present invention;
FIG. 6 is a flow chart of selecting weights in the present invention;
fig. 7 is a flow chart of the optimization of the number of hidden layers and hidden layer nodes by the brainstorming optimization algorithm in the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a transformer partial discharge mode identification method based on a self-coding network, which mainly comprises the steps of establishing a self-coding network model and solving network parameters by using a simulated annealing-brain storm hybrid optimization algorithm, as shown in figure 1, and is specifically implemented according to the following steps:
step 1, processing the collected partial discharge signal data,
the collected massive partial discharge original data contains noise and other interferences, and preprocessing is needed before analysis and application to ensure stability and reliability of the data. In the step, a nonlinear filtering method is adopted to process the original data, and the random signal of interference is eliminated.
Determine the type of the network's test data and validation data and their classification, i.e. the data are expressed as labeled data {(x^(1), X^(1)), ..., (x^(m), X^(m))} or unlabeled data x^(1), x^(2), ..., x^(m), where m is the number of data, the i-th datum is x^(i), and its label is X^(i) ∈ {1, 2, ..., k}, where k is the number of classes; determine the feature number, i.e. the dimension, of the data;
step 2, selecting a self-coding network as a network model,
2.1) because the stacked autoencoder does not have classification capability, it is combined with a classifier to construct a new self-coding network; the classifier adopts a softmax classifier, as shown in FIG. 2;
2.2) determining an objective function of the self-encoding network,
as shown in FIG. 3, during training the network model first maps the input data x into the hidden layer to obtain the hidden-layer feature y; this part is called the encoder. The next network layer then maps y to the output layer to obtain the output data z; this part is called the decoder. The two parts are expressed mathematically as:
y = S_f(W_1 x + b)
z = S_g(W_2 y + d)   (1)

In formula (1), W_1 is the weight matrix between the input layer and the hidden layer, W_2 is the weight matrix between the hidden layer and the output layer, b is the bias vector of the hidden layer, d is the bias vector of the output layer, and S_f, S_g are sigmoid functions as shown in formula (2), namely:

S(t) = 1 / (1 + e^(-t))   (2)

The weight matrix W_1 between the input layer and the hidden layer is taken as the transpose W_2' of the weight matrix between the hidden layer and the output layer, thereby reducing the number of parameters, namely:

W_1 = W_2' = W   (3)
therefore, the number of self-coding network parameters is changed into three, namely a weight W, a bias vector b of a hidden layer and a bias vector d of an output layer;
the training goal is to minimize the difference between the output and the input, i.e.:
c(x, z) = ||x − z||^2   (4)

In formula (4), z is adjusted by W, b and d for a given x, and c(x, z) is the training target for each training sample; the total training target is then:

C(x, z) = (1/m) · Σ_{i=1}^{m} c(x^(i), z^(i))   (5)

In formula (5), C(x, z) is the total training target and m is the number of training samples.
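To make formulas (1)-(5) concrete, here is a minimal numpy sketch of the tied-weight forward pass and the total reconstruction target. The squared-error form of the per-sample target c(x, z) is an assumption consistent with "minimize the difference between the output and the input"; all sizes are illustrative.

```python
import numpy as np

def sigmoid(t):
    # S_f = S_g, formula (2)
    return 1.0 / (1.0 + np.exp(-t))

def autoencoder_objective(W, b, d, X):
    """Total training target C(x, z) of formulas (1)-(5).

    W: (q, n) tied weight matrix (W_1 = W_2' = W, formula (3))
    b: (q,) hidden-layer bias; d: (n,) output-layer bias
    X: (m, n) m training samples of dimension n
    """
    Y = sigmoid(X @ W.T + b)   # encoder, formula (1): y = S_f(W_1 x + b)
    Z = sigmoid(Y @ W + d)     # decoder, formula (1): z = S_g(W_2 y + d)
    # assumed per-sample target c(x, z) = ||x - z||^2, averaged over m samples
    return np.mean(np.sum((X - Z) ** 2, axis=1))

rng = np.random.default_rng(0)
n, q, m = 400, 120, 8   # illustrative sizes
C = autoencoder_objective(0.1 * rng.normal(size=(q, n)),
                          np.zeros(q), np.zeros(n), rng.random((m, n)))
```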
The classifier in this step adopts a softmax classifier. When the training sample set is {(x^(1), X^(1)), ..., (x^(m), X^(m))}, where m is the number of data, the i-th datum is x^(i), and its label is X^(i) ∈ {1, 2, ..., k} with k the number of classes, the hypothesis of softmax regression is:

h_θ(x^(i)) = [p(X^(i) = 1 | x^(i); θ), p(X^(i) = 2 | x^(i); θ), ..., p(X^(i) = k | x^(i); θ)]^T
           = (1 / Σ_{j=1}^{k} e^(θ_j^T x^(i))) · [e^(θ_1^T x^(i)), e^(θ_2^T x^(i)), ..., e^(θ_k^T x^(i))]^T   (6)

Each element p(X^(i) = j | x^(i); θ) of the hypothesis vector h_θ(x^(i)) represents the probability that sample x^(i) belongs to class j, and the elements of the vector sum to 1; θ_1, θ_2, ..., θ_k are the classifier parameter vectors, written in matrix form as:

θ = [θ_1^T; θ_2^T; ...; θ_k^T]   (7)
the cost function of the softmax classifier is defined as:
J(θ) = −(1/m) · [Σ_{i=1}^{m} Σ_{j=1}^{k} 1{X^(i) = j} · log(e^(θ_j^T x^(i)) / Σ_{l=1}^{k} e^(θ_l^T x^(i)))] + (λ/2) · Σ_{i,j} θ_{ij}^2   (8)

In formula (8), m is the number of data; 1{·} denotes the indicator function, which takes the value 1 when the expression in braces is true and 0 otherwise; the term after the plus sign is a weight-decay term used to resolve the numerical problems caused by parameter redundancy, where λ is the weight-decay coefficient.

In order to improve the recognition rate of the classification, the evaluation function is redesigned, and the objective function of the self-coding network is therefore reset as follows:
E = η·C(x, z) + β·J(θ)   (9)

In formula (9), η is the coefficient of the objective function of the stacked autoencoder, β is the coefficient of the cost function of the softmax classifier, C(x, z) is the total training target of the stacked autoencoder, and J(θ) is the cost function of the softmax classifier;
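The sketch below evaluates the softmax cost J(θ) of formula (8) and combines it with a reconstruction target via formula (9). It is a minimal illustration; the max-subtraction for numerical stability and the example sizes are assumptions not taken from the patent (η and β are the values used in the embodiment below).

```python
import numpy as np

def softmax_cost(theta, X, labels, lam):
    """Cost function J(theta) of formula (8).

    theta: (k, n) one parameter vector theta_j per class, formula (7)
    X: (m, n) samples; labels: (m,) integer classes in {0, ..., k-1}
    lam: weight-decay coefficient lambda
    """
    scores = X @ theta.T                          # theta_j^T x^(i)
    scores -= scores.max(axis=1, keepdims=True)   # stability shift (assumed)
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)     # p(X^(i)=j | x^(i); theta)
    m = X.shape[0]
    # the indicator 1{X^(i)=j} simply picks out the probability of the true class
    log_lik = np.log(probs[np.arange(m), labels]).sum()
    return -log_lik / m + 0.5 * lam * np.sum(theta ** 2)

def network_objective(C, J, eta, beta):
    # formula (9): E = eta * C(x, z) + beta * J(theta)
    return eta * C + beta * J

rng = np.random.default_rng(0)
J = softmax_cost(0.01 * rng.normal(size=(4, 400)),  # k=4 discharge types
                 rng.random((10, 400)), rng.integers(0, 4, 10), lam=1e-4)
E = network_objective(C=0.5, J=J, eta=0.00002, beta=20.0)
```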
step 3, training the weight of the self-coding network by using a simulated annealing-brainstorming hybrid optimization algorithm,
On the basis of the brain storm optimization algorithm, the algorithm is improved into the simulated annealing-brain storm hybrid optimization algorithm, which is then used to train the weights of the self-coding network, thereby improving the recognition capability of the network.
Referring to fig. 4, the simulated annealing-brainstorming hybrid optimization algorithm trains the weight of the self-coding network, and the specific process is as follows:
3.1) setting parameters of a simulated annealing-brain storm hybrid optimization algorithm,
The algorithm parameters mainly comprise: the initial number of individuals NP, the maximum number of iterations KI_max, the probability parameters P1, P2, P3 and P4, the number of clusters n_c, and the initial temperature t0;
3.2) generating, according to the initialization formulas for the hidden layer number, hidden layer nodes and weights, NP randomly distributed hidden layer numbers, hidden layer node numbers and weights that satisfy the constraint conditions,
the initialization formulas for the hidden layer number, hidden layer nodes and weights being as follows:
3.2.a) initializing hidden layer number and hidden layer nodes,
L_i = randint(1, max_L), N_i = randint(1, max_N), R_i = [L_i, N_i]   (10)

In formula (10), L_i denotes the i-th hidden layer number, N_i denotes the i-th hidden layer node number, max_L = 10, max_N = 300, R_i denotes the row vector composed of the i-th hidden layer number and hidden layer node number, and randint() denotes a random integer within the prescribed range,
3.2.b) initializing corresponding weight according to the hidden layer number and the hidden layer node,
W_i ∈ R^(q×n), b_i ∈ R^(q×1), d_i ∈ R^(n×1), r_i = [W_i, b_i, d_i]   (11)

In formula (11), n is the number of input-layer neurons and q is the number of hidden-layer neurons, and each initial solution is drawn as

r_i^j = rand(1, Dim)

where the scale of n, q, W, b, d for each layer is shown in Table 1; W_i, b_i, d_i are generated randomly in decimal coding; r_i is the i-th weight; r_i^j is the j-th solution of the i-th weight; and rand() is a random number in (0, 1);
TABLE 1 Initialized weight scale of each layer

  Parameter | Scale
  W_i       | q × n
  b_i       | q × 1
  d_i       | n × 1
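A sketch of this initialization under the stated bounds max_L = 10 and max_N = 300 follows. The assumption that every hidden layer of an individual uses the same node count N, and the input dimension of 400 (taken from the embodiment below), are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)
MAX_L, MAX_N, NP = 10, 300, 30   # bounds from formula (10); NP from step 3.1

def init_architecture():
    """Formula (10): R_i = [L_i, N_i] with random layer and node counts."""
    return int(rng.integers(1, MAX_L + 1)), int(rng.integers(1, MAX_N + 1))

def init_weights(layer_sizes):
    """Formula (11): for each layer with n inputs and q hidden neurons, draw
    W (q x n), b (q,), d (n,) by decimal coding, i.e. uniformly in (0, 1)."""
    return [(rng.random((q, n)), rng.random(q), rng.random(n))
            for n, q in zip(layer_sizes[:-1], layer_sizes[1:])]

# NP individuals, each an architecture plus its decimal-coded weights
population = []
for _ in range(NP):
    L, N = init_architecture()
    population.append(init_weights([400] + [N] * L))  # same N per layer (assumed)
```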
3.3) generating and updating NP weights, and the specific process is as follows:
3.3.a) dividing NP weights into n _ c classes in a self-coding network target function space by utilizing a K-means clustering algorithm, wherein the step of K-means clustering is shown in figure 5;
3.3.b) selecting the weight value,
As shown in FIG. 6, a random value in (0, 1) is generated. If the value is less than the probability parameter P1, a cluster center is selected at random with the probability parameter P2 to realize the weight update, the specific process being: generate a random value in (0, 1); if it is less than the probability parameter P3, select the cluster center and add a random value to generate a new weight; otherwise, randomly select a weight from the cluster and add a random value to generate a new weight;
if the value is greater than the probability parameter P1, two clusters are randomly selected to generate a new weight, the update process being: generate a random value in (0, 1); if it is less than the probability parameter P4, merge the two cluster centers and add a random value to generate a new weight; otherwise, select two random weights from the two clusters and add a random value to generate a new weight;
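The selection logic of step 3.3.b) just described can be sketched as follows, assuming the NP weight vectors have already been partitioned into n_c clusters by K-means (for example with sklearn.cluster.KMeans). Choosing the cluster uniformly at random in place of the P2-based choice, and merging two picks by averaging, are simplifying assumptions; the patent describes these operations only in prose.

```python
import numpy as np

def select_base_weight(weights, labels, centers, P1, P3, P4, rng):
    """One draw of the cluster-based selection of step 3.3.b).

    weights: (NP, Dim) population of weight vectors
    labels:  (NP,) cluster index of each weight (from K-means)
    centers: (n_c, Dim) cluster centers
    Returns the base vector to which the mutation of formula (12) is added.
    """
    n_c = len(centers)
    if rng.random() < P1:                        # one-cluster branch
        c = rng.integers(n_c)                    # uniform cluster choice (assumed)
        if rng.random() < P3:
            return centers[c]                    # the cluster center itself
        members = np.flatnonzero(labels == c)
        return weights[rng.choice(members)]      # a random member of the cluster
    c1, c2 = rng.choice(n_c, size=2, replace=False)  # two-cluster branch
    if rng.random() < P4:
        return 0.5 * (centers[c1] + centers[c2])     # merged centers (averaged)
    w1 = weights[rng.choice(np.flatnonzero(labels == c1))]
    w2 = weights[rng.choice(np.flatnonzero(labels == c2))]
    return 0.5 * (w1 + w2)                           # merged random members
```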
3.3.c) performing variation operation on the weight value,
the calculation formula of the weight variation operation is as follows:
r_new^d = r_selected^d + ξ · N(μ, σ)   (12)

ξ = logsig((0.5 · KI_max − iter) / k) · random()   (13)

In formula (12), r_new^d denotes the d-th dimension of the weight after mutation; r_selected^d denotes the d-th dimension of the weight used for updating; ξ denotes the weight coefficient value when generating a new weight; and N(μ, σ) denotes a Gaussian random function with mean μ and variance σ.

In formula (13), logsig() denotes the logarithmic sigmoid function; KI_max denotes the maximum number of iterations; iter denotes the current number of iterations; k changes the slope of the logsig() function; and random() denotes a random number in (0, 1);
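In code, formulas (12) and (13) amount to the following sketch; the slope constant k = 20 and the values μ = 0, σ = 1 are assumed examples.

```python
import numpy as np

def logsig(x):
    # logarithmic sigmoid used in formula (13)
    return 1.0 / (1.0 + np.exp(-x))

def step_size(ki_max, itr, k=20.0):
    """Formula (13): xi = logsig((0.5*KI_max - iter)/k) * random().
    The step size decays as the iteration count grows."""
    return logsig((0.5 * ki_max - itr) / k) * np.random.random()

def mutate(r_selected, ki_max, itr, mu=0.0, sigma=1.0):
    """Formula (12): r_new^d = r_selected^d + xi * N(mu, sigma), per dimension."""
    xi = step_size(ki_max, itr)
    return r_selected + xi * np.random.normal(mu, sigma, size=r_selected.shape)
```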
3.3.d) updating the weight value,
solving an objective function value corresponding to the generated new weight by using an objective function formula (9) of the self-coding network, evaluating the weights before and after the variation, and keeping the optimal weight;
3.3.e) if the Metropolis acceptance criterion is satisfied, i.e.

exp(−(E(r_j) − E(r_new)) / t_a) > random(0, 1),

where E(r_j) is the objective function value of r_j and t_a is the temperature at the a-th step, go to step 3.3.h); otherwise, go to step 3.3.f);

3.3.f) generate a new weight by r_j = r_new + rand(1, Dim), where Dim is the dimension of the weight and r_new is the weight after the mutation operation;

3.3.g) if E(r_j) < E(r_new), then set r_new = r_j; otherwise, keep r_new unchanged; go to step 3.3.e) (a sketch of these annealing steps is given after step 3.3.i) below);
3.3.h) update t_a, and set a = a + 1;
3.3.i) outputting the optimal weight when the maximum iteration times is reached; otherwise, turning to step 3.3. a);
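The annealing steps 3.3.e)-3.3.h) can be sketched as below. Because the acceptance test and the comparison in step 3.3.g) appear only as images in the source, this sketch uses the standard Metropolis criterion with a geometric cooling schedule; t0, the cooling factor, and the step count are assumed values.

```python
import numpy as np

def anneal(r_new, E, t0=100.0, alpha=0.95, steps=50, rng=None):
    """Simulated-annealing refinement of a mutated weight vector r_new,
    where E maps a weight vector to the objective value of formula (9)."""
    rng = rng or np.random.default_rng()
    t = t0
    for _ in range(steps):
        r_j = r_new + rng.random(r_new.shape)        # 3.3.f): neighbour weight
        dE = E(r_j) - E(r_new)
        # 3.3.g) (assumed standard Metropolis): always accept an improvement,
        # accept a worse point with probability exp(-dE / t)
        if dE < 0.0 or np.exp(-dE / t) > rng.random():
            r_new = r_j
        t *= alpha                                   # 3.3.h): update t_a
    return r_new

# usage on a toy quadratic objective over a 5-dimensional weight vector
best = anneal(np.zeros(5), lambda r: float(np.sum((r - 0.3) ** 2)))
```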
step 4, optimizing the number of hidden layers and hidden layer nodes of the self-coding network by using a brainstorming optimization algorithm,
In step 3, the simulated annealing-brainstorming hybrid optimization algorithm trains the network weights, and in step 4 the brainstorming optimization algorithm optimizes the number of hidden layers and hidden layer nodes of the network; the two algorithms do not conflict. In the network, the weights can only be determined once the number of hidden layers and hidden layer nodes is fixed; if the number of hidden layers or hidden layer nodes changes, the weights change accordingly, and the objective function can only be evaluated through the weights. Therefore the process first initializes the number of hidden layers and hidden layer nodes, then initializes the weights, and then obtains the optimal weights for the given hidden layer number and hidden layer nodes, i.e., trains them with the simulated annealing-brainstorming hybrid optimization algorithm. The hidden layer number and hidden layer nodes are then optimized, i.e., the optimal hidden layer number and hidden layer nodes are obtained with the brainstorming optimization algorithm, after which the simulated annealing-brainstorming hybrid optimization algorithm is used again to train the optimal weights under that hidden layer number and those hidden layer nodes.
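This nested structure, an outer architecture search wrapping an inner weight search, can be illustrated with the toy skeleton below. The objective, the random-search stand-in for the inner SA-BSO training, and the single-candidate outer loop are all placeholders for the real steps 3 and 4, kept only to show how the two loops nest.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_objective(arch, w):
    """Placeholder for the network objective E: prefers an architecture near
    (3 layers, 150 nodes) and weights near 0.5. Purely illustrative."""
    L, N = arch
    return (L - 3) ** 2 + ((N - 150) / 50.0) ** 2 + float(np.sum((w - 0.5) ** 2))

def inner_train(arch, steps=200):
    """Placeholder for step 3 (SA-BSO weight training): random search here."""
    w = rng.random(5)
    for _ in range(steps):
        cand = np.clip(w + 0.1 * rng.normal(size=5), 0.0, 1.0)
        if toy_objective(arch, cand) < toy_objective(arch, w):
            w = cand
    return w

# outer loop, placeholder for step 4 (BSO over hidden layers and nodes):
# every candidate architecture gets its weights re-trained before scoring
arch = (int(rng.integers(1, 11)), int(rng.integers(1, 301)))
best_arch, best_w = arch, inner_train(arch)
for _ in range(30):
    cand = (int(np.clip(best_arch[0] + rng.integers(-1, 2), 1, 10)),
            int(np.clip(best_arch[1] + rng.integers(-20, 21), 1, 300)))
    w = inner_train(cand)
    if toy_objective(cand, w) < toy_objective(best_arch, best_w):
        best_arch, best_w = cand, w
```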
This step optimizes the hidden layer number and hidden layer nodes of the network based on the brainstorming optimization algorithm to obtain the optimal hidden layer number and hidden layer nodes and their corresponding optimal weights, the flow chart is shown in fig. 7, and the specific process is as follows:
4.1) generating and updating NP new hidden layer numbers and hidden layer nodes, and the specific process comprises the following steps:
4.1.a) dividing NP hidden layer numbers and hidden layer nodes into 2 classes by using a K-means clustering algorithm;
4.1.b) selecting hidden layer number and hidden layer node,
Generate a random value in (0, 1): if the random value is less than the probability parameter P1, a cluster center is selected at random with the probability parameter P2 to update the hidden layer number and hidden layer nodes, the specific process being: generate a random value in (0, 1); if it is less than the probability parameter P3, select the cluster center and add a random value to generate a new hidden layer number and hidden layer nodes; otherwise, randomly select an individual from the cluster and add a random value to generate a new hidden layer number and hidden layer nodes;
if the value is greater than the probability parameter P1, two clusters are randomly selected to generate a new hidden layer number and hidden layer nodes, the update process being: generate a random value in (0, 1); if it is less than the probability parameter P4, merge the two cluster centers and add a random value to generate a new hidden layer number and hidden layer nodes; otherwise, select two random hidden layer numbers and hidden layer nodes from the two clusters and add a random value to generate a new hidden layer number and hidden layer nodes; the probability parameters P1, P2, P3, P4 in this step are consistent with the probability parameters P1, P2, P3, P4 used in selecting the weight in step 3.3.b);
4.1.c) carrying out mutation operation on the hidden layer number and the hidden layer nodes,
the hidden layer number and hidden layer node variation formula is as follows:
R_new^d = R_selected^d + ξ · N(μ, σ)   (14)

In formula (14), R_new^d denotes the d-th dimension of the hidden layer number and hidden layer nodes after mutation; R_selected^d denotes the d-th dimension of the hidden layer number and hidden layer nodes used for updating; and ξ denotes the weight coefficient value when generating a new hidden layer number and hidden layer nodes, calculated in the same way as in formula (13);
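Since R = [L, N] is integer-valued, applying formula (14) needs a rounding and clipping step back into the ranges of formula (10); that step is left implicit in the patent, so the sketch below makes an assumed choice.

```python
import numpy as np

def mutate_architecture(R_selected, xi, rng, max_L=10, max_N=300):
    """Formula (14): R_new^d = R_selected^d + xi * N(mu, sigma), followed by
    an assumed round-and-clip so L and N stay valid integers."""
    R_new = np.asarray(R_selected, dtype=float) + xi * rng.normal(0.0, 1.0, 2)
    L = int(np.clip(np.rint(R_new[0]), 1, max_L))
    N = int(np.clip(np.rint(R_new[1]), 1, max_N))
    return np.array([L, N])

new_R = mutate_architecture([3, 150], xi=0.7, rng=np.random.default_rng(0))
```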
4.1.d) initializing the number of hidden layers and the corresponding weight of the hidden layer node, and initializing the weight by using the step 3.2. b);
4.1.e) updating the number of hidden layers and the corresponding weight of the hidden layer node, and updating the weight by using the step 3.3);
4.1.f) evaluating hidden layer numbers and hidden layer nodes before and after mutation according to a self-coding network objective function, and reserving the hidden layer numbers and hidden layer nodes with high recognition rate;
4.2) carrying out iterative search optimization, and outputting the optimal number of hidden layers, hidden layer nodes and corresponding weights when the set convergence precision or the maximum iteration number is reached, thereby obtaining a trained network;
step 5, inputting the data to be recognized into the trained network, and classifying the data to be recognized by utilizing the trained network;
step 6, calculating the recognition rate of the data to be recognized:
recognition rate = (a / s) × 100%   (15)

In formula (15), a is the number of correctly classified data to be identified and s is the total number of data to be identified, so that the recognition rate of the partial discharge signal data can be calculated directly.
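Formula (15) is a plain accuracy computation; a few lines suffice to illustrate it:

```python
def recognition_rate(predicted, actual):
    """Formula (15): a / s * 100%, where a is the number of correctly
    classified samples and s the total number of samples."""
    a = sum(int(p == t) for p, t in zip(predicted, actual))
    return 100.0 * a / len(actual)

print(recognition_rate([1, 2, 2, 4], [1, 2, 3, 4]))  # 75.0
```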
Examples
The implementation process of the method of the present invention is described below, taking partial discharge data collected from a field transformer as an example. According to the insulation structure inside the transformer, partial discharge is mainly classified into suspension discharge, needle-plate discharge, air-gap discharge, and creeping discharge.
Step 1, process the data collected on site and determine the training data and test data: 700 data are collected for each partial discharge type, with 2000 training data and 500 test samples, and the sample dimension is 400.
Step 2, select the self-coding network model to perform pattern recognition on the partial discharge data: combine the stacked autoencoder and the softmax classifier into a self-coding network and determine the objective function of the network, namely formula (9).
Step 3, train the network weights with the simulated annealing-brainstorming hybrid optimization algorithm.
The algorithm parameters are set as follows: population size NP = 30; number of clusters n_c = 2; probability parameters P1 = 0.2, P2 = 0.8, P3 = 0.4, P4 = 0.5; maximum number of iterations 50; maximum number of weight-training iterations 2000; η = 0.00002; β = 20.
generating an initial hidden layer number, hidden layer nodes and corresponding weights, and setting according to the principle of the step 3.2) in the specific implementation mode.
Updating the weight, and training according to the step 3.3) in the specific implementation mode to obtain the optimal weight under the condition that the number of hidden layers and hidden layer nodes are determined.
And updating the hidden layer number and the hidden layer nodes, and optimizing according to the step 4) in the specific embodiment to obtain the optimal hidden layer number, hidden layer nodes and corresponding weights so as to determine the parameters of the network.
And step 4, bringing the test data into the trained network for pattern recognition.
To verify performance more clearly, the original self-coding network (AE), the self-coding network optimized by the brain storm optimization algorithm (BSO-AE), and the self-coding network optimized by the simulated annealing-brain storm hybrid optimization algorithm (SABSO-AE) are compared with BSO-AE1 and SABSO-AE1, the variants of BSO-AE and SABSO-AE whose evaluation functions consist only of the error function of the stacked autoencoder.
As can be seen from Tables 2 and 3, AE has a higher recognition rate than BSO-AE and BSO-AE1 on both the training and test data; SABSO-AE1 has a lower recognition rate than AE on the training data but a higher one on the test data; and SABSO-AE outperforms all the other methods on both the training and test data. SABSO-AE can therefore better recognize the partial discharge data and improves its recognition rate, indicating that the method provided by the present invention is effective.
TABLE 2 run results of training data on different evaluation functions
TABLE 3 run results of test data on different evaluation functions
The method breaks free of the limitation of prior research that performs pattern recognition only on experimental data and performs partial discharge pattern recognition while making full use of complex field partial discharge data, making it better suited to current engineering practice with its massive data; the parameters of the self-coding network are adjusted intelligently by the simulated annealing-brainstorming hybrid optimization algorithm. Compared with other algorithms, the method has better recognition accuracy, saves time and labor, and improves the generality of the network.

Claims (3)

1. A transformer partial discharge mode identification method based on a self-coding network, characterized by comprising the following steps:
step 1, processing the collected partial discharge signal data,
processing the original data with a nonlinear filtering method to eliminate random interference signals; determining the type of the network's test data and verification data and their classification, i.e. the data are expressed as labeled data {(x^(1), X^(1)), ..., (x^(m), X^(m))} or unlabeled data x^(1), x^(2), ..., x^(m), where m is the number of data, the i-th datum is x^(i), and its label is X^(i) ∈ {1, 2, ..., k}, where k is the number of classes; determining the feature number, i.e. the dimension, of the data;
step 2, selecting a self-coding network as a network model,
2.1) because the stacked autoencoder does not have classification capability, it is combined with a classifier to construct a new self-coding network;
2.2) determining an objective function of the self-coding network;
step 3, training the weight of the self-coding network by using a simulated annealing-brainstorming hybrid optimization algorithm, wherein the specific process is as follows:
3.1) setting parameters of a simulated annealing-brain storm hybrid optimization algorithm,
the algorithm parameters mainly comprise: the initial number of individuals NP, the maximum number of iterations KI_max, the probability parameters P1, P2, P3 and P4, the number of clusters n_c, and the initial temperature t0;
3.2) generating, according to the initialization formulas for the hidden layer number, hidden layer nodes and weights, NP randomly distributed hidden layer numbers, hidden layer node numbers and weights that satisfy the constraint conditions;
the initialization formulas for the hidden layer number, hidden layer nodes and weights are as follows:
3.2.a) initializing the hidden layer number and hidden layer nodes,
L_i = randint(1, max_L), N_i = randint(1, max_N), R_i = [L_i, N_i]   (10)

In formula (10), L_i denotes the i-th hidden layer number, N_i denotes the i-th hidden layer node number, max_L = 10, max_N = 300, R_i denotes the row vector composed of the i-th hidden layer number and hidden layer node number, and randint() denotes a random integer within the prescribed range;
3.2.b) initializing corresponding weight according to the hidden layer number and the hidden layer node,
W_i ∈ R^(q×n), b_i ∈ R^(q×1), d_i ∈ R^(n×1), r_i = [W_i, b_i, d_i]   (11)

In formula (11), n is the number of input-layer neurons and q is the number of hidden-layer neurons, and each initial solution is drawn as

r_i^j = rand(1, Dim)

where the scale of n, q, W, b, d for each layer is shown in Table 1; W_i, b_i, d_i are generated randomly in decimal coding; r_i is the i-th weight; r_i^j is the j-th solution of the i-th weight; and rand() is a random number in (0, 1);
TABLE 1 Initialized weight scale of each layer

  Parameter | Scale
  W_i       | q × n
  b_i       | q × 1
  d_i       | n × 1
3.3) generating and updating NP weight values;
the specific process is as follows:
3.3.a) dividing NP weights into n _ c classes in a self-coding network target function space by utilizing a K-means clustering algorithm;
3.3.b) selecting the weight value,
generating a random value in (0, 1): if the value is less than the probability parameter P1, a cluster center is selected at random with the probability parameter P2 to realize the weight update, the specific process being: generate a random value in (0, 1); if it is less than the probability parameter P3, select the cluster center and add a random value to generate a new weight; otherwise, randomly select a weight from the cluster and add a random value to generate a new weight;
if the value is greater than the probability parameter P1, two clusters are randomly selected to generate a new weight, the update process being: generate a random value in (0, 1); if it is less than the probability parameter P4, merge the two cluster centers and add a random value to generate a new weight; otherwise, select two random weights from the two clusters and add a random value to generate a new weight;
3.3.c) carrying out variation operation on the weight, wherein the calculation formula of the weight variation operation is as follows:
r_new^d = r_selected^d + ξ · N(μ, σ)   (12)

ξ = logsig((0.5 · KI_max − iter) / k) · random()   (13)

In formula (12), r_new^d denotes the d-th dimension of the weight after mutation; r_selected^d denotes the d-th dimension of the weight used for updating; ξ denotes the weight coefficient value when generating a new weight; and N(μ, σ) denotes a Gaussian random function with mean μ and variance σ;

in formula (13), logsig() denotes the logarithmic sigmoid function; KI_max denotes the maximum number of iterations; iter denotes the current number of iterations; k changes the slope of the logsig() function; and random() denotes a random number in (0, 1);
3.3.d) updating the weight value,
solving an objective function value corresponding to the generated new weight by using an objective function formula (9) of the self-coding network, evaluating the weights before and after the variation, and keeping the optimal weight;
3.3.e) if the Metropolis acceptance criterion is satisfied, i.e.

exp(−(E(r'_new) − E(r_new)) / t_a) > random(0, 1),

where E(r'_new) is the objective function value of r'_new and t_a is the temperature at the a-th step, go to step 3.3.h); otherwise, go to step 3.3.f);

3.3.f) generate a new weight by r'_new = r_new + rand(1, Dim), where Dim is the dimension of the weight and r_new is the weight after the mutation operation;

3.3.g) if E(r'_new) < E(r_new), then set r_new = r'_new; otherwise, keep r_new unchanged; go to step 3.3.e);
3.3.h) update t_a, and set a = a + 1;
3.3.i) outputting the optimal weight when the maximum iteration times is reached; otherwise, turning to step 3.3. a);
and 4, optimizing the number of hidden layers and hidden layer nodes of the self-coding network by using a brainstorming optimization algorithm, wherein the specific process is as follows:
4.1) generating and updating NP new hidden layer numbers and hidden layer nodes;
4.2) carrying out iterative search optimization, outputting the optimal hidden layer number and hidden layer nodes when the set convergence precision or the maximum iteration number is reached, and obtaining the optimal hidden layer number and the optimal weight corresponding to the hidden layer nodes by utilizing the step 3.3), thereby obtaining a trained network;
step 5, inputting the data to be recognized into the trained network, and classifying the data to be recognized by utilizing the trained network;
step 6, calculating the recognition rate of the data to be recognized:
recognition rate = (a / s) × 100%   (15)

In formula (15), a is the number of correctly classified data to be identified and s is the total number of data to be identified, so that the recognition rate of the partial discharge signal data can be calculated directly.
2. The transformer partial discharge pattern recognition method based on self-coding network as claimed in claim 1, wherein in the step 2.2),
during training, the network model first maps the input data x into the hidden layer to obtain the hidden-layer feature y; this part is called the encoder; the next network layer then maps y to the output layer to obtain the output data z; this part is called the decoder; the two parts are expressed mathematically as:
y = S_f(W_1 x + b)
z = S_g(W_2 y + d)   (1)

In formula (1), W_1 is the weight matrix between the input layer and the hidden layer, W_2 is the weight matrix between the hidden layer and the output layer, b is the bias vector of the hidden layer, d is the bias vector of the output layer, and S_f, S_g are sigmoid functions as shown in formula (2), namely:

S(t) = 1 / (1 + e^(-t))   (2)

The weight matrix W_1 between the input layer and the hidden layer is taken as the transpose W_2' of the weight matrix between the hidden layer and the output layer, thereby reducing the number of parameters, namely:

W_1 = W_2' = W   (3)
therefore, the number of self-coding network parameters is changed into three, namely a weight W, a bias vector b of a hidden layer and a bias vector d of an output layer;
the training goal is to minimize the difference between the output and the input, i.e.:
c(x, z) = ||x − z||^2   (4)

In formula (4), z is adjusted by W, b and d for a given x, and c(x, z) is the training target for each training sample; the total training target is then:

C(x, z) = (1/m) · Σ_{i=1}^{m} c(x^(i), z^(i))   (5)

In formula (5), C(x, z) is the total training target and m is the number of training samples.
The classifier in this step adopts a softmax classifier. When the training sample set is {(x^(1), X^(1)), ..., (x^(m), X^(m))}, where m is the number of data, the i-th datum is x^(i), and its label is X^(i) ∈ {1, 2, ..., k} with k the number of classes, the hypothesis of softmax regression is:

h_θ(x^(i)) = [p(X^(i) = 1 | x^(i); θ), p(X^(i) = 2 | x^(i); θ), ..., p(X^(i) = k | x^(i); θ)]^T
           = (1 / Σ_{j=1}^{k} e^(θ_j^T x^(i))) · [e^(θ_1^T x^(i)), e^(θ_2^T x^(i)), ..., e^(θ_k^T x^(i))]^T   (6)

Each element p(X^(i) = j | x^(i); θ) of the hypothesis vector h_θ(x^(i)) represents the probability that sample x^(i) belongs to class j, and the elements of the vector sum to 1; θ_1, θ_2, ..., θ_k are the classifier parameter vectors, collectively represented by θ and written in matrix form as:

θ = [θ_1^T; θ_2^T; ...; θ_k^T]   (7)
the cost function of the softmax classifier is defined as:
J(θ) = −(1/m) · [Σ_{i=1}^{m} Σ_{j=1}^{k} 1{X^(i) = j} · log(e^(θ_j^T x^(i)) / Σ_{l=1}^{k} e^(θ_l^T x^(i)))] + (λ/2) · Σ_{i,j} θ_{ij}^2   (8)

In formula (8), m is the number of data; 1{·} denotes the indicator function, which takes the value 1 when the expression in braces is true and 0 otherwise; the term after the plus sign is a weight-decay term, where λ is the weight-decay coefficient;
the target function of the self-coding network is reset as follows:
E = η·C(x, z) + β·J(θ)   (9)

In formula (9), η is the coefficient of the objective function of the stacked autoencoder, β is the coefficient of the cost function of the softmax classifier, C(x, z) is the total training target of the stacked autoencoder, and J(θ) is the cost function of the softmax classifier.
3. The transformer partial discharge pattern recognition method based on the self-coding network as claimed in claim 1, wherein in the step 4.1), the specific process is as follows:
4.1.a) dividing NP hidden layer numbers and hidden layer nodes into 2 classes by using a K-means clustering algorithm;
4.1.b) selecting hidden layer number and hidden layer node,
generating a random value in (0, 1): if the random value is less than the probability parameter P1, a cluster center is selected at random with the probability parameter P2 to update the hidden layer number and hidden layer nodes, the specific process being: generate a random value in (0, 1); if it is less than the probability parameter P3, select the cluster center and add a random value to generate a new hidden layer number and hidden layer nodes; otherwise, randomly select an individual from the cluster and add a random value to generate a new hidden layer number and hidden layer nodes;
if the value is greater than the probability parameter P1, two clusters are randomly selected to generate a new hidden layer number and hidden layer nodes, the update process being: generate a random value in (0, 1); if it is less than the probability parameter P4, merge the two cluster centers and add a random value to generate a new hidden layer number and hidden layer nodes; otherwise, select two random hidden layer numbers and hidden layer nodes from the two clusters and add a random value to generate a new hidden layer number and hidden layer nodes;
4.1.c) carrying out mutation operation on the hidden layer number and the hidden layer nodes,
the hidden layer number and hidden layer node variation formula is as follows:
R_new^d = R_selected^d + ξ · N(μ, σ)   (14)

In formula (14), R_new^d denotes the d-th dimension of the hidden layer number and hidden layer nodes after mutation; R_selected^d denotes the d-th dimension of the hidden layer number and hidden layer nodes used for updating; and ξ denotes the weight coefficient value when generating a new hidden layer number and hidden layer nodes, calculated in the same way as in formula (13);
4.1.d) initializing the number of hidden layers and the corresponding weight of the hidden layer node, and initializing the weight by using the step 3.2. b);
4.1.e) updating the number of hidden layers and the corresponding weight of the hidden layer node, and updating the weight by using the step 3.3);
4.1.f) evaluating the hidden layer number and the hidden layer nodes before and after the mutation according to the self-coding network objective function, and reserving the hidden layer number and the hidden layer nodes with high recognition rate.
CN201910532414.1A 2019-06-19 2019-06-19 Transformer partial discharge mode identification method based on self-coding network Active CN110348489B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910532414.1A CN110348489B (en) 2019-06-19 2019-06-19 Transformer partial discharge mode identification method based on self-coding network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910532414.1A CN110348489B (en) 2019-06-19 2019-06-19 Transformer partial discharge mode identification method based on self-coding network

Publications (2)

Publication Number Publication Date
CN110348489A CN110348489A (en) 2019-10-18
CN110348489B true CN110348489B (en) 2021-04-06

Family

ID=68182398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910532414.1A Active CN110348489B (en) 2019-06-19 2019-06-19 Transformer partial discharge mode identification method based on self-coding network

Country Status (1)

Country Link
CN (1) CN110348489B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111142001B (en) * 2020-01-10 2022-04-22 三峡大学 Transformer multi-source partial discharge mode identification method based on parallel characteristic domain
CN112327219B (en) * 2020-10-29 2024-03-12 国网福建省电力有限公司南平供电公司 Distribution transformer fault diagnosis method with automatic feature mining and parameter automatic optimizing functions

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104037757A (en) * 2014-05-20 2014-09-10 西安理工大学 Brainstorming-based thermal power plant economic environment scheduling method
CN106503689A (en) * 2016-11-14 2017-03-15 哈尔滨理工大学 Neutral net local discharge signal mode identification method based on particle cluster algorithm
CN108040324A (en) * 2017-11-16 2018-05-15 南方科技大学 A kind of localization method and alignment system of survival capsule robot
CN108399105A (en) * 2018-02-27 2018-08-14 天津大学 A kind of Method for HW/SW partitioning based on improvement brainstorming algorithm
CN108573225A (en) * 2018-03-30 2018-09-25 国网天津市电力公司电力科学研究院 A kind of local discharge signal mode identification method and system
CN108694473A (en) * 2018-06-15 2018-10-23 常州瑞信电子科技有限公司 Building energy consumption prediction technique based on RBF neural
CN108957251A (en) * 2018-05-18 2018-12-07 深圳供电局有限公司 A kind of cable connector Partial Discharge Pattern Recognition Method
CN109102012A (en) * 2018-07-30 2018-12-28 上海交通大学 A kind of defect identification method and system of local discharge signal
CN109388858A (en) * 2018-09-17 2019-02-26 西安航空电子科技有限公司 Nonlinear transducer bearing calibration based on brainstorming optimization algorithm

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156245B (en) * 2011-03-11 2016-08-03 太原理工大学 A kind of mine high-voltage cable on-line fault diagnosis and method for early warning
CN103323755A (en) * 2013-06-17 2013-09-25 广东电网公司电力科学研究院 Method and system for recognition of GIS ultrahigh frequency partial discharge signal
CN105334436B (en) * 2015-10-30 2018-08-10 山东电力研究院 Crosslinked cable Partial Discharge Pattern Recognition Method based on SOM-BP combination neural nets
CN109188211B (en) * 2018-07-30 2021-02-05 上海交通大学 High-voltage equipment insulation fault diagnosis method and system
CN109375116B (en) * 2018-08-09 2021-12-14 上海国际汽车城(集团)有限公司 Battery system abnormal battery identification method based on self-encoder

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104037757A (en) * 2014-05-20 2014-09-10 西安理工大学 Brainstorming-based thermal power plant economic environment scheduling method
CN106503689A (en) * 2016-11-14 2017-03-15 哈尔滨理工大学 Neutral net local discharge signal mode identification method based on particle cluster algorithm
CN108040324A (en) * 2017-11-16 2018-05-15 南方科技大学 A kind of localization method and alignment system of survival capsule robot
CN108399105A (en) * 2018-02-27 2018-08-14 天津大学 A kind of Method for HW/SW partitioning based on improvement brainstorming algorithm
CN108573225A (en) * 2018-03-30 2018-09-25 国网天津市电力公司电力科学研究院 A kind of local discharge signal mode identification method and system
CN108957251A (en) * 2018-05-18 2018-12-07 深圳供电局有限公司 A kind of cable connector Partial Discharge Pattern Recognition Method
CN108694473A (en) * 2018-06-15 2018-10-23 常州瑞信电子科技有限公司 Building energy consumption prediction technique based on RBF neural
CN109102012A (en) * 2018-07-30 2018-12-28 上海交通大学 A kind of defect identification method and system of local discharge signal
CN109388858A (en) * 2018-09-17 2019-02-26 西安航空电子科技有限公司 Nonlinear transducer bearing calibration based on brainstorming optimization algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"An Improved Brain Storm Optimization with Differential Evolution Strategy for Applications of ANNs";Zijian Cao et al.;《Mathematical Problems in Engineering》;20151231;第1-18页 *
"Hybrid brain storm optimisation and simulated annealing algorithm for continuous optimisation problems";Zhengxuan Jia et al.;《International Journal of Bio-Inspired Computation》;20161231;第8卷(第2期);第109-121页 *

Also Published As

Publication number Publication date
CN110348489A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
Zhang et al. Analog circuit incipient fault diagnosis method using DBN based features extraction
CN110929763B (en) Multi-source data fusion-based mechanical fault diagnosis method for medium-voltage vacuum circuit breaker
CN111237134B (en) Offshore double-fed wind driven generator fault diagnosis method based on GRA-LSTM-stacking model
CN108304623B (en) Probability load flow online calculation method based on stack noise reduction automatic encoder
CN112116058A (en) Transformer fault diagnosis method for optimizing multi-granularity cascade forest model based on particle swarm algorithm
CN109492748B (en) Method for establishing medium-and-long-term load prediction model of power system based on convolutional neural network
CN101404071A (en) Electronic circuit fault diagnosis neural network method based on grouping particle swarm algorithm
CN110879373B (en) Oil-immersed transformer fault diagnosis method with neural network and decision fusion
CN110348489B (en) Transformer partial discharge mode identification method based on self-coding network
CN115563563A (en) Fault diagnosis method and device based on transformer oil chromatographic analysis
Wang et al. A remaining useful life prediction model based on hybrid long-short sequences for engines
Chen et al. Research on wind power prediction method based on convolutional neural network and genetic algorithm
CN116562114A (en) Power transformer fault diagnosis method based on graph convolution neural network
CN110766215A (en) Wind power climbing event prediction method based on feature adaptive selection and WDNN
CN112686404B (en) Power distribution network fault first-aid repair-based collaborative optimization method
Zhang et al. Encoding time series as images: A robust and transferable framework for power system DIM identification combining rules and VGGNet
Xu et al. Short-term electricity consumption forecasting method for residential users based on cluster classification and backpropagation neural network
CN116167465A (en) Solar irradiance prediction method based on multivariate time series ensemble learning
CN116400168A (en) Power grid fault diagnosis method and system based on depth feature clustering
CN115146739A (en) Power transformer fault diagnosis method based on stacked time series network
CN115659258A (en) Power distribution network fault detection method based on multi-scale graph convolution twin network
CN114841266A (en) Voltage sag identification method based on triple prototype network under small sample
CN112651183A (en) Reliability evaluation method for quantum distributed countermeasure unified deep hash network
Atira et al. Medium Term Load Forecasting Using Statistical Feature Self Organizing Maps (SOM)
Wang et al. Fault Diagnosis of Wind Turbine Generator with Stacked Noise Reduction Autoencoder Based on Group Normalization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant