CN112763780B - Intelligent mutual inductor - Google Patents

Intelligent mutual inductor

Info

Publication number
CN112763780B
CN112763780B (application CN202011149506.0A)
Authority
CN
China
Prior art keywords
layer
intelligent
module
signal
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011149506.0A
Other languages
Chinese (zh)
Other versions
CN112763780A (en)
Inventor
丁飞
石颉
杜国庆
苏新雅
胡倩
黄佳悦
朱家坤
申海锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Xinrui Yirong Information Technology Co ltd
Suzhou University of Science and Technology
Original Assignee
Suzhou Xinrui Yirong Information Technology Co ltd
Suzhou University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Xinrui Yirong Information Technology Co ltd, Suzhou University of Science and Technology filed Critical Suzhou Xinrui Yirong Information Technology Co ltd
Priority to CN202011149506.0A
Publication of CN112763780A
Application granted
Publication of CN112763780B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01R: MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R15/00: Details of measuring arrangements of the types provided for in groups G01R17/00 - G01R29/00, G01R33/00 - G01R33/26 or G01R35/00
    • G01R15/14: Adaptations providing voltage or current isolation, e.g. for high-voltage or high-current networks
    • G01R15/18: Adaptations providing voltage or current isolation using inductive devices, e.g. transformers
    • G01R35/00: Testing or calibrating of apparatus covered by the other groups of this subclass
    • G01R35/02: Testing or calibrating of auxiliary devices, e.g. of instrument transformers according to prescribed transformation ratio, phase angle, or wattage rating
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation using electronic means
    • G06N3/08: Learning methods


Abstract

The invention discloses an intelligent mutual inductor whose design comprises a current detection module, a temperature detection module, a data storage module, a remote communication module, an intelligent diagnosis module and an intelligent terminal. The current detection module detects the current of the conductor; the temperature detection module measures the temperature at several parts of the intelligent mutual inductor; the data storage module stores the information collected by the current detection module and the temperature detection module, exchanges information with the intelligent diagnosis module, and exchanges information with the intelligent terminal. The intelligent mutual inductor greatly improves diagnosis efficiency and accuracy: it detects its current and temperature in real time, transmits and stores the detected data in real time, and carries an intelligent diagnosis module that fuses an ART neural network with the BP algorithm, so that the state of the mutual inductor can be diagnosed from real-time data.

Description

Intelligent mutual inductor
Technical Field
The invention belongs to the field of intelligent diagnosis of power equipment, and relates to an intelligent diagnosis system for a power mutual inductor and a diagnosis method based on neural-network classification.
Background
Mutual inductors are widely used in the modern power grid, and their operational reliability and performance stability strongly affect the stable and reliable operation of the grid. In view of this, academia has proposed many solutions to health-management problems of power transformers such as fault diagnosis, aging and service life; however, a large number of problems remain unsolved, and most existing solutions have not yielded commercial technical products.
For example, patent CN103531340A collects the temperature of the transformer at only one point (in that solution only one temperature detection point, labelled 5, is provided). In actual operation, however, the measured wire does not expose the transformer to a uniform magnetic field (see Fig. 2 for the magnetic field distribution of the wire in different operating states), so the temperature produced by the current differs from point to point on the transformer.
In addition, owing to variations in the production process, the specific heat capacity and thermal conductivity of the materials in the transformer's layers differ, so a single-point temperature cannot reflect the condition of the whole device.
At present, transformer fault diagnosis derives a mathematical model from existing data. This approach is accurate in the initial stage of operation, but as operating time increases, material aging, mechanical vibration, electromagnetic interference and similar effects throw the mathematical model out of alignment, causing misjudgments or missed judgments; this problem requires continued research.
Disclosure of Invention
The invention aims to provide an intelligent mutual inductor that remedies the defects of the prior art.
An intelligent transformer, comprising: the system comprises a current detection module, a temperature detection module, a data storage module, a remote communication module, an intelligent diagnosis module and an intelligent terminal;
the current detection module is used for detecting the current of the lead;
the temperature detection module is used for measuring the temperatures of a plurality of parts of the intelligent mutual inductor;
the data storage module is used for storing the information acquired by the current detection module and the temperature detection module, exchanging information with the intelligent diagnosis module, and exchanging information with the intelligent terminal;
the intelligent diagnosis module reads the data of the data storage module, is used for diagnosing the state of the mutual inductor and can write the diagnosis result into the data storage module;
the intelligent terminal is used for inquiring and displaying the information stored in the data storage module;
the output end of the current detection module is connected with the input end of the data storage module;
the output end of the temperature detection module is connected with the input end of the data storage module;
wherein the data storage module is connected with the intelligent diagnosis module;
the data storage module is connected with the intelligent terminal, and the data storage module is in communication connection with the intelligent terminal through the remote communication module.
Furthermore, the current detection module uses patch-type thermistor sensors, eight of which are attached evenly to the inner and outer rings of the mutual inductor; a high-resolution analog-to-digital conversion chip converts the current signal into a digital signal, which is then transmitted to the data storage module.
Furthermore, the temperature detection module uses thermistor sensors installed at several positions on the mutual inductor to detect its temperature. In addition, the eight surface-mount thermistor sensors attached evenly to the inner and outer rings of the mutual inductor form a Wheatstone bridge that detects the temperature at each point and, at the same time, the difference between the cable temperature and the ambient temperature; a high-resolution analog-to-digital conversion chip converts the sensed signal into a digital signal, which is then transmitted to the data storage module.
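To make the signal chain concrete, the following minimal sketch (Python) shows one way a reading of one bridge arm could be converted back to a temperature. The patent specifies neither the bridge component values nor the conversion routine, so VREF, R_FIXED, R0, BETA and the function adc_to_temperature_c below are illustrative assumptions, not the patented design.

```python
import math

VREF = 3.3         # bridge excitation voltage (V), assumed
R_FIXED = 10_000   # fixed resistor in the thermistor arm (ohm), assumed
R0 = 10_000        # thermistor resistance at 25 degC (ohm), assumed
BETA = 3950        # thermistor Beta constant (K), assumed
T0 = 298.15        # 25 degC in kelvin

def adc_to_temperature_c(adc_counts: int, adc_bits: int = 16) -> float:
    """Convert an ADC reading of the thermistor-arm node voltage to degC."""
    v_node = adc_counts / (2 ** adc_bits - 1) * VREF   # divider node voltage
    r_therm = R_FIXED * v_node / (VREF - v_node)       # thermistor in the lower arm
    # Invert the Beta equation: 1/T = 1/T0 + ln(R/R0) / BETA
    t_kelvin = 1.0 / (1.0 / T0 + math.log(r_therm / R0) / BETA)
    return t_kelvin - 273.15
```

In practice these constants would be calibrated against the actual thermistors and the reference voltage of the chosen analog-to-digital conversion chip.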
Further, the data storage module is a removable data storage device.
Furthermore, the intelligent diagnosis module uses a digital processing chip that can acquire and store the collected current and temperature information, communicate remotely, and perform intelligent diagnosis; a shielding enclosure made of metal mesh protects the digital chip from electromagnetic interference under strong magnetic fields.
Further, the intelligent diagnosis module stores a mutual inductor calculation model, which adopts an ART calculation model;
the ART computational model consists of two layers of neurons including two subsystems: a comparison layer C and an identification layer R; further comprising: three control signal RESET signals, logic control signals G1 and G2;
wherein, the comparison layer C has n nodes, and each node receives signals from 3 aspects: one is an input signal x from the outside world i The other is an outward weight vector T from the R-layer winning neuron j Is returned to ij And a control signal from G1; the outputs of the nodes of the C-layer are generated according to the 2/3 principle of "majority voting", i.e. the output value C i And x i 、t ij Most of the G1 and 3 signals have the same value; when the network starts to operate, G1=1, the identification layer does not generate competition winning neurons, so that the feedback return signal is 0, and the output of the C layer is determined by the input signal according to the 2/3 rule, and C = X; when the comparison signal, x, of the feedback loop-back signal and the feedback signal occurs in the identification layer i =t ij Then c is i =x i Otherwise c i =0; that is, the control signal G1 is used for the comparison layer to distinguish different stages of the network operation, the network operation starting stage G1 is used for enabling the C layer to directly output the input signal, then G1 is used for enabling the C layer to perform the comparison function, and at the moment C i Is to x i And t ij When both are 1, c i Is 1, otherwise is 0, i.e. signal t returned from the R layer ij The output of the layer C is regulated;
the identification layer R is composed of a multi-layer feedforward neural network and is provided with m nodes for representing m input mode classes, and m can be dynamically increased to establish a new mode class; the internal weight vector connected from layer C to the jth node of R is used as B j =(b 1j ,b 2j ……b nj ) Represents; output vector of C layerC along m inner weight vectors B j (j =1,2, \8230;. M) forward, and after reaching each neuron node of the R layer, a winning node j is generated through competition, and the winning node j indicates the category of the input mode; winning node output r j =1, the remaining node outputs are 0;
each neuron of the R layer corresponds to two weight vectors: one is an inner weight vector B for converging the C-layer feedforward signal to the R-layer j (ii) a The other is an outward weight vector T for distributing the R layer feedback signal to the C layer j The vector is a typical vector corresponding to each mode class node of the R layer;
the control signals G1, G2, reset respectively function as: g1 is X0 when the logical or of X elements of the input pattern is X, and the logical nor of R elements is R0, then G1= X0R0, that is, G1=1 only when all the R layer output vectors R are 0 and all the input X are not 0, otherwise G1=0; the signal G2 detects whether the input pattern X is 0, which is equal to the logical OR of the X components, if X is i (i =1,2, \8230;, n) are all 0, then G2=0, otherwise G2=1; the Reset signal acts to invalidate the R layer winning neurons by competition, if according to some pre-set measurement criterion, T j If the similarity between the X and the X is not equal to the preset similarity rho, the two are not sufficiently close to each other, and then the system sends out a Reset signal to invalidate the winning neuron;
the input layer is responsible for receiving external information and transmitting an input sample to the competition layer to play an observation role, the competition layer is responsible for analyzing and comparing, analyzing according to a known training model and correctly classifying, and if the result obtained by analysis does not exist in the known model, a new category is automatically created; and the control signal is responsible for controlling the similarity rho of the analysis result of each layer, and if the result does not reach the preset similarity rho, the outward weight vector of the outward weight vector is analyzed again.
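As an illustration of the 2/3 "majority voting" rule just described, here is a minimal sketch assuming binary 0/1 signals (which the description implies but does not state outright); compare_layer is a hypothetical helper name, not part of the patent.

```python
import numpy as np

def compare_layer(x: np.ndarray, t_j, g1: int) -> np.ndarray:
    """Each c_i is the majority value among the three signals (x_i, t_ij, G1)."""
    if t_j is None:                 # start-up: no winning R neuron yet, feedback is 0
        t_j = np.zeros_like(x)
    votes = x + t_j + g1            # number of 1-votes among the three signals
    return (votes >= 2).astype(int)

x = np.array([1, 0, 1, 1])
print(compare_layer(x, None, g1=1))                     # start-up: C = X -> [1 0 1 1]
print(compare_layer(x, np.array([1, 1, 0, 1]), g1=0))   # comparison: x AND t -> [1 0 0 1]
```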
Further, the operation flow of the ART calculation model is as follows:
When the network runs, it receives an input sample from the environment and checks its degree of matching against all classes of the R layer; for the class with the highest matching degree, the network then examines the similarity between that class's typical vector and the current input pattern. The similarity is judged against a pre-designed reference threshold, and one of two situations occurs:
1. If the similarity exceeds the reference threshold, the pattern class is selected as the representative class of the current input pattern. The weight adjustment rule is that the pattern class whose similarity exceeds the reference threshold adjusts its inward and outward weight vectors, so that a future sample close to the current input pattern obtains an even greater similarity, while the other weight vectors are left unchanged.
2. If the similarity does not exceed the threshold, the similarity of the R-layer pattern class with the next-highest matching degree is examined; if it exceeds the reference threshold the operation returns to case 1, otherwise it returns to case 2. Repeatedly returning to case 2 means that no pattern class has similarity to the current input pattern exceeding the reference threshold; at that point a node representing a new pattern class must be established at the network output to represent and store the pattern so that it can participate in subsequent matching.
The network performs the above process for every new input sample it receives; for each input, the network's operation can be summarized in three stages, namely identification, comparison and search:
(1) Identification phase
Before any input pattern arrives, the network is in a waiting state; at this time the input is X = 0 and the control signal G2 = 0, so the R-layer outputs are all 0 and every node has the same chance of winning the competition. When the network input is not all 0, G2 = 1. Information flows from bottom to top with G1 = G2·R0 = 1, so by the 2/3 rule the C layer outputs C = X; C is fed upward, acting on the inward weight vectors B and producing a signal that drives the R layer, so competition starts inside the R layer. If the winning node is j, the R layer outputs r_j = 1 and the other nodes output 0;
(2) Comparison phase
The output information of the R layer returns to the C layer from top to bottom: r_j = 1 activates the top-down weight vector T_j connected to R-layer node j, which is returned down to the C layer;
at this time the R-layer outputs are not all 0 and G1 = 0, so the next C-layer output C' depends on the top-down weight vector T_j of the R layer and the input pattern X of the network;
the similarity is tested against a preset threshold; if C' carries sufficiently similar information, the competition result is correct, otherwise the result is unsatisfactory, a Reset signal is issued to invalidate the last winning node, and that node can no longer win during the matching of the current pattern; the network then enters the search stage;
(3) Search phase
The search stage begins when the Reset signal invalidates the winning node: R becomes all 0, G1 = 1, and the output of the C layer is again the current input pattern X; the network therefore re-enters the identification and comparison stages to obtain a new winning node. This repeats until some winning node K matches the input vector X closely enough; pattern X is then assigned to the pattern class connected to R-layer node K, i.e. the bottom-up and top-down weight vectors of that node are modified according to a set rule. If all R-layer output nodes have been searched without finding a pattern close to X, an R-layer node is added to represent X or patterns close to it;
if the similarity exceeds the preset value ρ, j is accepted as the winning node and the bottom-up and top-down weight vectors of that R-layer node are modified so that inputs similar to X win more easily and obtain higher similarity; the R-layer nodes suppressed by Reset signals are restored, and the network turns to the comparison stage to meet the next input; otherwise a Reset signal is issued, node j is disabled, and the search stage begins.
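The identification/comparison/search loop above can be sketched compactly. The toy version below follows standard ART1 conventions for binary inputs (fast learning by intersection, bottom-up weights B_j = T_j / (0.5 + |T_j|)); the patent instead uses a trained BP network as the recognition layer, so this is a structural illustration under those assumptions, not the patented model.

```python
import numpy as np

class ART1Sketch:
    def __init__(self, rho: float = 0.7):
        self.rho = rho          # vigilance (the similarity threshold in the text)
        self.T = []             # outward (top-down) prototype vectors T_j
        self.B = []             # inward (bottom-up) weight vectors B_j

    def present(self, x: np.ndarray) -> int:
        """Run identification -> comparison -> search for one binary sample x."""
        disabled = set()                            # nodes knocked out by Reset
        while True:
            # Identification stage: bottom-up competition over B_j . x
            scores = [(-1.0 if j in disabled else float(b @ x))
                      for j, b in enumerate(self.B)]
            if not scores or max(scores) < 0:       # no usable node: new pattern class
                self.T.append(x.astype(int))
                self.B.append(x / (0.5 + x.sum()))
                return len(self.T) - 1
            j = int(np.argmax(scores))
            # Comparison stage: vigilance test |T_j AND x| / |x| >= rho
            overlap = np.logical_and(self.T[j], x).sum()
            if overlap / max(x.sum(), 1) >= self.rho:
                self.T[j] = np.logical_and(self.T[j], x).astype(int)  # resonance: learn
                self.B[j] = self.T[j] / (0.5 + self.T[j].sum())
                return j
            disabled.add(j)     # Reset: invalidate the winner, search the next-best node
```

Presenting a stream of binary samples to present() returns a class index per sample and creates a new class whenever the vigilance test fails for every existing node, mirroring the search-stage behaviour described above.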
Further, the recognition layer R is a feedforward neural network model, namely a multilayer feedforward neural network formed from two layers of neurons and trained with the BP neural network algorithm;
the feedforward neural network comprises an input layer formed by 10 neurons, a hidden layer formed by 10 neurons, an output layer formed by 2 outputs, and the input layer corresponds to the hidden layer: temperature, current, resistance, voltage, load ratio, current ratio difference, current phase difference, composite error, deviation; the output layer corresponds to: running state, remaining life.
Further, the training method of the feedforward neural network model comprises the following steps:
firstly, data acquisition: operation experiments with an ordinary sensor yield m groups of parameters under different operating states, recorded as a data set D; each group of parameters consists of x1 to xi and is denoted as a vector X, and the operating state consists of y1 to yj and is denoted as a vector Y (the operating states can be divided into j states manually, or j-class learning can be performed by unsupervised learning); judging the operating state of the mutual inductor thus becomes a j-class classification task with i characteristic parameters:
secondly, the experimental data are sampled with the bootstrap ("self-service") method to divide the training set and the test set: specifically, given a data set D containing m samples, sampling it produces a data set D'; each time one sample is picked at random from D and copied into D', then the sample is put back into the initial data set D so that it can still be picked at the next draw; after this process has been repeated m times, a data set D' containing m samples is obtained, which is the bootstrap sampling result;
obviously, some samples in D will appear in D' several times, while others will never appear; a simple estimate shows that the probability that a sample is never picked in m draws is (1 - 1/m)^m, which tends to 1/e (about 0.368) as m tends to infinity; D' is used as the training set of the machine learning model, and the samples that never appear in D', namely the set D \ D' (about 36.8% of D), are used as the test set;
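A short sketch of the bootstrap split, which also checks numerically that the held-out fraction approaches 1/e:

```python
import numpy as np

def bootstrap_split(m: int, rng: np.random.Generator):
    """Draw m samples with replacement; the never-picked indices form the test set."""
    train_idx = rng.integers(0, m, size=m)             # indices into D, duplicates allowed
    test_idx = np.setdiff1d(np.arange(m), train_idx)   # D \ D'
    return train_idx, test_idx

rng = np.random.default_rng(0)
train_idx, test_idx = bootstrap_split(10_000, rng)
print(len(test_idx) / 10_000)   # about 0.368, matching the (1 - 1/m)^m -> 1/e estimate
```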
Thirdly, training is carried out: for each training sample, the BP algorithm first presents the input sample to the input-layer neurons and forwards the signals layer by layer until the output layer produces a result; it then computes the output-layer error, propagates the error back to the hidden-layer neurons, and finally adjusts the connection weights and thresholds according to the hidden-layer neuron errors; this iterative process loops until a stopping condition is reached.
The advantage of the application is that the intelligent mutual inductor detects its current and temperature in real time, transmits and stores the detected data in real time, and carries an intelligent diagnosis module that fuses an ART neural network with the BP algorithm, so that the state of the mutual inductor can be diagnosed from real-time data.
Drawings
The invention will be described in further detail below with reference to the embodiments shown in the drawings, to which, however, the invention is not restricted.
Fig. 1 is a layout diagram of a prior art CN 103531340A.
Fig. 2 is a diagram of the magnetic field distribution of the wire in different operating states.
Fig. 3 is a configuration diagram of a multilayer feedforward neural network of the present application.
FIG. 4 is a diagram of threshold control functions of the hidden layer and the output layer of the present application.
Fig. 5 is an ART neural network of the present application.
Fig. 6 is a configuration diagram of the intelligent transformer.
The reference numerals in fig. 2-6 are illustrated as follows:
the intelligent diagnosis system comprises a current detection module 1, a temperature detection module 2, a data storage module 3, a remote communication module 4, an intelligent diagnosis module 5 and an intelligent terminal 6.
Detailed Description
Example 1: an intelligent transformer, comprising: the intelligent diagnosis system comprises a current detection module 1, a temperature detection module 2, a data storage module 3, a remote communication module 4, an intelligent diagnosis module 5 and an intelligent terminal 6;
the current detection module 1 is used for detecting the current of the lead, the current detection module can adopt patch type thermistor sensors, and 8 patch type thermistor sensors are averagely attached to the inner ring and the outer ring of the mutual inductor;
8 SMD thermistor sensors still constitute the difference that the wheatstone bridge detected each point temperature and detected cable and ambient temperature simultaneously, adopt high resolution analog-to-digital conversion chip to convert current signal into digital signal and be used for data storage, remote communication and intelligent diagnosis.
The temperature detection module 2 uses thermistor sensors arranged at different positions on the mutual inductor to detect its temperature; a high-resolution analog-to-digital conversion chip converts the sensed signal into a digital signal for data storage, remote communication and intelligent diagnosis.
The data storage module 3 is a data storage device (for example an SD card, a portable hard disk or a USB flash drive) that stores the collected information. The storage can be a pluggable type with selectable capacity; it is used to store data so that detection data are not lost or distorted when communication is interrupted or signals are disturbed, and the data can be read again after communication is restored, or the copied data can be taken out manually.
The remote communication module 4 uses common devices and modes such as WiFi and CDMA, i.e. common field communication protocols plus a free protocol, with a DIP switch used to select the protocol; this matches components commonly available on the market, and the collected data are transmitted to the intelligent terminal through the communication network. The free protocol supports a user-defined communication protocol to realize secure encryption of the data transmission.
The intelligent diagnosis module 5 reads the data of the data storage module 3 to diagnose the state of the mutual inductor. Specifically, the intelligent diagnosis module 5 contains a transformer model: a mathematical model is established from the manufacturing parameters and operating data of existing mutual inductors, the model is learned and trained with an artificial-intelligence algorithm to build a machine learning model, and a data-iteration algorithm is adopted so that, while the equipment operates, the machine learning model is iteratively optimized alongside fault diagnosis, continuously improving the accuracy of fault diagnosis.
The intelligent diagnosis module 5 can adopt a digital processing chip, can acquire, store, remotely communicate and intelligently diagnose the collected current and temperature information, and simultaneously adopts a metal mesh enclosure to form a shielding enclosure to prevent electromagnetic interference of the digital chip under the condition of a strong magnetic field.
For the mathematical model of the intelligent diagnosis module 5: a mathematical model is established from the manufacturing parameters and operating data of existing mutual inductors, the model is learned and trained with an artificial-intelligence algorithm to construct a machine learning model, and a data-iteration algorithm is adopted so that, during equipment operation, the machine learning model is iteratively optimized while fault diagnosis is performed, continuously improving the accuracy of fault diagnosis;
the mathematical model is established by collecting and analyzing the material parameters of the production process of existing ordinary mutual inductors, including the thickness, size, magnetic permeability, specific heat capacity and interlayer gap of the silicon steel sheets, and the magnetic permeability, specific heat capacity, heat-transfer coefficient and expansion rate of the shaping resin; from these material parameters of the ordinary transformer and the current and temperature parameters measured in operation detection experiments, a mathematical model relating the transformer's materials, temperature and current changes is analyzed and determined.
For the machine learning model, a BP neural network algorithm and an ART neural network are mainly adopted; model training on a randomly sampled training set builds a machine learning model with strong generalization capability on new data.
An operation experiment with an ordinary sensor yields m groups of parameters under different operating states, recorded as a data set D; each group of parameters consists of x_1 to x_i, denoted as a vector X, and the operating state of y_1 to y_j, denoted as a vector Y (the operating states can be divided into j states manually, or j-class learning can be performed by unsupervised learning); judging the operating state of the mutual inductor is thus converted into a j-class classification task with i characteristic parameters:
firstly, the experimental data are sampled with the bootstrap method to divide the training set and the test set: specifically, given the data set D of m samples, we sample it to produce a data set D'; each time one sample is picked at random from D and copied into D', then the sample is put back into the initial data set D so that it can still be picked at the next draw; after this process has been repeated m times, a data set D' of m samples is obtained, which is the bootstrap sample. Obviously, some samples in D appear in D' several times while others never appear. A simple estimate shows that the probability that a sample is never picked in m draws is (1 - 1/m)^m, which tends to 1/e (about 0.368) as m tends to infinity; D' is the training set of the machine learning model, and D \ D', the set of samples that never appear in D', is the test set;
secondly, a BP neural network algorithm is adopted, and a multilayer feedforward neural network is formed from the two layers of neurons. Specifically, given a training set D = {X, Y}, each input sample is described by i feature attributes and the output is a j-dimensional real-valued vector. For convenience of discussion, Fig. 3 shows a multilayer feedforward network structure with i input neurons, j output neurons and q hidden neurons, where the threshold of the d-th output-layer neuron is denoted θ_d, the threshold of the h-th hidden-layer neuron is denoted μ_h, the connection weight between the h-th hidden-layer neuron and the d-th output-layer neuron is ω_hd, and the connection weight between the t-th input-layer neuron and the h-th hidden-layer neuron is γ_th. The input received by the h-th hidden-layer neuron is α_h = Σ_{t=1..i} γ_th · x_t, and the input received by the d-th output-layer neuron is β_d = Σ_{h=1..q} ω_hd · b_h, where b_h is the output of the h-th hidden-layer neuron.
Third, the hidden layer and the output layer both use Sigmoid(x) as the threshold control function (see Fig. 4), so each hidden-layer neuron outputs b_h = Sigmoid(α_h - μ_h) and each output-layer neuron outputs y_d = Sigmoid(β_d - θ_d). For a training example (X_k, Y_k), denote the output of the neural network by Ŷ_k = (ŷ_1^k, ŷ_2^k, ..., ŷ_j^k), i.e. ŷ_d^k = Sigmoid(β_d - θ_d); the mean square error of the network on (X_k, Y_k) is then E_k = (1/2) · Σ_{d=1..j} (ŷ_d^k - y_d^k)².
There are (i + j + 1) × q + j parameters to determine in the whole neural network: i × q weights from the input layer to the hidden layer, q × j weights from the hidden layer to the output layer, q hidden-layer neuron thresholds and j output-neuron thresholds. For the 10-10-2 network used here (i = 10, q = 10, j = 2), this gives (10 + 2 + 1) × 10 + 2 = 132 parameters.
Fourthly, BP is an iterative learning algorithm: in each iteration the parameters are updated and estimated with a generalized perceptron learning rule, the estimation update of any parameter δ having the form δ ← δ + Δδ. The (i + j + 1) × q + j parameters of the neural network are determined by running the BP algorithm on the training set for a specified number of iterations, giving a machine learning model F(x); each performance index of the learning model is then verified on the test set, the model is accepted if it reaches the standard, and otherwise the number of iterations is increased until the performance reaches the standard.
Fifth, the workflow of the BP algorithm is as follows: for each training sample, the BP algorithm presents the input sample to the input-layer neurons and forwards the signals layer by layer until the output layer produces a result; it then computes the output-layer error, propagates the error back to the hidden-layer neurons, and finally adjusts the connection weights and thresholds according to the hidden-layer neuron errors. The iterative process loops until some stopping condition is reached, e.g. the training error has fallen to a small value.
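A minimal sketch of one BP update in the notation used above (γ_th, μ_h, ω_hd, θ_d); the learning rate eta, the initialisation, and the error terms (the standard generalized-delta rule for sigmoid units) are assumptions consistent with, but not quoted from, the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
I, Q, J, eta = 10, 10, 2, 0.1

gamma = rng.uniform(-0.2, 0.2, (I, Q));  mu = np.zeros(Q)
omega = rng.uniform(-0.2, 0.2, (Q, J));  theta = np.zeros(J)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(x: np.ndarray, y_true: np.ndarray) -> float:
    """One forward pass plus one delta <- delta + eta * gradient update."""
    global gamma, mu, omega, theta
    # Forward pass: signals propagate layer by layer.
    b = sigmoid(x @ gamma - mu)                      # hidden outputs b_h
    y = sigmoid(b @ omega - theta)                   # network outputs y_d
    # Output-layer error term g_d and back-propagated hidden error term e_h.
    g = y * (1 - y) * (y_true - y)
    e = b * (1 - b) * (omega @ g)
    # Update weights and thresholds from the error terms.
    omega += eta * np.outer(b, g);  theta -= eta * g
    gamma += eta * np.outer(x, e);  mu    -= eta * e
    return 0.5 * np.sum((y_true - y) ** 2)           # E_k for this sample
```

Looping bp_step over the bootstrap training set for the specified number of iterations, and monitoring the returned E_k, reproduces the workflow described above.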
The neural network machine model F(x) obtained by training on the ordinary-transformer data set is transplanted to the intelligent mutual inductor. F(x) serves as the hidden layer of an ART (adaptive resonance theory) neural network, which performs unsupervised incremental (online) learning while discriminating the state during operation of the intelligent mutual inductor.
The ART neural network is constructed as follows:
First, ART consists of two layers of neurons forming two subsystems: a comparison layer C and a recognition layer R; there are also three control signals, a RESET signal and logic control signals G1 and G2 (see Fig. 5).
Second, the comparison layer C has n nodes, each receiving signals from three sources: an external input signal x_i, a return signal t_ij from the outward weight vector T_j of the winning R-layer neuron, and a control signal from G1.
The output of each C-layer node is generated by the 2/3 "majority voting" rule: the output value c_i takes the value shared by the majority of the three signals x_i, t_ij and G1. When the network starts operating, G1 = 1 and the recognition layer has not yet produced a competition-winning neuron, so the feedback return signal is 0 and, by the 2/3 rule, the C-layer output is determined by the input signal, giving C = X. When the recognition layer returns a feedback signal for comparison, c_i = x_i if x_i = t_ij, otherwise c_i = 0. The control signal G1 thus lets the comparison layer distinguish the stages of network operation: at start-up G1 makes the C layer output the input signal directly, and afterwards the C layer performs its comparison function, in which c_i compares x_i with t_ij and is 1 only when both are 1, and 0 otherwise. In other words, the signal t_ij returned from the R layer regulates the output of the C layer.
Third, the recognition layer R is composed of the aforementioned multilayer feedforward neural network. It has m nodes representing m input pattern classes, and m can grow dynamically to set up new pattern classes. The inward weight vector connecting the C layer to the j-th node of R is denoted B_j = (b_1j, b_2j, ..., b_nj). The output vector C of the C layer travels forward along the m inward weight vectors B_j (j = 1, 2, ..., m); after it reaches the R-layer neurons, competition produces a winning node j that indicates the class of the input pattern.
The winning node outputs r_j = 1, and the remaining nodes output 0. Each R-layer neuron corresponds to two weight vectors: an inward weight vector B_j that gathers the C-layer feedforward signal into the R layer, and an outward weight vector T_j that distributes the R-layer feedback signal to the C layer; T_j is the typical vector of the corresponding R-layer pattern class.
Fourth, the control signals G1, G2 and Reset function as follows: with X0 the logical OR of the elements of the input pattern X and R0 the logical NOR of the elements of R, G1 = X0·R0, i.e. G1 = 1 only when all R-layer output vectors R are 0 and the input X is not all 0, otherwise G1 = 0. The signal G2 detects whether the input pattern X is all 0 and equals the logical OR of the components of X: if all x_i (i = 1, 2, ..., n) are 0 then G2 = 0, otherwise G2 = 1. The Reset signal invalidates the neuron that won the R-layer competition: if, under a preset measurement criterion, the similarity between T_j and X fails to reach the preset similarity ρ, the two are not sufficiently close and the system issues a Reset signal to invalidate the winning neuron.
Fifth, the input layer is responsible for receiving external information and passing the input sample to the competition layer, acting as the observer; the competition layer is responsible for analysis and comparison, analyzing against the known trained model and classifying correctly, and automatically creating a new class if the analysis result does not exist in the known model. The control signals govern the similarity ρ of each layer's analysis result; if a result fails to reach the preset similarity ρ, the analysis is performed again.
Sixth, the operation flow of the ART calculation model (presented as a flow chart in the original drawings) is as follows:
Specifically, at run time the network accepts an input sample from the environment and checks its degree of matching against all classes of the R layer; for the class with the highest matching degree, the network continues to examine the similarity between that class's typical vector and the current input pattern. The similarity is examined according to a pre-designed reference threshold, and one of two situations occurs:
(1) If the similarity exceeds the reference threshold, the pattern class is selected as the representative class of the current input pattern. The weight adjustment rule is that the pattern class whose similarity exceeds the reference threshold adjusts its inward and outward weight vectors, so that a future sample close to the current input pattern obtains an even greater similarity, while the other weight vectors remain unchanged.
(2) If the similarity does not exceed the threshold, the similarity of the R-layer pattern class with the next-highest matching degree is examined; if it exceeds the reference threshold the operation returns to case 1, otherwise it returns to case 2. If the operation keeps returning to case 2, eventually no pattern class has similarity to the current input pattern exceeding the reference threshold; at that point a node representing a new pattern class must be established at the network output to represent and store the pattern so that it can participate in the subsequent matching process.
The network performs the above process for each new input sample it receives. For each input, the network's operation can be summarized in three stages: the identification stage, the comparison stage and the search stage.
(1) Identification phase
Before any input pattern arrives, the network is in a waiting state. At this time the input is X = 0 and the control signal G2 = 0, so the output of the R-layer units is all 0 and every node has the same chance of winning the competition. When the network inputs are not all 0, G2 = 1 is set.
Information flows from bottom to top with G1 = G2·R0 = 1, so by the 2/3 rule the C layer outputs C = X; C is fed upward, acting on the inward weight vectors B and producing a signal that drives the R layer, so competition starts inside the R layer. If the winning node is j, the R layer outputs r_j = 1 and the other nodes output 0.
(2) Comparison phase
The output information of the R layer returns to the C layer from top to bottom: r_j = 1 activates the top-down weight vector T_j connected to R-layer node j, which is returned down to the C layer.
At this time the R-layer outputs are not all 0 and G1 = 0, so the next C-layer output C' depends on the top-down weight vector T_j of the R layer and the input pattern X of the network.
The similarity is tested against a threshold specified in advance; if C' carries sufficiently similar information, the competition is correct, otherwise the competition result is unsatisfactory, a Reset signal is issued to invalidate the last winning node, and that node can no longer win during the matching of the current pattern. The network then enters the search stage.
(3) Search phase
The search stage begins when the Reset signal invalidates the winning node: R becomes all 0, G1 = 1, and the output of the C layer is again the current input pattern X. The network therefore re-enters the identification and comparison stages to obtain a new winning node (the former winning node does not participate in the competition). This repeats until some winning node K matches the input vector X closely enough; pattern X is then assigned to the pattern class connected to R-layer node K, i.e. the bottom-up and top-down weight vectors of that node are modified according to a set rule. If all R-layer output nodes have been searched without finding a pattern close to X, an R-layer node is added to represent X or patterns close to it.
If the similarity exceeds the preset value ρ, j is accepted as the winning node, and the bottom-up and top-down weight vectors of the R-layer node are modified so that inputs similar to X win more easily and obtain higher similarity; the R-layer nodes suppressed by the Reset signal are restored, and the network turns to the comparison stage to meet the next input. Otherwise a Reset signal is asserted, node j is set to 0 (it is not allowed to participate in the competition), and the search stage begins.
The research flow of the intelligent transformer is given as follows:
1. Taking mutual inductors produced by a certain factory as an example, with the parameters shown in Table 1 below, transformers from the same production line were sampled for the type test of the ordinary transformer. Comprehensive analysis of the test data yields a mathematical model of the transformer, T_j[t] = f(X_i). Taking the resistance-measurement and error-measurement tests as examples, the statistical data are shown in Table 2.
Through the type-test mathematical model T_j[t] = f(X_i), a complete parameter model of the mutual inductor can be obtained from the temperature and current values detected in real time.
TABLE 1 (table image not reproduced)
2. Data such as current, temperature and operating-state classification during normal operation of ordinary transformers are collected and combined with the mathematical model to obtain a data set of multiple index parameters.
As shown in Table 4, the data set is divided into a training set and a test set by random sampling (the grey-background data form the training set), and the weights of each layer of the neural network and the connection weights between the layers are obtained with the BP algorithm to determine the feedforward neural network.
The determined feedforward neural network comprises an input layer of 10 neurons, a hidden layer of 10 neurons and an output layer of 2 outputs. Table 5 gives the percentage error rate and the input-layer weights of the first 10 iterations of the neural network training, and Table 6 gives the output-layer weights.
TABLE 2 (table image not reproduced)
TABLE 3 (table image not reproduced)
TABLE 4 (table image not reproduced)
TABLE 5 input layer weights
0.14311935 0.10318176 -0.03177137 -0.09643330 0.00450989 -0.03802635 0.11351944 -0.07867491 -0.00936122 0.03335282
0.16324853 0.00187474 -0.08726486 0.10232168 0.04734760 -0.09979746 0.16389850 0.19311419 0.12408689 0.16086638
-0.07464349 0.09193270 0.15953532 0.07359357 -0.01114291 -0.15971952 -0.02633127 0.04435479 0.16520442 0.18664255
0.18159131 0.14612397 -0.09580308 0.12201113 0.01947972 -0.19438332 0.08788187 -0.04047058 0.12993799 0.06726128
-0.19952562 -0.00256885 0.14704111 -0.10243565 -0.06991825 0.14818849 -0.12357316 0.02700430 -0.10455363 0.18701610
0.12138683 -0.02081217 -0.16782167 -0.07197816 0.00317626 0.17313353 -0.15637686 0.02050690 0.08262456 0.01897636
0.12573770 0.01611344 0.18553542 0.04127425 0.03504683 -0.02200439 0.03851474 -0.04603954 0.03026041 -0.08386820
-0.12337706 -0.12530819 0.04510927 0.06266376 -0.00938760 -0.16407026 0.10304157 0.15070815 0.16935241 0.13698409
0.15932320 0.16923298 0.01623997 -0.04348158 0.08211336 -0.08974635 0.12465148 0.13979439 0.15801559 0.03592047
0.17986211 0.03187800 -0.01977476 0.06409815 0.19850314 0.16677649 0.11733003 -0.16705080 0.04511324 -0.00542232
0.05231305 0.13803103 -0.10278575 0.09259569 -0.15314628 -0.11181579 0.11783319 -0.06698554 0.12636524 -0.15975699
3. The network is deployed into the intelligent mutual inductor as the R layer of the ART neural network, and the intelligent mutual inductor is applied in an actual circuit. The intelligent mutual inductor uses the ART neural network to analyze the acquired current parameters and its own temperature as samples, obtaining the current working state and life-aging condition of the mutual inductor. If the analysis result already exists in the original sample library, the acquired parameters and samples are stored in the memory after iteration of the weight parameters and are output through communication; if it does not exist, the result is added to the sample library as a new class, the acquired parameters and samples are stored in the memory after iteration of the weight parameters and are output through communication, and the new sample library is used for intelligent analysis of newly acquired samples. If the analysis shows faulty operation or a short expected remaining life, a warning or alarm message can be sent with priority over the communication bus. For example, with a measured average temperature and current of [21.0559, 0.0133], the mathematical model yields the full-parameter sample [21.0559, 0.0133, 5.1059, 750.7928, 0.0679, 0.4527, -0.193, 3.4095, 0.2123, -0.1725], and the ART calculation model outputs [0, 0.9], indicating that the transformer works normally with 0.9 of its full life cycle predicted to remain. With a measured average temperature and current of [25.3549, 0.9736], the mathematical model yields the full-parameter sample [25.3549, 0.9736, 13.5636, 1916.5353, 13.2055, 88.0366, 0.0547, 0.2646, 0.2123, -0.1725], and the ART calculation model's output indicates that the transformer is operating in overload, with 0.2 of its full life cycle predicted to remain.
TABLE 6 output layer weights
0.06546346 0.05629297
-0.79611583 0.85110899
0.61711617 -0.41885297
1.74530442 -1.33756798
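The run-time decision logic of the deployment step (3.) above can be summarized in a short hedged sketch; read_sensors, expand_parameters, art_classify and send_alarm are hypothetical stand-ins for the sensor chain, the mathematical model, the ART calculation model and the communication bus, and the alarm thresholds are assumptions, not values from the patent.

```python
def monitor_once(read_sensors, expand_parameters, art_classify, send_alarm):
    """One monitoring cycle: measure, expand to a full-parameter sample, classify."""
    temp, current = read_sensors()                 # e.g. (21.0559, 0.0133)
    sample = expand_parameters(temp, current)      # full 10-parameter vector via the math model
    state, remaining_life = art_classify(sample)   # e.g. (0, 0.9): normal, 90% life left
    if state != 0 or remaining_life < 0.3:         # assumed alarm thresholds
        send_alarm(state, remaining_life)          # priority message on the bus
    return sample, (state, remaining_life)
```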
The above embodiments are provided only for convenience of describing the invention and are not intended to limit it in any way; a person skilled in the art will recognize that the invention can be embodied in many different forms without departing from its spirit and scope.

Claims (5)

1. An intelligent transformer, comprising: the system comprises a current detection module, a temperature detection module, a data storage module, a remote communication module, an intelligent diagnosis module and an intelligent terminal;
the current detection module is used for detecting the current of the lead;
the temperature detection module is used for measuring the temperatures of a plurality of parts of the intelligent mutual inductor;
the data storage module is used for storing the information acquired by the current detection module and the temperature detection module, exchanging information with the information acquired by the intelligent diagnosis module and exchanging information with the intelligent terminal;
the intelligent diagnosis module reads the data of the data storage module, is used for diagnosing the state of the mutual inductor, and can write the diagnosis result into the data storage module;
the intelligent terminal is used for inquiring and displaying the information stored in the data storage module;
the output end of the current detection module is connected with the input end of the data storage module;
the output end of the temperature detection module is connected with the input end of the data storage module;
wherein the data storage module is connected with the intelligent diagnosis module;
the data storage module is connected with the intelligent terminal, and the data storage module is in communication connection with the intelligent terminal through the remote communication module;
the intelligent diagnosis module stores a mutual inductor calculation model, and the mutual inductor calculation model adopts an ART calculation model;
the ART calculation model consists of two layers of neurons forming two subsystems, a comparison layer C and a recognition layer R, plus three control signals: a Reset signal and logic control signals G1 and G2;
wherein the comparison layer C has n nodes, each receiving signals from three sources: an external input signal x_i, a return signal t_ij from the outward weight vector T_j of the winning R-layer neuron, and a control signal from G1; the output of each C-layer node is generated by the 2/3 "majority voting" rule, i.e. the output value c_i takes the value shared by the majority of the three signals x_i, t_ij and G1; when the network starts operating, G1 = 1 and the recognition layer has not yet produced a competition-winning neuron, so the feedback return signal is 0 and, by the 2/3 rule, the C-layer output is determined by the input signal, giving C = X; when the recognition layer returns a comparison signal, c_i = x_i if x_i = t_ij, otherwise c_i = 0; that is, the control signal G1 lets the comparison layer distinguish the stages of network operation: at start-up G1 makes the C layer output the input signal directly, and afterwards the C layer performs its comparison function, in which c_i compares x_i with t_ij and is 1 only when both are 1, and 0 otherwise, i.e. the signal t_ij returned from the R layer regulates the output of the C layer;
the recognition layer R is composed of a multilayer feedforward neural network and has m nodes representing m input pattern classes, where m can grow dynamically to establish new pattern classes; the inward weight vector connecting the C layer to the j-th node of R is denoted B_j = (b_1j, b_2j, ..., b_nj); the output vector C of the C layer is fed forward along the m inward weight vectors B_j, j = 1, 2, ..., m, and after reaching the R-layer neurons, competition produces a winning node j indicating the class of the current input pattern; the winning node outputs r_j = 1 and the remaining nodes output 0;
each R-layer neuron corresponds to two weight vectors: an inward weight vector B_j that gathers the C-layer feedforward signal into the R layer, and an outward weight vector T_j that distributes the R-layer feedback signal to the C layer, T_j being the typical vector of the corresponding R-layer pattern class;
the control signals G1, G2 and Reset function as follows: with X0 the logical OR of the elements of the input pattern X and R0 the logical NOR of the elements of R, G1 = X0·R0, i.e. G1 = 1 only when all R-layer output vectors R are 0 and the input X is not all 0, otherwise G1 = 0; the signal G2 detects whether the input pattern X is all 0 and equals the logical OR of the components of X, so that if all x_i, i = 1, 2, ..., n, are 0 then G2 = 0, otherwise G2 = 1; the Reset signal invalidates the neuron that won the R-layer competition: if, under a preset measurement criterion, the similarity between T_j and X fails to reach the preset similarity ρ, the two are not sufficiently close and the system issues a Reset signal to invalidate the winning neuron;
the input layer is responsible for receiving external information and passing the input sample to the competition layer, acting as the observer; the competition layer is responsible for analysis and comparison, analyzing against the known trained model and classifying correctly, automatically creating a new class if the analysis result does not exist in the known model; the control signals govern the similarity ρ of each layer's analysis result, and if a result fails to reach the preset similarity ρ, the analysis is performed again.
2. The intelligent transformer according to claim 1, wherein the temperature detection module uses thermistor sensors installed at a plurality of positions on the transformer to detect its temperature; eight patch-type thermistor sensors attached evenly to the inner and outer rings of the mutual inductor form a Wheatstone bridge that detects the temperature at each point and, at the same time, the difference between the cable temperature and the ambient temperature, and a high-resolution analog-to-digital conversion chip converts the sensed signal into a digital signal that is then transmitted to the data storage module.
3. The intelligent transformer of claim 1, wherein the data storage module is a removable data storage device.
4. The intelligent mutual inductor according to claim 1, wherein the intelligent diagnosis module uses a digital processing chip that can acquire and store the collected current and temperature information, communicate remotely, and perform intelligent diagnosis, and a shielding enclosure made of metal mesh protects the digital chip from electromagnetic interference under strong magnetic fields.
5. The intelligent mutual inductor according to claim 1, wherein the recognition layer R is a feedforward neural network model, namely a multilayer feedforward neural network formed from two layers of neurons and trained with the BP neural network algorithm;
the feedforward neural network comprises an input layer of 10 neurons, a hidden layer of 10 neurons, and an output layer of 2 outputs; the input layer corresponds to: temperature, current, resistance, voltage, load ratio, current ratio difference, current phase difference, composite error and deviation; the output layer corresponds to: operating state and remaining life.
CN202011149506.0A 2020-10-23 2020-10-23 Intelligent mutual inductor Active CN112763780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011149506.0A CN112763780B (en) 2020-10-23 2020-10-23 Intelligent mutual inductor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011149506.0A CN112763780B (en) 2020-10-23 2020-10-23 Intelligent mutual inductor

Publications (2)

Publication Number Publication Date
CN112763780A CN112763780A (en) 2021-05-07
CN112763780B (en) 2022-11-18

Family

ID=75693126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011149506.0A Active CN112763780B (en) 2020-10-23 2020-10-23 Intelligent mutual inductor

Country Status (1)

Country Link
CN (1) CN112763780B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116029220B (en) * 2023-03-24 2023-07-18 国网福建省电力有限公司 Voltage transformer operation error assessment method, system, equipment and medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101113510B1 (en) * 2009-12-28 2012-03-02 주식회사 효성 Diagnosis control system for transformer
CN202057425U (en) * 2011-05-06 2011-11-30 自贡市三人实业有限公司 An intelligent long-distance temperature acquisition and transmission module
CN106680755A (en) * 2015-11-10 2017-05-17 中国电力科学研究院 Extra-high voltage all-fiber current transformer temperature cycling test device and method
US10132697B2 (en) * 2015-12-23 2018-11-20 Schneider Electric USA, Inc. Current transformer with enhanced temperature measurement functions
US11513480B2 (en) * 2018-03-27 2022-11-29 Terminus (Beijing) Technology Co., Ltd. Method and device for automatically diagnosing and controlling apparatus in intelligent building
CN109520548A (en) * 2018-11-28 2019-03-26 徐州江煤科技有限公司 A kind of Multifunction Sensor trouble-shooter
CN109828227A (en) * 2019-01-24 2019-05-31 东南大学 A kind of electronic current mutual inductor method for diagnosing faults based on current information feature
CN110661342B (en) * 2019-10-22 2023-04-11 成都高斯电子技术有限公司 Electrical equipment hidden danger monitoring system and working method thereof

Also Published As

Publication number Publication date
CN112763780A (en) 2021-05-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant