CN115758899B - Transformer fault diagnosis system based on graph Markov neural network - Google Patents


Info

Publication number
CN115758899B
Authority
CN
China
Prior art keywords: target, fault data, fault, feature, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211486848.0A
Other languages
Chinese (zh)
Other versions
CN115758899A (en)
Inventor
杨会轩
张瑞照
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huaqing Zhihui Energy Technology Co ltd
Shandong Huake Information Technology Co ltd
Original Assignee
Beijing Huaqing Zhihui Energy Technology Co ltd
Shandong Huake Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huaqing Zhihui Energy Technology Co ltd, Shandong Huake Information Technology Co ltd filed Critical Beijing Huaqing Zhihui Energy Technology Co ltd
Priority to CN202211486848.0A
Publication of CN115758899A
Application granted
Publication of CN115758899B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 - Computer-aided design [CAD]
    • G06F 30/20 - Design optimisation, verification or simulation
    • G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 - INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S - SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 - Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 - Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The present disclosure describes a system for diagnosing transformer faults based on a graph Markov neural network. The diagnostic system comprises an acquisition module and a prediction module. The acquisition module is configured to acquire a plurality of fault data comprising initial features whose type is text. The prediction module is configured to preprocess the plurality of fault data to obtain a plurality of target fault data comprising a plurality of target features; to construct a diagnostic model based on the graph Markov neural network that simultaneously models the dependency relationships between the fault types of the target fault data and the feature representations of the target fault data; to construct a graph structure from the target fault data and optimize the diagnostic model on that graph structure to obtain a target diagnostic model; and to preprocess fault data to be diagnosed and input it into the target diagnostic model, which outputs the fault type to which the fault data belongs. In this way, the accuracy of fault type prediction can be improved.

Description

Transformer fault diagnosis system based on graph Markov neural network
The present application is a divisional application of the patent application filed on June 28, 2021, with application number 2021107198738, entitled "Method for diagnosing transformer faults based on a graph Markov neural network".
Technical Field
The present disclosure relates generally to the field of transformer fault diagnosis, and more particularly to a system for diagnosing transformer faults based on a graph Markov neural network.
Background
The transformer is the core equipment of a power grid. If a transformer fails, the stable operation of the entire grid is seriously threatened, residents' electricity supply is affected, and power outages can trigger further problems with disastrous consequences. It is therefore important to diagnose transformer faults accurately and in time, so that the transformer can be maintained precisely once the fault type is determined, improving the safety and reliability of the power grid.
Existing transformer fault diagnosis methods generally collect fault data from a transformer and predict its fault type with machine learning algorithms such as decision trees, support vector machines, clustering, and association analysis. However, because transformer fault data usually contain a large amount of text-type data, preprocessing the collected data into a form usable for training a machine learning model often takes a long time. Moreover, existing methods tend to focus on predicting the fault type alone while neglecting the dependency relationships between fault types. The accuracy of transformer fault type diagnosis therefore still needs improvement.
Disclosure of Invention
In view of the above, and based on extensive research and experimentation, the present disclosure provides a method for diagnosing transformer faults based on a graph Markov neural network that simultaneously models the dependency relationships between the fault types of fault data and the feature representations of the fault data, and can thereby improve the accuracy of fault type prediction.
To this end, a first aspect of the present disclosure provides a method for diagnosing transformer faults based on a graph Markov neural network, comprising: obtaining a plurality of fault data from a plurality of transformers, each fault data comprising a plurality of initial features, the types of the initial features including text; preprocessing the plurality of fault data to obtain a plurality of target fault data comprising a plurality of target features, the plurality of target fault data including data of known fault type and data of unknown fault type, wherein the preprocessing comprises missing value processing and sequence vector construction: the missing value processing is performed on the plurality of fault data to obtain a plurality of first fault data comprising a plurality of first target features, and the sequence vector construction updates each first target feature whose type is text into a sequence vector using a continuous bag-of-words model, takes the updated first target features as second target features to obtain a plurality of second fault data comprising a plurality of second target features, and takes the second fault data as the target fault data and the second target features as the target features, the continuous bag-of-words model being trained with the values of the text-type first target features; constructing a diagnostic model based on a graph Markov neural network to model both the dependency relationships between the fault types of the target fault data and the feature representations of the target fault data, constructing a graph structure using the target fault data, and optimizing the diagnostic model on that graph structure to obtain a target diagnostic model, wherein the graph structure G is represented as G = (V, E, x_V), with V the set of target fault data, x_V the set of target features of the target fault data, and E the set of relationships between the respective target fault data; and preprocessing the fault data to be diagnosed and inputting it into the target diagnostic model to output the fault type to which the fault data to be diagnosed belongs, wherein the initial features include the concentration of copper in the transformer's oil, the concentration of iron in the transformer's oil, the content of dissolved gas in the transformer's oil, and defect information of the transformer.
In the present disclosure, the fault data, their features, and the relationships between them can be represented by a graph structure and used to train the diagnostic model based on the graph Markov neural network; furthermore, the text-type feature values in the transformer fault data are converted into sequence vectors by the continuous bag-of-words model. In this case, the feature values in the transformer fault data can be converted into sequence vectors quickly and accurately, improving preprocessing efficiency, while the dependency relationships between fault types and the feature representations of the fault data are modeled simultaneously, so that fault types can be predicted in combination with the dependencies between them. Thus, the accuracy of fault type prediction can be improved.
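The graph structure G = (V, E, x_V) described above can be sketched in code. The patent does not specify how the edge set E is constructed; the k-nearest-neighbour rule, the feature values, and the function name below are illustrative assumptions only.

```python
import numpy as np

def build_graph(x_V, k=1):
    """Return (V, E, x_V): node indices, undirected edge set, feature matrix."""
    m = x_V.shape[0]
    V = list(range(m))
    E = set()
    for i in range(m):
        d = np.linalg.norm(x_V - x_V[i], axis=1)
        d[i] = np.inf                      # no self-loops
        for j in np.argsort(d)[:k]:        # k nearest neighbours (assumed edge rule)
            E.add((min(i, int(j)), max(i, int(j))))
    return V, E, x_V

# Five target fault data with three target features each (synthetic values).
x = np.array([[0.10, 0.20, 0.00],
              [0.10, 0.25, 0.05],
              [0.90, 0.80, 1.00],
              [0.95, 0.85, 1.00],
              [0.50, 0.50, 0.50]])
V, E, _ = build_graph(x, k=1)
```

With k = 1, each record is linked to its single most similar record, so the two overheating-like records (nodes 0 and 1) and the two discharge-like records (nodes 2 and 3) end up connected.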
In addition, in the diagnostic method according to the first aspect of the present disclosure, optionally, the missing value processing detects the missing proportion of each initial feature, deletes initial features whose missing proportion is greater than a preset proportion, and fills in the missing values of initial features whose missing proportion is not greater than the preset proportion to obtain the first target features. Thereby, the fault data used for training can be made complete.
In addition, in the diagnostic method according to the first aspect of the present disclosure, optionally, the preprocessing further includes preliminary deduplication, which preserves at least one initial feature among a plurality of correlated initial features. Thus, the initial features in the fault data can be screened preliminarily to quickly reduce the feature dimensionality.
In addition, in the diagnostic method according to the first aspect of the present disclosure, optionally, the preprocessing further includes feature dimension reduction and data normalization. The feature dimension reduction trains a linear regression or logistic regression to extract a voting coefficient for each second target feature of the second fault data, ranks the features by importance to obtain those whose importance is greater than a preset importance, and then extracts the main features among them by principal component analysis and/or factor analysis. The data normalization standardizes the second fault data using the mean and variance of the second target features. Thus, the main features can be extracted and the second fault data can be normalized.
Further, in the diagnostic method according to the first aspect of the present disclosure, optionally, the initial features further include at least one of the temperature of the transformer's oil, the transformer's device model, its manufacturer, its service life, its load, the number of sudden short circuits it has experienced, severe-weather information, and its insulation aging condition, wherein the dissolved gas includes hydrogen, methane, ethane, ethylene, and acetylene. Thus, more features can be acquired for subsequent preprocessing.
In addition, in the diagnostic method according to the first aspect of the present disclosure, optionally, the values of the text-type first target features are represented by one-hot encoding to obtain a plurality of one-hot encoded vectors, and each one-hot encoded vector is multiplied by a first weight matrix to obtain the sequence vector corresponding to that value of the first target feature, where the first weight matrix is obtained by training the continuous bag-of-words model with the plurality of one-hot encoded vectors. In this case, text-type first target features are quickly transformed into sequence vectors by the continuous bag-of-words model, which improves preprocessing efficiency.
Further, in the diagnostic method according to the first aspect of the present disclosure, optionally, the diagnostic model models the joint distribution of the fault types of the target fault data conditioned on the target features using a conditional random field, and is optimized with a variational EM algorithm comprising an E step and an M step: in the E step, a first graph neural network learns the feature representations of the target fault data to predict fault types, and in the M step, a second graph neural network models the dependency relationships between the fault types of the target fault data. In this case, fault types can be predicted in combination with the dependencies between them, improving the accuracy of fault type prediction.
In addition, in the diagnostic method according to the first aspect of the present disclosure, optionally, the fault type includes high-temperature overheat, medium-low-temperature overheat, high-energy discharge, low-energy discharge, discharge-combined overheat, and partial discharge. Thus, various fault types can be predicted.
A second aspect of the present disclosure provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the diagnostic method described above when the computer program is executed by the processor.
A third aspect of the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the diagnostic method described above.
According to the present disclosure, a method of diagnosing transformer faults based on a graph Markov neural network can be provided that simultaneously models the dependency relationships between the fault types of fault data and the feature representations of the fault data, and can improve the accuracy of fault type prediction.
Drawings
The present disclosure will now be explained in further detail by way of example only with reference to the accompanying drawings, in which:
Fig. 1 is a schematic view showing an application scenario of a method of diagnosing transformer faults based on a graph Markov neural network according to an example of the present disclosure.
Fig. 2 is a schematic diagram illustrating a variational EM algorithm to which examples of the present disclosure relate.
Fig. 3 is a flow chart illustrating a method of training a diagnostic model based on a graph Markov neural network in accordance with examples of the present disclosure.
Fig. 4 is a flow chart illustrating a preprocessing process involved in examples of the present disclosure.
Fig. 5 is a flow chart illustrating a method of diagnosing transformer faults based on a graph Markov neural network according to an example of the present disclosure.
Fig. 6 is a block diagram illustrating a diagnostic system for transformer faults based on a graph Markov neural network according to examples of the present disclosure.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same members are denoted by the same reference numerals, and duplicate descriptions are omitted. The drawings are schematic, and the relative sizes and shapes of components may differ from the actual ones. It should be noted that the terms "comprises" and "comprising", and any variations thereof, are intended to be non-exclusive in this disclosure: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to it. All methods described in this disclosure can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
The diagnosis method and system for transformer faults based on the graph Markov neural network according to the present disclosure can quickly and accurately convert the feature values in transformer fault data into sequence vectors, simultaneously model the dependency relationships between the fault types of the fault data and the feature representations (object representations) of the fault data, and then predict fault types in combination with the dependencies between them. Thus, the accuracy of fault type prediction can be improved. The diagnostic method according to the present disclosure is applied in a diagnostic system (described later). The present disclosure is described in detail below with reference to the accompanying drawings. The application scenarios described in the examples of the present disclosure serve to explain the technical solution more clearly and do not limit it.
Fig. 1 is a schematic view showing an application scenario of the method of diagnosing transformer faults based on a graph Markov neural network according to an example of the present disclosure. As shown in fig. 1, the monitoring server 20 may store information about the transformer 10, such as manufacturer information, and collect data of the transformer 10, such as fault data. In some examples, the diagnostic method according to the present disclosure may be stored in the form of computer program instructions in the monitoring server 20 and executed by it; by executing the diagnostic method, the monitoring server 20 can predict the fault type of the transformer 10 from its fault data. In some examples, the monitoring client 30 may obtain the data of the transformer 10 collected by the monitoring server 20, and if abnormal data occur, the monitoring client 30 may notify the corresponding maintenance personnel 40 of the predicted fault type so that maintenance can be carried out accordingly. In this way, precise maintenance can be realized.
In some examples, the monitoring server 20 may include one or more processors and one or more memories. The processor may include a central processing unit, a graphics processing unit, and any other electronic component capable of processing data and executing computer program instructions. The memory may be used to store computer program instructions. In some examples, the diagnostic method may be implemented by executing the computer program instructions in the memory. In some examples, the monitoring server 20 may also be a cloud server. Additionally, in some examples, the monitoring client 30 may be a smartphone, a notebook computer, a personal computer (PC), or another type of electronic device. In some examples, the maintenance personnel 40 may be personnel with expertise in maintaining transformers.
Fig. 2 is a schematic diagram illustrating the variational EM algorithm to which examples of the present disclosure relate. As described above, the method of diagnosing transformer faults based on the graph Markov neural network according to the present disclosure can model both the dependency relationships between the fault types of the fault data and the feature representations of the fault data. In general, a graph Markov neural network (GMNN) combines the advantages of statistical relational learning (SRL) and graph neural networks (GNN). In some examples, the joint distribution of the fault types of the fault data may be modeled with a conditional random field (CRF) to obtain a diagnostic model based on the graph Markov neural network (described later), and the diagnostic model is trained with a variational EM algorithm, which learns not only an effective feature representation of the fault data but also the dependencies of fault types between different fault data. In some examples, as shown in fig. 2, the variational EM algorithm may alternate between an E step and an M step, updating the variational distribution q_θ and the joint distribution p_φ (described later) until convergence.
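The E-step/M-step alternation can be illustrated with a deliberately simplified sketch. A real GMNN parameterizes q_θ and p_φ with two graph neural networks; here each is replaced by the simplest conceivable stand-in (nearest labelled centroid for the E step, neighbour majority vote for the M step) purely to show the alternating update, and all data are invented.

```python
import numpy as np

def e_step(features, labels, known):
    """Stand-in for q_theta: give each unlabelled node the class of the nearest labelled centroid."""
    classes = sorted({labels[i] for i in known})
    centroids = {c: features[[i for i in known if labels[i] == c]].mean(axis=0)
                 for c in classes}
    out = dict(labels)
    for i in range(len(features)):
        if i not in known:
            out[i] = min(classes, key=lambda c: np.linalg.norm(features[i] - centroids[c]))
    return out

def m_step(labels, edges):
    """Stand-in for p_phi: relabel each node by the majority class among its neighbours."""
    out = dict(labels)
    for i in out:
        nbr = [labels[b] for a, b in edges if a == i] + [labels[a] for a, b in edges if b == i]
        if nbr:
            out[i] = max(set(nbr), key=nbr.count)
    return out

# Four fault records: two labelled (nodes 0 and 2), two unlabelled.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
edges = [(0, 1), (2, 3)]
labels = {0: "overheat", 1: None, 2: "discharge", 3: None}
known = {0, 2}
for _ in range(2):          # alternate E step and M step until (here trivially) stable
    labels = e_step(feats, labels, known)
    labels = m_step(labels, edges)
```

The point of the sketch is the control flow: the E step fills in pseudo-labels from features, the M step revises them from the graph neighbourhood, and the two are repeated until the labelling stabilizes.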
The following describes a method of training a diagnostic model based on a graph Markov neural network in conjunction with the accompanying drawings. Fig. 3 is a flow chart illustrating such a training method in accordance with examples of the present disclosure. In some examples, as shown in fig. 3, the training method may include acquiring a plurality of fault data (step S110); in step S110, the plurality of fault data may be acquired from the plurality of transformers 10. For example, the fault data can be obtained by observing the operating conditions of 3000 to 5000 transformers 10, together with defect records and data provided by grid personnel. Because transformer data are not easily collected, in some examples initial fault data (e.g., initial features or incomplete feature values) may be collected first and the plurality of fault data obtained by continuously updating the collected data. In some examples, the plurality of transformers 10 may come from different sources, e.g., different provinces or cities. This can improve the generalization ability of the diagnostic model.
Additionally, in some examples, each fault data may include a plurality of initial features. In some examples, the type of an initial feature may include text; text-type features can then be quickly and accurately converted into sequence vectors by the continuous bag-of-words model. In some examples, the plurality of fault data may be arranged as a two-dimensional array or matrix, where each row represents one fault data and each column represents the values of one initial feature across the fault data. In some examples, the type of an initial feature may also be numeric.
Additionally, in some examples, the initial features may include the concentration of copper in the oil of the transformer 10, the concentration of iron in the oil, the content of dissolved gas in the oil, and defect information of the transformer 10, where the oil of the transformer 10 refers to the oil in its tank. The defect information describes faults of the transformer 10 that have already occurred. For example, it may include, but is not limited to, one or more of abnormal noise, poor core grounding, oil leakage, overvoltage or overload events, and excessive oil temperature. In this case, analyzing the defect information makes it possible to determine the fault type, which can then serve as a gold standard for training the diagnostic model; that is, the fault type can be determined based on the defect information. In some examples, the fault type may likewise be determined from the content of dissolved gas in the oil of the transformer 10 and used as a gold standard for training. Thereby, as much fault data of known fault type as possible can be acquired for training.
In addition, in some examples, the initial features further include at least one of the temperature of the oil of the transformer 10, the device model of the transformer 10, the manufacturer of the transformer 10, the operational age of the transformer 10, the load of the transformer 10, the number of sudden short circuits of the transformer 10, severe-weather information, and the insulation aging condition. Thus, more features can be acquired for subsequent preprocessing. In some examples, the insulation aging condition may indicate whether the transformer 10 is aging. For example, the insulation aging of the transformer 10 may be classified into four classes: a first class (insulation good), a second class (insulation acceptable), a third class (insulation unreliable), and a fourth class (insulation aged). In some examples, the dissolved gas may include hydrogen (H2), methane (CH4), ethane (C2H6), ethylene (C2H4), and acetylene (C2H2).
In some examples, as shown in fig. 3, the training method may include preprocessing the plurality of fault data to obtain a plurality of target fault data (step S120). In some examples, each target fault data may include a plurality of target features. In some examples, the plurality of target fault data may include data of known fault type and data of unknown fault type, which enables the diagnostic model to be trained in a semi-supervised manner. In some examples, the preprocessing may include missing value processing and sequence vector construction.
In addition, in some examples, the missing value processing may detect the missing proportion of each initial feature, delete initial features whose missing proportion is equal to or greater than a preset proportion (for example, 50%), and fill in the missing values of those below the preset proportion. Thereby, the fault data used for training can be made complete. In some examples, methods of missing value filling may include, but are not limited to, mean filling, random interpolation, median filling, dummy-variable filling, and the like. For example, a missing value of an initial feature of one transformer 10, such as its acetylene (C2H2) content, may be filled with the average acetylene content of the other transformers 10. Additionally, in some examples, missing value processing of the plurality of fault data yields a plurality of first fault data, each of which may include a plurality of first target features. In some examples, the number of first target features may be smaller than the number of initial features.
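A minimal pandas sketch of this missing-value step, with invented column names and the 50% threshold mentioned in the text:

```python
import numpy as np
import pandas as pd

# Three invented initial features; "humid" is 75% missing and gets dropped,
# while the single gaps in "C2H2" and "H2" are mean-filled.
df = pd.DataFrame({
    "C2H2":  [1.2, np.nan, 1.0, 1.4],
    "H2":    [5.0, 6.0, np.nan, 7.0],
    "humid": [np.nan, np.nan, np.nan, 0.6],
})
threshold = 0.5                                  # preset proportion from the text
keep = df.columns[df.isna().mean() < threshold]  # drop if missing share >= 50%
first_fault_data = df[keep].fillna(df[keep].mean())
```

The gap in `C2H2` is filled with the mean of the remaining values (1.2), matching the acetylene example in the paragraph above.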
Additionally, in some examples, the sequence vector construction may update each text-type first target feature into a sequence vector using a Continuous Bag-of-Words model (CBOW) and take the updated first target feature as a second target feature. The resulting numeric representation allows subsequent preprocessing algorithms to operate on the data. In some examples, a plurality of second fault data may be obtained via the sequence vector construction, where each second fault data may include a plurality of second target features. In some examples, the second fault data may be taken as the target fault data and the second target features as the target features. In some examples, the target fault data may form an m×n matrix, where m is the number of target fault data and n is the number of target features.
In some examples, the continuous bag-of-words model may be trained with the values of the text-type first target features, so its training can be completed quickly. In some examples, the number of such values is large, e.g., 3000 to 5000; in that case the text-type first target features can be updated into sequence vectors quickly and accurately by the continuous bag-of-words model. In some examples, the values of the text-type first target features may be represented by one-hot encoding to obtain a plurality of one-hot encoded vectors, and each one-hot encoded vector is multiplied by a first weight matrix, obtained by training the continuous bag-of-words model with the plurality of one-hot encoded vectors, to yield the sequence vector corresponding to that value. In this case, text-type first target features are quickly transformed into sequence vectors, which improves preprocessing efficiency.
Specifically, the training process of the continuous bag-of-words model may proceed as follows: the values of the text-type first target features are represented by one-hot encoding to obtain a plurality of one-hot encoded vectors; one of these vectors is taken as the center vector and the others as context vectors; each context vector is multiplied by the first weight matrix to obtain a plurality of first sequence vectors; the first sequence vectors are added and averaged to obtain a second sequence vector; the second sequence vector is multiplied by a second weight matrix to obtain a third sequence vector; the third sequence vector is processed with an activation function to obtain a probability distribution; and the first and second weight matrices are updated by backpropagation, updating the probability distribution until its error with respect to the center vector meets a preset error. The size of the first weight matrix may be dim×number and that of the second weight matrix number×dim, where dim is the number of one-hot encoded vectors and number is the dimension of the sequence vector; the initial values of both weight matrices may be random. The first weight matrix finally obtained is then the matrix multiplied by each one-hot encoded vector to obtain the sequence vector corresponding to the value of a first target feature.
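The training loop above can be sketched in numpy. The vocabulary, the dimensions, and the single (context, centre) training pair are invented for illustration; a softmax plus cross-entropy gradient stands in for the activation function and preset-error check described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["oil", "leakage", "core", "grounding", "overload"]  # hypothetical tokens
dim, number = len(vocab), 3          # dim x number and number x dim, as in the text
W1 = rng.normal(size=(dim, number))  # first weight matrix (rows become the sequence vectors)
W2 = rng.normal(size=(number, dim))  # second weight matrix

def one_hot(i):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

def train_step(context_ids, centre_id, lr=0.1):
    """One CBOW update: average context embeddings, softmax, backpropagate."""
    global W1, W2
    h = np.mean([one_hot(i) @ W1 for i in context_ids], axis=0)  # second sequence vector
    scores = h @ W2                                              # third sequence vector
    p = np.exp(scores - scores.max())
    p /= p.sum()                                                 # probability distribution
    err = p - one_hot(centre_id)                                 # gradient w.r.t. scores
    dh = W2 @ err
    W2 -= lr * np.outer(h, err)
    for i in context_ids:                                        # update only context rows
        W1[i] -= lr * dh / len(context_ids)
    return -np.log(p[centre_id])                                 # cross-entropy loss

losses = [train_step([0, 2], 1) for _ in range(50)]
embedding = one_hot(1) @ W1    # sequence vector for the token "leakage"
```

Multiplying a one-hot vector by the trained W1 simply selects the corresponding row, which is exactly the lookup described in the preceding paragraphs.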
Examples of the present disclosure are not limited thereto; in other examples, a separate continuous bag-of-words model may be trained for each text-type first target feature using that feature's values.
In some examples, the preprocessing in step S120 may also include preliminary deduplication. In some examples, the preliminary deduplication may preserve at least one initial feature among a plurality of correlated initial features. For example, if one initial feature of the transformer 10 is obtained by a corresponding calculation from another initial feature, either that initial feature or the other initial feature may be retained. The other initial feature may be a single initial feature or multiple initial features. Thus, the initial features in the fault data can be preliminarily screened to quickly reduce the dimensionality of the features. Examples of the present disclosure are not limited thereto, and in some examples, less relevant initial features, such as air humidity, may be discarded.
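One way to realize this preliminary deduplication is to drop all but one of any group of highly correlated columns. The pandas-based sketch below and its 0.95 correlation threshold are assumptions for illustration; the patent only states that correlated features are reduced to a representative one:

```python
import numpy as np
import pandas as pd

def preliminary_dedup(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Keep at least one of each group of highly correlated initial features."""
    corr = df.corr().abs()
    # Look only at the upper triangle so each pair is considered once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# Hypothetical fault data: total_gas is computed from h2, so they are redundant.
data = pd.DataFrame({
    "h2": [10.0, 12.0, 9.0, 15.0],
    "total_gas": [20.0, 24.0, 18.0, 30.0],   # derived from h2, correlation 1.0
    "oil_temp": [55.0, 40.0, 62.0, 50.0],
})
reduced = preliminary_dedup(data)
```

Here `total_gas` is dropped while `h2` and the weakly correlated `oil_temp` survive.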
In some examples, the preprocessing in step S120 may further include feature dimension reduction processing and data normalization processing. In some examples, the feature dimension reduction processing may extract voting coefficients of the respective second target features of the second fault data using trained linear regression (Linear Regression) or logistic regression (Logistic Regression) and rank them by importance to obtain the second target features whose importance is greater than a preset importance, and then extract the main features of those second target features using principal component analysis (Principal Component Analysis, PCA) and/or factor analysis (Factor Analysis, FA). Thus, the main features can be extracted. In some examples, the main features may be the target features. In some examples, the preset importance may be set according to actual conditions.
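A minimal scikit-learn sketch of this two-stage reduction follows. The toy data, the top-3 cutoff (standing in for the "preset importance"), and the choice of 2 principal components are all illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy second fault data: 200 samples, 6 second target features, 3 fault types.
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)

# Voting coefficients: magnitude of the logistic-regression weights per feature.
clf = LogisticRegression(max_iter=1000).fit(X, y)
importance = np.abs(clf.coef_).mean(axis=0)

keep = np.argsort(importance)[::-1][:3]   # rank by importance, keep the top 3
X_kept = X[:, keep]

# Extract the main features of the retained columns with PCA.
main_features = PCA(n_components=2).fit_transform(X_kept)
```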
Additionally, in some examples, the data normalization processing may normalize the second fault data using the mean and variance of the second target feature. Thereby, the second fault data can be normalized. Specifically, the difference between the value of the second target feature and the mean may be divided by the variance, i.e., (value of the second target feature − mean)/variance. In some examples, the second fault data processed via data normalization may be the target fault data.
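The normalization formula above can be applied column by column. Note that the text specifies dividing by the variance, whereas the more common z-score divides by the standard deviation; the sketch follows the text:

```python
import numpy as np

def normalize_feature(values: np.ndarray) -> np.ndarray:
    # As described: (value of the second target feature - mean) / variance.
    return (values - values.mean()) / values.var()

col = np.array([2.0, 4.0, 6.0, 8.0])   # mean 5.0, variance 5.0
normed = normalize_feature(col)
```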
Fig. 4 is a flow chart illustrating a preprocessing process involved in examples of the present disclosure.
To better illustrate the preprocessing described above, fig. 4 shows the preprocessing flow. In some examples, the preprocessing may proceed in the order of missing value processing (step S121), preliminary deduplication (step S122), construction sequence vector processing (step S123), feature dimension reduction processing (step S124), and data normalization processing (step S125). However, examples of the present disclosure are not limited thereto; in other examples, the preliminary deduplication may be performed before the missing value processing, or the data normalization processing may be performed before the feature dimension reduction processing.
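Because the steps are independent and their order may be permuted, the pipeline can be modeled as an ordered list of functions. The placeholder steps below are illustrative stand-ins for the processing described above:

```python
# A minimal sketch of the preprocessing pipeline order of Fig. 4.
def missing_value_step(data):        # step S121: drop rows with missing values
    return [row for row in data if None not in row]

def preliminary_dedup_step(data):    # step S122: drop a feature derived from another
    return [row[:-1] for row in data]

def sequence_vector_step(data):      # step S123: text values -> sequence vectors
    return data

def dim_reduction_step(data):        # step S124: keep only main features
    return data

def normalization_step(data):        # step S125: (value - mean) / variance
    return data

PIPELINE = [missing_value_step, preliminary_dedup_step,
            sequence_vector_step, dim_reduction_step, normalization_step]

def preprocess(data, steps=PIPELINE):
    # Reordering `steps` realizes the alternative orders mentioned above,
    # e.g. S122 before S121.
    for step in steps:
        data = step(data)
    return data

raw = [(1.0, 2.0, 3.0), (4.0, None, 5.0)]
clean = preprocess(raw)
```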
In some examples, as shown in fig. 3, the training method may include constructing a diagnostic model based on a graph markov neural network and optimizing the diagnostic model with target fault data to obtain a target diagnostic model (step S130). In step S130, a diagnostic model based on the graph markov neural network is constructed to model both the dependency relationship between the fault types of the target fault data and the characteristic representation of the target fault data. In this case, the failure type can be predicted in combination with the dependency relationship between the failure types. Thus, the accuracy of the failure type prediction can be improved. In some examples, fault types may include high temperature superheat, medium low temperature superheat, high energy discharge, low energy discharge, discharge and superheat, and partial discharge. Thus, various fault types can be predicted.
In some examples, a graph structure may be constructed using the target fault data, and the diagnostic model may be optimized based on the graph structure to obtain the target diagnosis model, where the graph structure G may be represented as G = (V, E, x_V), V is the set of the plurality of target fault data, x_V is the set of target features of the plurality of target fault data, and E is the set of relationships between the respective target fault data.
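A minimal container for G = (V, E, x_V) might look as follows; the class name, the undirected-edge convention, and the toy feature vectors are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class FaultGraph:
    """G = (V, E, x_V): nodes are target fault data, x_V their target features,
    E the pairwise relationships between fault data."""
    V: list = field(default_factory=list)    # node ids
    E: set = field(default_factory=set)      # undirected edges stored as (min, max)
    x_V: dict = field(default_factory=dict)  # node id -> target-feature vector

    def add_node(self, n, features):
        self.V.append(n)
        self.x_V[n] = features

    def add_edge(self, i, j):
        self.E.add((min(i, j), max(i, j)))

    def neighbors(self, n):
        return [b if a == n else a for a, b in self.E if n in (a, b)]

g = FaultGraph()
g.add_node(0, [0.1, 0.5])
g.add_node(1, [0.2, 0.4])
g.add_node(2, [0.9, 0.1])
g.add_edge(0, 1)
g.add_edge(1, 2)
```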
In some examples, the diagnostic model may model the joint distribution of the fault types of the target fault data conditioned on the target features using a conditional random field and be optimized using a variational EM algorithm. As shown in fig. 2, the variational EM algorithm may include an E-step, in which a feature representation of the target fault data is learned by a first graph neural network to predict fault types, and an M-step, in which the dependencies between the fault types of the target fault data are modeled by a second graph neural network. In this case, the fault type can be predicted in combination with the dependency relationships between fault types. Thus, the accuracy of fault type prediction can be improved.
Specifically, the following describes how the diagnostic model based on the graph Markov neural network is trained with the target fault data to obtain the target diagnosis model. First, the first graph neural network is pre-trained with data of known fault types to obtain an initial variational distribution q_θ, where q_θ models the distribution of individual target fault data using amortized inference (Amortized Inference) and is parameterized by the first graph neural network. The variational distribution q_θ can be expressed as:

q_θ(y_n | x_V) = Cat(y_n | softmax(W_θ h_θ,n)),

where Cat is a categorical distribution, n is the index of a data item of unknown fault type, h_θ,n is the feature representation of the n-th data item of unknown fault type obtained by training the first graph neural network on the set x_V of target features of the plurality of target fault data, θ is the parameter of the first graph neural network, and W_θ is a linear transformation matrix.
Next, in the E-step, the joint distribution p_φ between the fault types of the plurality of target fault data is fixed, and the variational distribution q_θ is updated to approximate the joint distribution p_φ. In some examples, the joint distribution p_φ (also referred to as the diagnostic model) is obtained, following the statistical relational learning approach, by modeling with a conditional random field conditioned on the set x_V of target features of the plurality of target fault data. The joint distribution p_φ is expressed as:

p_φ(y_V | x_V),

where φ is the parameter of the diagnostic model, which is obtained by optimizing the evidence lower bound of the log-likelihood function, expressed as:

log p_φ(y_L | x_V) ≥ E_{q_θ(y_U | x_V)}[log p_φ(y_L, y_U | x_V) − log q_θ(y_U | x_V)],

where y_V is the fault types of the plurality of target fault data, y_L is the fault types of the data of known fault types, y_U is the fault types of the data of unknown fault types, y_V = (y_L, y_U), U = V\L, and E is the expectation symbol.
Next, in the M-step, the variational distribution q_θ is fixed and the joint distribution p_φ is updated so as to maximize the pseudo-likelihood function, expressed as:

E_{q_θ(y_U | x_V)}[Σ_{n∈V} log p_φ(y_n | y_{V\n}, x_V)] ≈ E_{q_θ(y_U | x_V)}[Σ_{n∈V} log p_φ(y_n | y_{NB(n)}, x_V)],

where NB(n) is the neighbor set of the n-th data item of unknown fault type, y_{NB(n)} is the fault types of that neighbor set, and V\n denotes the set V minus the n-th data item. Here p_φ(y_n | y_{NB(n)}, x_V) is parameterized with the second graph neural network as:

p_φ(y_n | y_{NB(n)}, x_V) = Cat(y_n | softmax(W_φ h_φ,n)),

where n is the index of a data item of unknown fault type, h_φ,n is the feature representation of the n-th data item of unknown fault type obtained by training the second graph neural network with the set x_V of target features of the plurality of target fault data and the fault types y_{NB(n)} of the neighbor set as inputs, and W_φ is a linear transformation matrix.
Finally, the variational distribution q_θ and the joint distribution p_φ are updated alternately until convergence, and the first graph neural network corresponding to the variational distribution q_θ is taken as the target diagnosis model.
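The alternating E-step/M-step scheme above can be sketched with two simple linear classifiers standing in for the two graph neural networks. The toy chain graph, the clamping of known labels, the five EM rounds, and the use of scikit-learn are all illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy chain graph: 8 target fault data, 3 features, 2 fault types;
# nodes 0 and 7 have known fault types, the rest are unknown.
N = 8
X = rng.normal(size=(N, 3))
X[:4] += 1.5                       # give the two classes some separation
y_known = {0: 0, 7: 1}
nbrs = {n: [m for m in (n - 1, n + 1) if 0 <= m < N] for n in range(N)}

def clamp(q):                      # fix the known fault types in q_theta
    for n, y in y_known.items():
        q[n] = np.eye(2)[y]
    return q

# Pre-train q_theta on the data of known fault types.
L = list(y_known)
q_net = LogisticRegression().fit(X[L], [y_known[n] for n in L])
q = clamp(q_net.predict_proba(X))

for _ in range(5):                 # alternate until convergence
    # M-step: fix q_theta; p_phi learns to predict each node's type from its
    # features plus its neighbors' current soft labels (pseudo-likelihood view).
    nbr_label = np.stack([q[nbrs[n]].mean(axis=0) for n in range(N)])
    Xp = np.hstack([X, nbr_label])
    p_net = LogisticRegression().fit(Xp, q.argmax(axis=1))
    p = p_net.predict_proba(Xp)
    # E-step: fix p_phi; q_theta is retrained to approximate p_phi's
    # predictions, with known fault types clamped.
    target = p.argmax(axis=1)
    for n, y in y_known.items():
        target[n] = y
    q_net = LogisticRegression().fit(X, target)
    q = clamp(q_net.predict_proba(X))

prediction = q.argmax(axis=1)      # q_theta serves as the target diagnosis model
```

In the patent's setting, both classifiers would be graph neural networks and the updates would be gradient steps on the ELBO and the pseudo-likelihood; the loop structure, however, is the same.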
With the training method, the fault data, the features of the fault data, and the relationships between the fault data can be represented with a graph structure, and a diagnostic model based on the graph Markov neural network can be trained. In addition, for the case in which text-type features in the fault data of the transformer 10 take many values, the continuous bag-of-words model is used to convert the feature values into sequence vectors. In this case, the feature values in the fault data of the transformer 10 can be converted into sequence vectors quickly and accurately, improving the preprocessing efficiency; at the same time, both the dependency relationships between the fault types of the fault data and the feature representation of the fault data are modeled, so that the fault type can be predicted in combination with the dependency relationships between fault types. Thus, the accuracy of fault type prediction can be improved.
Fig. 5 is a flowchart illustrating a method of diagnosing transformer faults based on a graph markov neural network according to an example of the present disclosure.
In some examples, as shown in fig. 5, the diagnostic method may include acquiring fault data to be diagnosed (step S210). In step S210, the features of the fault data to be diagnosed may be the same as the initial features of the fault data for training described above; in this case, the fault data to be diagnosed may subsequently undergo preprocessing consistent with the training method described above and be input into the target diagnosis model for prediction of the fault type. Examples of the present disclosure are not limited thereto, and in other examples, the features in the fault data to be diagnosed may be consistent with the target features.
In some examples, as shown in fig. 5, the diagnosis method may include preprocessing fault data to be diagnosed and inputting the preprocessed fault data into the target diagnosis model to output a fault type to which the fault data to be diagnosed belongs (step S220). In step S220, the target diagnostic model may be obtained by training the diagnostic model based on the graph markov neural network using the fault data of the transformer 10, and the relevant description is referred to the relevant description of the training method of the diagnostic model based on the graph markov neural network.
In some examples, if the characteristics of the fault data to be diagnosed are the same as the initial characteristics of the fault data for training described above, the fault data to be diagnosed may be input to the target diagnostic model for prediction of the fault type via the preprocessing described above. For example, the fault data to be diagnosed may be preprocessed by selecting one or more of the above-mentioned missing value processing, preliminary deduplication, construction sequence vector processing, feature dimension reduction processing, or data normalization processing, which are matched with the preprocessing process of the diagnostic model, according to the preprocessing process of the diagnostic model. In other examples, if the feature in the fault data to be diagnosed is consistent with the target feature, since the feature in the fault data to be diagnosed is already the target feature for training the diagnostic model, a corresponding preprocessing, such as missing value processing, construction sequence vector processing, or data normalization processing, may be performed on the feature value of the fault data to be diagnosed.
The diagnosis method can quickly and accurately convert the characteristics in the fault data of the transformer 10 into the sequence vector, and simultaneously model the dependency relationship between the fault types of the fault data and the characteristic representation of the fault data, so that the dependency relationship between the fault types can be combined to predict the fault types. Thus, the accuracy of the failure type prediction can be improved.
Fig. 6 is a block diagram illustrating a diagnostic system 1 of transformer faults based on a graph markov neural network according to an example of the present disclosure.
In some examples, the diagnostic method of the present disclosure may be applied to the diagnostic system 1 for transformer faults based on a graph markov neural network. As shown in fig. 6, the diagnostic system 1 may include an acquisition module 100 and a prediction module 200.
In some examples, the acquisition module 100 may be used to acquire fault data to be diagnosed. In some examples, the characteristics of the fault data to be diagnosed may be the same as the initial characteristics of the fault data for training described above, in which case the fault data to be diagnosed subsequently may be input into the target diagnostic model for prediction of the fault type via preprocessing consistent with the training method of the diagnostic model based on the graph markov neural network described above. Examples of the present disclosure are not limited thereto and in other examples, features in fault data to be diagnosed may be consistent with target features.
In some examples, the prediction module 200 may be configured to pre-process the fault data to be diagnosed and then input the pre-processed fault data to the target diagnostic model to output the fault type to which the fault data to be diagnosed belongs. In some examples, the target diagnostic model may be obtained by training a diagnostic model based on a graph markov neural network using fault data of the transformer 10, and the relevant description is referred to above for a relevant description of a training method of a diagnostic model based on a graph markov neural network. In some examples, fault types may include high temperature superheat, medium low temperature superheat, high energy discharge, low energy discharge, discharge and superheat, and partial discharge.
In addition, in some examples, if the characteristics of the fault data to be diagnosed are the same as the initial characteristics of the fault data for training described above, the fault data to be diagnosed may be input to the target diagnostic model for prediction of the fault type via the preprocessing described above. For example, the fault data to be diagnosed may be preprocessed by selecting one or more of the above-mentioned missing value processing, preliminary deduplication, construction sequence vector processing, feature dimension reduction processing, or data normalization processing, which are matched with the preprocessing process of the diagnostic model, according to the preprocessing process of the diagnostic model. In other examples, if the feature in the fault data to be diagnosed is consistent with the target feature, since the feature in the fault data to be diagnosed is already the target feature for training the diagnostic model, a corresponding preprocessing, such as missing value processing, construction sequence vector processing, or data normalization processing, may be performed on the feature value of the fault data to be diagnosed.
The diagnosis system 1 can quickly and accurately convert the characteristics in the fault data of the transformer 10 into the sequence vector, and simultaneously model the dependency relationship between the fault types of the fault data and the characteristic representation of the fault data, so that the fault types can be predicted by combining the dependency relationship between the fault types. Thus, the accuracy of the failure type prediction can be improved.
While the disclosure has been described in detail in connection with the drawings and examples, it is to be understood that the foregoing description is not intended to limit the disclosure in any way. Modifications and variations of the present disclosure may be made as desired by those skilled in the art without departing from the true spirit and scope of the disclosure, and such modifications and variations fall within the scope of the disclosure.

Claims (10)

1. A transformer fault diagnosis system based on a graph Markov neural network is characterized in that,
comprising: an acquisition module configured to acquire fault data to be diagnosed; and a prediction module configured to preprocess the fault data to be diagnosed and input the preprocessed fault data to a target diagnosis model so as to output a fault type to which the fault data to be diagnosed belongs, wherein obtaining the target diagnosis model includes: obtaining a plurality of fault data from a plurality of transformers, each fault data comprising a plurality of initial features, the type of the initial features comprising text, the initial features comprising copper concentration in oil of the transformer, iron concentration in oil of the transformer, dissolved gas content in oil of the transformer, defect information of the transformer, equipment model of the transformer, manufacturer of the transformer, operational age of the transformer, information on bad weather, and insulation aging; preprocessing the plurality of fault data, said preprocessing comprising preliminary de-duplication and construction sequence vector processing, to obtain a plurality of target fault data comprising a plurality of target features, said preliminary de-duplication preserving at least one initial feature among a plurality of correlated initial features, and said construction sequence vector processing updating an intermediate feature whose type is text into a sequence vector with a continuous bag-of-words model and taking the updated intermediate feature as a second target feature to obtain a plurality of second fault data comprising a plurality of said second target features, said second fault data serving as said target fault data; and constructing a diagnostic model based on a graph Markov neural network and optimizing the diagnostic model with the target fault data to obtain the target diagnosis model.
2. The diagnostic system of claim 1, wherein:
the initial characteristics further include at least one of a temperature of oil of the transformer, a load of the transformer, and a number of sudden shorts of the transformer, wherein the dissolved gas includes hydrogen, methane, ethane, ethylene, and acetylene.
3. The diagnostic system of claim 1, wherein:
the type of the initial feature also includes a numerical value.
4. The diagnostic system of claim 1, wherein:
the plurality of fault data is obtained by initially collecting initial fault data and continuously updating the collected initial fault data.
5. The diagnostic system of claim 1, wherein:
the plurality of target fault data includes data of known fault types and data of unknown fault types to train the diagnostic model in a semi-supervised manner.
6. The diagnostic system of claim 1, wherein:
the preprocessing further comprises missing value processing, feature dimension reduction processing and data normalization processing, wherein the missing value processing is used for processing the plurality of fault data to obtain a plurality of first fault data comprising a plurality of first target features; the feature dimension reduction processing is to extract voting coefficients of each second target feature of the second fault data by training linear regression or logistic regression and to sort the importance so as to obtain second target features with importance greater than preset importance, and then to extract main features in the second target features with importance greater than preset importance by principal component analysis and/or factor analysis; the data normalization process is to normalize the second fault data using the mean and variance of the second target feature.
7. The diagnostic system of claim 1, wherein:
the continuous bag-of-words model is trained with a plurality of values of the intermediate feature whose type is text; or
is trained separately using the values of a single intermediate feature whose type is text.
8. The diagnostic system of claim 1, wherein:
the graph structure G is denoted as g= (V, E, x V ) V is a set of a plurality of the target fault data, x V And E is a set of relationships among the target fault data.
9. The diagnostic system of claim 1, wherein:
values of a plurality of intermediate features whose type is text are represented with one-hot encoding to obtain a plurality of one-hot encoded vectors, and each one-hot encoded vector is multiplied by a first weight matrix to obtain the sequence vector corresponding to the value of the intermediate feature, wherein the first weight matrix is obtained by training the continuous bag-of-words model with the plurality of one-hot encoded vectors.
10. The diagnostic system of claim 5, wherein:
the known fault type is determined based on the content of the dissolved gas and/or the analysis defect information.
CN202211486848.0A 2021-06-28 2021-06-28 Transformer fault diagnosis system based on graph Markov neural network Active CN115758899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211486848.0A CN115758899B (en) 2021-06-28 2021-06-28 Transformer fault diagnosis system based on graph Markov neural network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110719873.8A CN113343581B (en) 2021-06-28 2021-06-28 Transformer fault diagnosis method based on graph Markov neural network
CN202211486848.0A CN115758899B (en) 2021-06-28 2021-06-28 Transformer fault diagnosis system based on graph Markov neural network

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110719873.8A Division CN113343581B (en) 2021-06-28 2021-06-28 Transformer fault diagnosis method based on graph Markov neural network

Publications (2)

Publication Number Publication Date
CN115758899A CN115758899A (en) 2023-03-07
CN115758899B true CN115758899B (en) 2023-05-09

Family

ID=77479155

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110719873.8A Active CN113343581B (en) 2021-06-28 2021-06-28 Transformer fault diagnosis method based on graph Markov neural network
CN202211486848.0A Active CN115758899B (en) 2021-06-28 2021-06-28 Transformer fault diagnosis system based on graph Markov neural network
CN202211486842.3A Pending CN115935807A (en) 2021-06-28 2021-06-28 Diagnostic model training method based on graph Markov neural network

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110719873.8A Active CN113343581B (en) 2021-06-28 2021-06-28 Transformer fault diagnosis method based on graph Markov neural network

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202211486842.3A Pending CN115935807A (en) 2021-06-28 2021-06-28 Diagnostic model training method based on graph Markov neural network

Country Status (1)

Country Link
CN (3) CN113343581B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114152825B (en) * 2021-11-16 2023-11-14 国网北京市电力公司 Transformer fault diagnosis method and device and transformer fault diagnosis system
CN115204280A (en) * 2022-06-29 2022-10-18 昆明理工大学 Rolling bearing fault diagnosis method based on graph Markov attention network
CN116150604B (en) * 2023-02-08 2023-10-24 正泰电气股份有限公司 Transformer fault diagnosis method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003307316A (en) * 2002-04-15 2003-10-31 Toshiba Corp Heating cooker
CN103218662A (en) * 2013-04-16 2013-07-24 郑州航空工业管理学院 Transformer fault diagnosis method based on back propagation (BP) neural network
CN107063349A (en) * 2017-04-17 2017-08-18 云南电网有限责任公司电力科学研究院 A kind of method and device of Fault Diagnosis Method of Power Transformer
CN108268905A (en) * 2018-03-21 2018-07-10 广东电网有限责任公司电力科学研究院 A kind of Diagnosis Method of Transformer Faults and system based on support vector machines
CN109993756A (en) * 2019-04-09 2019-07-09 中康龙马(北京)医疗健康科技有限公司 A kind of general medical image cutting method based on graph model Yu continuous successive optimization
CN111737496A (en) * 2020-06-29 2020-10-02 东北电力大学 Power equipment fault knowledge map construction method

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102221651B (en) * 2011-03-11 2015-05-27 太原理工大学 Fault on-line diagnosis and early warning method of flameproof dry-type transformer for mine
CN103245861B (en) * 2013-05-03 2016-06-08 云南电力试验研究院(集团)有限公司电力研究院 A kind of transformer fault diagnosis method based on Bayesian network
CN105137328B (en) * 2015-07-24 2017-09-29 四川航天系统工程研究所 Analogous Integrated Electronic Circuits early stage soft fault diagnosis method and system based on HMM
CN105095918B (en) * 2015-09-07 2018-06-26 上海交通大学 A kind of multi-robot system method for diagnosing faults
CN108090558B (en) * 2018-01-03 2021-06-08 华南理工大学 Automatic filling method for missing value of time sequence based on long-term and short-term memory network
CN109800861A (en) * 2018-12-28 2019-05-24 上海联影智能医疗科技有限公司 A kind of equipment fault recognition methods, device, equipment and computer system
KR102097595B1 (en) * 2019-05-29 2020-05-26 한국기계연구원 Diagnosis method for wind generator
CN110426415A (en) * 2019-07-15 2019-11-08 武汉大学 Based on thermal fault detection method inside depth convolutional neural networks and the oil-immersed transformer of image segmentation
CN110689069A (en) * 2019-09-25 2020-01-14 贵州电网有限责任公司 Transformer fault type diagnosis method based on semi-supervised BP network
CN110542819B (en) * 2019-09-25 2022-03-22 贵州电网有限责任公司 Transformer fault type diagnosis method based on semi-supervised DBNC
CN112379325A (en) * 2019-11-25 2021-02-19 国家电网公司 Fault diagnosis method and system for intelligent electric meter
CN111340248A (en) * 2020-02-27 2020-06-26 中国电力科学研究院有限公司 Transformer fault diagnosis method and system based on intelligent integration algorithm
CN111694879B (en) * 2020-05-22 2023-10-31 北京科技大学 Multielement time sequence abnormal mode prediction method and data acquisition monitoring device
CN112415337B (en) * 2020-12-11 2022-05-13 国网福建省电力有限公司 Power distribution network fault diagnosis method based on dynamic set coverage
CN112990258A (en) * 2021-02-01 2021-06-18 山东建筑大学 Fault diagnosis method and system for water chilling unit


Also Published As

Publication number Publication date
CN115758899A (en) 2023-03-07
CN115935807A (en) 2023-04-07
CN113343581A (en) 2021-09-03
CN113343581B (en) 2022-11-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Transformer Fault Diagnosis System Based on Graph Markov Neural Network
Effective date of registration: 20231214
Granted publication date: 20230509
Pledgee: Bank of China Limited Jinan Huaiyin sub branch
Pledgor: Shandong Huake Information Technology Co.,Ltd.
Registration number: Y2023980071669