CN109490814B - Metering automation terminal fault diagnosis method based on deep learning and support vector data description - Google Patents
- Publication number
- CN109490814B (application CN201811046099.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G01R35/00—Testing or calibrating of apparatus covered by the other groups of this subclass
- G01R35/04—Testing or calibrating of instruments for measuring time integral of power or current
- G06F18/2411—Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06N3/045—Combinations of networks
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06N3/088—Non-supervised learning, e.g. competitive learning
Abstract
The invention discloses a metering automation terminal fault diagnosis method based on deep learning and support vector data description, and relates to the technical field of power metering fault diagnosis. In the method, features are extracted from the fault data collected by a metering automation terminal through a deep belief network (DBN) model, and fault diagnosis and classification are carried out with support vector data description (SVDD). The deep belief network model can start directly from low-level original signals and obtain a high-level feature representation through layer-by-layer greedy training, avoiding the manual work of feature extraction and selection; this effectively eliminates the complexity and uncertainty caused by traditional manual feature extraction and feature selection, and enhances the intelligence of the diagnosis process. The invention uses support vector data description to classify and identify samples, effectively improving the accuracy and efficiency of the multi-class classification problem in metering automation terminal fault diagnosis.
Description
Technical Field
The invention belongs to the technical field of power metering fault diagnosis, and particularly relates to a metering automation terminal fault diagnosis method based on deep learning and support vector data description.
Background
The main detection methods for current metering automation terminals include terminal acquisition detection (meter codes, three-phase voltage, three-phase current and three-phase power), communication protocol detection, abnormal event detection and the like. The related technology of traditional metering automation terminal fault diagnosis is relatively simple and requires a large amount of manual operation and data processing; fault diagnosis efficiency is low, and the accuracy, speed and reliability of the diagnosis are difficult to guarantee.
Deep learning is currently developing rapidly in the field of fault diagnosis, but some traditional methods have the following disadvantages:
1. Methods that use a single Support Vector Machine (SVM) for fault diagnosis handle the small-sample problem well, but cannot cope with the large fault-sample volumes and high fault-feature dimensionality of metering automation terminal data.
2. Methods that use a BP neural network to build an observer and establish, from a large amount of data, the nonlinear input-output mapping from fault data to fault causes in order to evaluate the state of the metering automation terminal. As a traditional shallow neural network, the BP network suffers from gradient attenuation, overfitting and local minima, which greatly degrade the fault diagnosis effect.
3. Methods that perform intelligent diagnosis with an Extreme Learning Machine (ELM). The ELM trains quickly but is unstable; as a shallow machine learning method its learning capability is limited, its accuracy is difficult to improve beyond a certain level, and it requires accurate and complete fault data samples.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a metering automation terminal fault diagnosis method based on deep learning and support vector data description.
The invention solves the technical problems through the following technical scheme: a metering automation terminal fault diagnosis method based on deep learning and support vector data description comprises the following steps:
step (1): collecting sample data;
collecting in batches the voltage data and current data of the metering automation terminal, the read-write data streams of the local communication module, the data streams of the remote communication module and the switching-value input/output state data, keeping the number of sampling points in each batch consistent; after normalization preprocessing, the collected data are divided into fault training samples and fault testing samples;
step (2): building a DBN model;
establishing a Deep Belief Network (DBN) model with multiple hidden layers, determining the number of nodes of an input layer of the DBN model according to the sample dimensions of the fault training sample and the fault testing sample in the step (1), and performing unsupervised training on the DBN model by adopting the fault training sample; determining the number of output layer nodes of the DBN model according to the fault type of the metering automation terminal, and obtaining a connection weight and a bias parameter of the DBN model by adopting an unsupervised layer-by-layer greedy training method; optimizing the connection weight to obtain reference characteristics of various fault types;
step (3): diagnosing faults;
establishing a Support Vector Data Description (SVDD) model for each fault type by using the reference features of the step (2), and performing weighted normalization on the bandwidth radius of each fault hypersphere, so as to determine the fault type of the metering automation terminal and realize fault diagnosis of the metering automation terminal.
Further, in the step (2), the training of the DBN model includes two parts, one part is to perform unsupervised training on a Restricted Boltzmann Machine (RBM) layer by layer, and the other part is to apply a back propagation algorithm to perform fine tuning on the DBN model, so that the network structure of the DBN model is optimized.
Further, the specific training step of the DBN model includes the following substeps:
step (2.1): taking a fault training sample as the input of the DBN model, inputting the given training sample to the visible-layer nodes of the first RBM, activating all hidden-layer nodes through the joint probability distribution function of the RBM, and in turn regenerating the visible-layer nodes from the excitation of the hidden-layer nodes; then computing the conditional distribution of the visible-layer data with the contrastive divergence algorithm to obtain the hidden-layer data, computing and reconstructing the visible-layer data from the conditional distribution of the hidden layer, and adjusting and updating the RBM model parameters;
step (2.2): taking the output of the first layer RBM hidden layer as the input of the visible layer of the second layer RBM until the state is stable;
step (2.3): repeating the step (2.2) until the last RBM is completed, completing the RBM parameters θ = (w_ij, a_i, b_j), where a_i is the bias of the ith visible-layer node, b_j is the bias of the jth hidden-layer node, and w_ij is the connection weight of the ith visible-layer node and the jth hidden-layer node;
step (2.4): after training of the last RBM hidden layer is completed, the fault type output by the last hidden layer of the DBN model is trained through a back propagation network; the error between the fault type predicted by training and the actual type of the fault training sample is propagated backwards layer by layer, the connection weights of every layer of the DBN model are optimized, and the original data sample is reconstructed with minimum error, thereby obtaining the essential features of the original metering automation terminal data samples as the reference features of the metering automation terminal fault types.
Further, in the step (2.1), the joint probability distribution function of the RBM is:
P(v, h | θ) = exp(−E(v, h | θ)) / Z(θ)
where Z(θ) = Σ_{v,h} exp(−E(v, h | θ)) is a normalization factor, h is a hidden-layer neuron and v is a visible-layer neuron.
Further, in the step (2.1), the contrastive divergence learning algorithm is:
Δw_ij = ε(⟨v_i h_j⟩_data − ⟨v_i h_j⟩_model)
Δa_i = ε(⟨v_i⟩_data − ⟨v_i⟩_model)
Δb_j = ε(⟨h_j⟩_data − ⟨h_j⟩_model)
Because ⟨·⟩_model is difficult to compute, the contrastive divergence algorithm is used to reduce the amount of computation, resulting in the improved learning rule:
Δw_ij = ε(⟨v_i h_j⟩_data − ⟨v_i h_j⟩_1)
Δa_i = ε(⟨v_i⟩_data − ⟨v_i⟩_1)
Δb_j = ε(⟨h_j⟩_data − ⟨h_j⟩_1)
where ⟨·⟩_1 is the expectation over the reconstructed sample obtained by one step of Gibbs sampling; ε is the learning rate, representing the step size of each parameter adjustment; h_j is a hidden-layer neuron and v_i is a visible-layer neuron.
Further, in the step (3), the specific steps of determining the fault type of the metering automation terminal include:
step (3.1): constructing a minimum hypersphere containing the fault target training samples in the kernel-mapped high-dimensional space; using the fault test sample data divided in the step (1), test data x falling outside the hypersphere are regarded as non-target classes, and test data falling inside the hypersphere or on its boundary are regarded as the fault target class;
suppose the sample set of reference features of the fault training samples is X = {x_1, x_2, ..., x_n}, x_i ∈ R^n, and establish the Lagrangian function:
L(r, a, ξ, α, β) = r^2 + C Σ_i ξ_i − Σ_i α_i [r^2 + ξ_i − ‖φ(x_i) − a‖^2] − Σ_i β_i ξ_i
where α_i and β_i are Lagrange multipliers, ξ_i (ξ_i ≥ 0) is a slack-variable factor, C represents a penalty factor, φ(x_i) maps the original space into a high-dimensional space through a nonlinear mapping function, a is the hypersphere center, and r is the hypersphere radius;
step (3.2): taking the partial derivatives of the Lagrangian function of the step (3.1) with respect to a, ξ_i and r and setting them to zero gives:
Σ_i α_i = 1,  a = Σ_i α_i φ(x_i),  β_i = C − α_i
By substituting these back, the optimal-hypersphere classification problem is converted into its dual form:
max_α Σ_i α_i K(x_i, x_i) − Σ_{i,j} α_i α_j K(x_i, x_j)
where K(x_i, x_j) = φ(x_i)·φ(x_j) is the kernel function mapping the inner products of the fault data into the kernel space, with the constraints 0 ≤ α_i ≤ C and Σ_i α_i = 1;
according to the KKT conditions, using a boundary support vector x_k satisfying the constraints, the hypersphere radius is determined as:
r^2 = K(x_k, x_k) − 2 Σ_i α_i K(x_i, x_k) + Σ_{i,j} α_i α_j K(x_i, x_j)
step (3.3): the support vectors of the SVDD are the training samples satisfying 0 < α_i ≤ C, and the hypersphere radius r is the distance from any boundary support vector to the center; if the distance from a test data point to the center of a certain hypersphere is less than or equal to r, the test point belongs to that fault data type, which achieves the classification of the metering automation terminal fault types.
Compared with the prior art, in the metering automation terminal fault diagnosis method based on deep learning and support vector data description provided by the invention, features are extracted from the fault data collected by the metering automation terminal through the DBN model, and fault diagnosis and classification are performed with SVDD. The DBN model can start directly from low-level original signals and obtain a high-level feature representation through layer-by-layer greedy training, avoiding the manual work of feature extraction and selection, effectively eliminating the complexity and uncertainty brought by traditional manual feature extraction and feature selection, and enhancing the intelligence of the diagnosis process.
The traditional SVM is a binary classifier; to handle the multi-class classification problem of fault separation it must be converted into a one-versus-rest or one-versus-one form, which leads to repeated use of training samples and lowers efficiency. By using SVDD to classify and identify samples instead, the invention effectively improves the accuracy and efficiency of the multi-class classification problem of metering automation terminal fault diagnosis.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only one embodiment of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a network structure of the DBN model of the present invention and its training process;
FIG. 2 is a flow chart of the SVDD algorithm for realizing the fault classification of the metering automation terminal.
Detailed Description
The technical solutions in the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a metering automation terminal fault diagnosis method based on deep learning and support vector data description, which comprises the following steps:
(1) The AC sampling module, local communication module, remote communication module and input-output module are used to collect in batches the voltage and current data of the metering automation terminal, the read-write data streams of the local communication module, the data streams of the remote communication module, and the switching-value input/output state data, with the number of sampling points in each batch kept consistent; after normalization preprocessing, the collected data are divided into fault training samples and fault testing samples;
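As a concrete illustration of this preprocessing step, the sketch below applies per-feature min-max normalization and splits the batch into training and testing samples. The 80/20 split ratio, the flat array layout and the function name `preprocess` are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def preprocess(batches, train_ratio=0.8, seed=0):
    """Normalize batched terminal measurements to [0, 1] per feature and
    split them into fault training and fault testing samples.

    `batches` is assumed to be an (n_samples, n_features) array whose columns
    hold voltage, current, data-stream and switching-value readings."""
    x = np.asarray(batches, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid divide-by-zero on constant columns
    x_norm = (x - lo) / span
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x_norm))       # shuffle before splitting
    cut = int(train_ratio * len(x_norm))
    return x_norm[idx[:cut]], x_norm[idx[cut:]]  # (train, test)
```

Scaling each feature independently keeps voltages, currents and switching states on a common numeric range, which the layer-by-layer RBM training below assumes.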
(2) establishing a Deep Belief Network (DBN) model with multiple hidden layers, determining the number of nodes of an input layer of the DBN model according to the sample dimensions of the fault training sample and the fault testing sample in the step (1), and performing unsupervised training on the DBN model by adopting the fault training sample; determining the number of output layer nodes of the DBN model according to the fault type of the metering automation terminal, and obtaining a connection weight and a bias parameter of the DBN model by adopting an unsupervised layer-by-layer greedy training method; and adjusting and optimizing the connection weight to obtain reference characteristics of various fault types, as shown in fig. 1.
The DBN is a typical deep learning method. It forms more abstract high-level representations by combining low-level features and discovers distributed feature representations of the data; the motivation is to build a model that simulates the neural connection structure of the human brain and to represent input data in a distributed way through multiple perceptron layers with nonlinear hidden units. The DBN is a multi-hidden-layer neural network composed of several stacked Restricted Boltzmann Machines (RBMs) that imitates how the human brain processes external signals, and its core is optimized by a layer-by-layer greedy learning algorithm. Compared with other traditional fault diagnosis methods, the DBN removes the dependence on extensive signal processing techniques and diagnostic experience, and can adaptively extract fault features and intelligently diagnose health states. The RBM consists of a visible layer and a hidden layer whose neurons are fully and bidirectionally connected. In an RBM, the weight w between any two connected neurons represents their connection strength, and each neuron carries a bias: a_i for the ith visible-layer neuron and b_j for the jth hidden-layer neuron. The energy of an RBM can thus be represented by the following function:
E(v, h | θ) = −Σ_i a_i v_i − Σ_j b_j h_j − Σ_i Σ_j v_i w_ij h_j
Since the state distribution of the RBM follows the Boltzmann distribution, the joint probability distribution of any configuration of the visible and hidden layers is:
P(v, h | θ) = exp(−E(v, h | θ)) / Z(θ)
where Z(θ) = Σ_{v,h} exp(−E(v, h | θ)) is the normalization factor, also called the partition function; h denotes the hidden-layer neurons and v the visible-layer neurons.
In an RBM, given the visible-layer state, the probability that hidden-layer neuron h_j is activated is:
P(h_j = 1 | v) = σ(b_j + Σ_i w_ij v_i)
Because the connections are bidirectional, the visible-layer neurons can likewise be activated by the hidden-layer neurons:
P(v_i = 1 | h) = σ(a_i + Σ_j w_ij h_j)
where σ is the Sigmoid function, σ(x) = 1/(1 + e^(−x)).
Neurons within the same layer are conditionally independent of each other, so the probability densities also factorize, giving:
P(h | v) = Π_j P(h_j | v),  P(v | h) = Π_i P(v_i | h)
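The factorized conditional activation probabilities above can be sketched directly; the toy sizes (4 visible and 3 hidden units), the small random weight initialization and the zero biases are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A toy RBM with 4 visible and 3 hidden units; weight matrix w (4x3),
# visible biases a, hidden biases b, following theta = (w_ij, a_i, b_j).
rng = np.random.default_rng(0)
w = 0.01 * rng.standard_normal((4, 3))
a = np.zeros(4)
b = np.zeros(3)

def p_h_given_v(v):
    # P(h_j = 1 | v) = sigma(b_j + sum_i w_ij v_i), factorized over j
    return sigmoid(b + v @ w)

def p_v_given_h(h):
    # P(v_i = 1 | h) = sigma(a_i + sum_j w_ij h_j), factorized over i
    return sigmoid(a + h @ w.T)
```

Because each layer factorizes given the other, both probability vectors are computed in a single matrix-vector product, which is what makes block Gibbs sampling in the RBM cheap.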
the training of the DBN model comprises two parts, wherein one part is to carry out unsupervised training on a Restricted Boltzmann Machine (RBM) layer by layer, and the other part is to apply a back propagation algorithm to carry out fine adjustment on the DBN model so as to optimize the network structure of the DBN model; the specific training step comprises the following substeps:
(2.1) taking a fault training sample as the input of a DBN model, inputting a given training sample to a first layer RBM visible layer node, activating all nodes of a hidden layer by using a joint probability distribution function of RBM, and simultaneously, obtaining the visible layer node again by using the excitation of the hidden layer node; and then, calculating the conditional distribution of the visible layer data by using a contrast divergence algorithm to further obtain the hidden layer data, calculating the visible layer data by using the conditional distribution data of the hidden layer, reconstructing the visible layer data, and adjusting and updating the parameters of the RBM model.
The RBM parameters are θ = (w_ij, a_i, b_j), and the contrastive divergence learning algorithm is:
Δw_ij = ε(⟨v_i h_j⟩_data − ⟨v_i h_j⟩_model)
Δa_i = ε(⟨v_i⟩_data − ⟨v_i⟩_model)
Δb_j = ε(⟨h_j⟩_data − ⟨h_j⟩_model)
where Δw_ij represents the update difference of the connection weight between the ith visible-layer node and the jth hidden-layer node; Δa_i and Δb_j respectively represent the update differences of the bias parameters of the ith visible-layer node and the jth hidden-layer node; ⟨·⟩_data is the expectation over the distribution of the training data, and ⟨·⟩_model is the expectation under the distribution defined by the reconstructed RBM model. Because ⟨·⟩_model is difficult to compute, the contrastive divergence algorithm is used to reduce the amount of computation, resulting in the improved learning rule:
Δw_ij = ε(⟨v_i h_j⟩_data − ⟨v_i h_j⟩_1)
Δa_i = ε(⟨v_i⟩_data − ⟨v_i⟩_1)
Δb_j = ε(⟨h_j⟩_data − ⟨h_j⟩_1)
where ⟨·⟩_1 is the expectation over the reconstructed sample obtained by one step of Gibbs sampling; ε is the learning rate, representing the step size of each parameter adjustment; h_j is a hidden-layer neuron and v_i is a visible-layer neuron.
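A minimal sketch of one CD-1 parameter update implementing the improved learning rule above for a binary RBM. Using hidden probabilities rather than sampled binary states in the reconstruction phase is a common smoothing choice that is an assumption here, not something the patent prescribes.

```python
import numpy as np

def cd1_update(v0, w, a, b, eps=0.1, rng=None):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    Implements dw_ij = eps*(<v_i h_j>_data - <v_i h_j>_1) and the matching
    bias updates, with <.>_1 taken from a single Gibbs sampling step."""
    rng = rng or np.random.default_rng(0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    ph0 = sigmoid(b + v0 @ w)                  # P(h|v0): data phase
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample hidden states
    pv1 = sigmoid(a + h0 @ w.T)                # reconstruct visible layer
    ph1 = sigmoid(b + pv1 @ w)                 # hidden probabilities after 1 step
    w += eps * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += eps * (v0 - pv1)
    b += eps * (ph0 - ph1)
    return w, a, b
```

One Gibbs step replaces the intractable model expectation, which is exactly the computational saving the improved learning rule describes.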
(2.2) The output of the first RBM's hidden layer is taken as the visible-layer input of the second RBM and trained until a stable state is reached.
(2.3) Step (2.2) is repeated until the last RBM is trained, completing the RBM parameters θ = (w_ij, a_i, b_j), where a_i is the bias of the ith visible-layer node, b_j is the bias of the jth hidden-layer node, and w_ij is the connection weight of the ith visible-layer node and the jth hidden-layer node.
(2.4) After training of the last RBM hidden layer is completed, the fault type output by the last hidden layer of the DBN model is trained through a back propagation network: the error between the fault type predicted by training and the actual type of the fault training sample is propagated backwards layer by layer, the connection weights of every layer of the DBN model are optimized, and the original data sample is reconstructed with minimum error, thereby obtaining the essential features of the original metering automation terminal data samples, which serve as the reference features of the metering automation terminal fault types.
(3) A Support Vector Data Description (SVDD) model is established for each fault type using the reference features of step (2), and weighted normalization is performed on the bandwidth radius of each fault hypersphere, so as to determine the fault type of the metering automation terminal and realize fault diagnosis of the metering automation terminal.
As shown in fig. 2, the specific steps of determining the fault type of the metering automation terminal include:
(3.1) A minimum hypersphere containing the fault target training samples is constructed in the kernel-mapped high-dimensional space; using the fault test sample data divided in step (1), test data x falling outside the hypersphere are regarded as non-target classes, and test data falling inside the hypersphere or on its boundary are regarded as the fault target class. The hypersphere is the classifier, and the vectors lying on the hypersphere are the support vectors; in fault diagnosis, a hypersphere is trained for each fault type, and the resulting hyperspheres form a fault pattern library used to identify faults.
Suppose the sample set of reference features of the fault training samples is X = {x_1, x_2, ..., x_n}, x_i ∈ R^n, and establish the Lagrangian function:
L(r, a, ξ, α, β) = r^2 + C Σ_i ξ_i − Σ_i α_i [r^2 + ξ_i − ‖φ(x_i) − a‖^2] − Σ_i β_i ξ_i
where α_i and β_i are Lagrange multipliers, ξ_i (ξ_i ≥ 0) is a slack-variable factor, C represents a penalty factor, φ(x_i) maps the original space into a high-dimensional space through a nonlinear mapping function, a is the hypersphere center, and r is the hypersphere radius.
(3.2) Taking the partial derivatives of the Lagrangian function of step (3.1) with respect to a, ξ_i and r and setting them to zero gives:
Σ_i α_i = 1,  a = Σ_i α_i φ(x_i),  β_i = C − α_i
By substituting these back, the optimal-hypersphere classification problem is converted into its dual form:
max_α Σ_i α_i K(x_i, x_i) − Σ_{i,j} α_i α_j K(x_i, x_j)
where K(x_i, x_j) = φ(x_i)·φ(x_j) is the kernel function mapping the inner products of the fault data into the kernel space, with the constraints 0 ≤ α_i ≤ C and Σ_i α_i = 1.
According to the KKT conditions, using a boundary support vector x_k satisfying the constraints, the hypersphere radius is determined as:
r^2 = K(x_k, x_k) − 2 Σ_i α_i K(x_i, x_k) + Σ_{i,j} α_i α_j K(x_i, x_j)
(3.3) The support vectors of the SVDD are the training samples satisfying 0 < α_i ≤ C, and the hypersphere radius r is the distance from any boundary support vector to the center; if the distance from a test data point to the center of a certain hypersphere is less than or equal to r, the test point belongs to that fault data type, achieving the classification of metering automation terminal fault types. The fault diagnosis method can automatically determine whether the metering automation terminal is a load control terminal, a special transformer terminal or a concentrator; it improves the accuracy, effectiveness and real-time performance of metering automation terminal fault diagnosis, diagnoses and locates faults quickly and accurately, further reduces manual intervention, and raises the automation and intelligence level of fault diagnosis.
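The kernel-distance test of step (3.3) can be sketched as follows: the squared distance from a test point to each hypersphere centre is computed through the kernel expansion, and the point is assigned to a fault type whose radius covers it. The Gaussian kernel, the gamma value, and the assumption that the alpha coefficients have already been obtained by solving the SVDD dual are illustrative choices, not prescriptions of the patent.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def svdd_distance2(z, sv, alpha, gamma=1.0):
    """Squared kernel distance from test point z to the hypersphere centre
    a = sum_i alpha_i phi(x_i):
    d^2 = K(z,z) - 2 sum_i alpha_i K(x_i,z) + sum_ij alpha_i alpha_j K(x_i,x_j)."""
    kzz = rbf(z, z, gamma)
    kz = np.array([rbf(x, z, gamma) for x in sv])
    kk = np.array([[rbf(xi, xj, gamma) for xj in sv] for xi in sv])
    return kzz - 2 * alpha @ kz + alpha @ kk @ alpha

def classify(z, spheres, gamma=1.0):
    """Assign z to the fault type whose hypersphere contains it (d <= r);
    `spheres` maps a fault name to (support_vectors, alphas, radius)."""
    best, best_margin = "unknown", 0.0
    for name, (sv, alpha, r) in spheres.items():
        d = np.sqrt(svdd_distance2(z, sv, alpha, gamma))
        if d <= r and r - d >= best_margin:   # prefer the deepest containment
            best, best_margin = name, r - d
    return best
```

Returning "unknown" when no hypersphere contains the point corresponds to treating it as a non-target class, mirroring the outside-the-hypersphere case of step (3.1).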
The above disclosure is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or modifications within the technical scope of the present invention, and shall be covered by the scope of the present invention.
Claims (6)
1. A metering automation terminal fault diagnosis method based on deep learning and support vector data description is characterized by comprising the following steps:
step (1): collecting sample data;
collecting voltage data and current data of the metering automation terminal in batches, reading and writing data streams of a local communication module, data streams of a remote communication module and switching value input and output state data, wherein the number of sampling points of each batch is kept consistent; after normalization preprocessing is carried out on the acquired data, the acquired data are divided into fault training samples and fault testing samples;
step (2): building a DBN model;
establishing a multi-hidden-layer DBN model, determining the number of nodes of an input layer of the DBN model according to the sample dimensions of the fault training sample and the fault testing sample in the step (1), and performing unsupervised training on the DBN model by adopting the fault training sample; determining the number of output layer nodes of the DBN model according to the fault type of the metering automation terminal, and obtaining a connection weight and a bias parameter of the DBN model by adopting an unsupervised layer-by-layer greedy training method; optimizing the connection weight to obtain reference characteristics of various fault types;
step (3): diagnosing faults;
and (3) establishing the bandwidth of each fault type SVDD model by using the reference characteristics in the step (2), and performing weighted normalization processing on the bandwidth radius of each fault hypersphere, so as to judge the fault type of the metering automation terminal and realize fault diagnosis of the metering automation terminal.
2. The method as claimed in claim 1, wherein in the step (2), the training of the DBN model includes two parts, one part is to perform unsupervised training of the RBM layer by layer, and the other part is to perform fine tuning of the DBN model by using a back propagation algorithm to optimize the network structure of the DBN model.
3. The metrology automation terminal fault diagnosis method of claim 2 wherein the specific training step of the DBN model comprises the substeps of:
step (2.1): taking a fault training sample as the input of a DBN model, inputting a given training sample to a first layer RBM visible layer node, activating all nodes of a hidden layer by using a joint probability distribution function of the RBM, and simultaneously, regaining the visible layer node by using the excitation of the hidden layer node; then, calculating the conditional distribution of the visible layer data by using a contrast divergence algorithm to obtain hidden layer data, calculating the visible layer data by using the conditional distribution data of the hidden layer, reconstructing the visible layer data, and adjusting and updating RBM model parameters;
step (2.2): taking the output of the first-layer RBM's hidden layer as the visible-layer input of the second-layer RBM, and training until its state is stable;
step (2.3): repeating step (2.2) until the last-layer RBM is completed, obtaining the RBM parameters θ = (w_ij, a_i, b_j), where a_i is the bias of the i-th visible-layer node, b_j is the bias of the j-th hidden-layer node, and w_ij is the connection weight between the i-th visible-layer node and the j-th hidden-layer node;
step (2.4): after training of the last RBM hidden layer is completed, the fault type output by the last hidden layer of the DBN model is trained through a back-propagation network; the error between the predicted fault type and the actual type of the fault training sample is propagated backwards layer by layer to optimize the connection weights of every layer of the DBN model, so that the original data sample is reconstructed with minimum error and the essential features of the original metering automation terminal data samples are obtained as the reference features of the metering automation terminal fault types.
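Steps (2.1) to (2.3) amount to greedy layer-wise pretraining: each RBM is trained by contrastive divergence on the previous layer's hidden activations, which then become the next RBM's visible input. A minimal numpy sketch under the assumption of Bernoulli units; `pretrain_dbn` and its parameters are illustrative names, not the patent's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_dbn(X, layer_sizes, epochs=5, eps=0.05, seed=0):
    """Greedy layer-wise pretraining of stacked Bernoulli RBMs.

    X : (n_samples, n_visible) training data in [0, 1]
    layer_sizes : hidden-layer sizes, one per RBM
    Returns a list of (W, a, b) parameter triples, one per layer."""
    rng = np.random.default_rng(seed)
    params, data = [], X
    n_vis = X.shape[1]
    for n_hid in layer_sizes:
        W = 0.01 * rng.standard_normal((n_vis, n_hid))
        a, b = np.zeros(n_vis), np.zeros(n_hid)
        for _ in range(epochs):
            for v0 in data:
                ph0 = sigmoid(v0 @ W + b)                 # hidden given data
                h0 = (rng.random(n_hid) < ph0).astype(float)
                pv1 = sigmoid(h0 @ W.T + a)               # reconstruct visible
                ph1 = sigmoid(pv1 @ W + b)
                W += eps * (np.outer(v0, ph0) - np.outer(pv1, ph1))
                a += eps * (v0 - pv1)
                b += eps * (ph0 - ph1)
        params.append((W, a, b))
        data = sigmoid(data @ W + b)   # step (2.2): hidden output feeds next RBM
        n_vis = n_hid
    return params
```

The final `data` array holds the top-layer activations that the back-propagation fine-tuning of step (2.4) would start from.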
5. The metering automation terminal fault diagnosis method of claim 3, wherein in step (2.1) the contrastive divergence learning algorithm is:
Δw_ij = ε(&lt;v_i h_j&gt;_data − &lt;v_i h_j&gt;_1)
Δa_i = ε(&lt;v_i&gt;_data − &lt;v_i&gt;_1)
Δb_j = ε(&lt;h_j&gt;_data − &lt;h_j&gt;_1)
wherein Δw_ij is the update to the connection weight between the i-th visible-layer node and the j-th hidden-layer node; Δa_i and Δb_j are the updates to the bias parameters of the i-th visible-layer node and the j-th hidden-layer node, respectively; &lt;·&gt;_data is the expectation under the training-data distribution and &lt;·&gt;_1 is the expectation under the reconstructed sample obtained by one step of Gibbs sampling; ε is the learning rate, i.e. the step length of each parameter adjustment; h_j is a hidden-layer neuron and v_i is a visible-layer neuron.
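The three update equations above correspond to one CD-1 step for a Bernoulli RBM. A minimal numpy sketch; the function and parameter names are illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, eps=0.1):
    """One CD-1 update for a Bernoulli RBM.

    v0 : (n_visible,) data vector in [0, 1]
    W  : (n_visible, n_hidden) connection weights w_ij
    a, b : visible- and hidden-layer biases; eps : learning rate.
    Returns the updated (W, a, b)."""
    # Positive phase: hidden activations driven by the data, <v h>_data
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # One Gibbs step: reconstruct visible, then hidden again, <v h>_1
    pv1 = sigmoid(h0 @ W.T + a)
    ph1 = sigmoid(pv1 @ W + b)
    # Parameter updates Δw_ij, Δa_i, Δb_j as in the equations above
    W = W + eps * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a = a + eps * (v0 - pv1)
    b = b + eps * (ph0 - ph1)
    return W, a, b
```

Using mean activations rather than a longer Gibbs chain is the usual CD-1 shortcut; the expectations in the equations are approximated by a single sample.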
6. The method for diagnosing faults of a metering automation terminal as claimed in claim 1, wherein in the step (3), the specific step of judging the fault type of the metering automation terminal comprises:
step (3.1): constructing the minimal hypersphere containing the fault target training samples in the kernel-mapped high-dimensional space; using the fault test sample data divided in step (1), test data x falling outside the hypersphere are regarded as the non-target class, and test data falling inside the hypersphere or on its boundary as the fault target class;
suppose the reference-feature sample set of the fault training samples is X = {x_1, x_2, ..., x_n}, x_i ∈ R^n; the Lagrangian function is established as:
L(r, a, α, ξ) = r^2 + C Σ_i ξ_i − Σ_i α_i [r^2 + ξ_i − ||φ(x_i) − a||^2] − Σ_i β_i ξ_i
in the formula, α_i and β_i are Lagrange multipliers, ξ_i (ξ_i ≥ 0) are slack variables, C is the penalty factor, φ(x_i) maps the original space to the high-dimensional space through a nonlinear mapping function, r is the radius of the hypersphere, and a is its center;
step (3.2): setting the partial derivatives of the Lagrangian function of step (3.1) with respect to a, ξ_i and r to zero yields:
Σ_i α_i = 1,  a = Σ_i α_i φ(x_i),  C − α_i − β_i = 0
substituting these results, the optimal hypersphere classification problem is converted into its dual form:
max_α  Σ_i α_i K(x_i, x_i) − Σ_{i,j} α_i α_j K(x_i, x_j)
with the constraints 0 ≤ α_i ≤ C and Σ_i α_i = 1, where the kernel function K(x_i, x_j) = &lt;φ(x_i), φ(x_j)&gt; maps the inner product of the fault data into the kernel-function space;
according to the KKT conditions, using a boundary support vector x_k satisfying the constraints, the hypersphere radius is determined as:
r^2 = K(x_k, x_k) − 2 Σ_i α_i K(x_i, x_k) + Σ_{i,j} α_i α_j K(x_i, x_j)
step (3.3): determining the support vectors of the support vector data description (SVDD), i.e. the data points satisfying 0 &lt; α_i ≤ C; the hypersphere radius is the distance from any boundary support vector to the center. If the distance from a test data point to the center of a given hypersphere is less than or equal to r, the test data point belongs to that fault data type, thereby classifying the fault types of the metering automation terminal.
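At test time, steps (3.1) to (3.3) reduce to comparing the kernel distance from a test point to the sphere center against the radius computed from a boundary support vector. A small numpy sketch assuming a Gaussian kernel; the kernel choice and helper names are assumptions, not specified by the claims:

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian kernel K(x, y) (an assumed kernel choice)."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def _center_term(X_sv, alpha, sigma):
    # Σ_{i,j} α_i α_j K(x_i, x_j), the constant center self-term
    return sum(ai * aj * rbf(xi, xj, sigma)
               for ai, xi in zip(alpha, X_sv)
               for aj, xj in zip(alpha, X_sv))

def svdd_dist2(X_sv, alpha, z, sigma=1.0):
    """Squared kernel distance from point z to the sphere center a."""
    cross = sum(ai * rbf(xi, z, sigma) for ai, xi in zip(alpha, X_sv))
    return rbf(z, z, sigma) - 2 * cross + _center_term(X_sv, alpha, sigma)

def svdd_radius2(X_sv, alpha, x_k, sigma=1.0):
    """Squared radius r^2, evaluated at a boundary support vector x_k."""
    return svdd_dist2(X_sv, alpha, x_k, sigma)

def is_target(X_sv, alpha, z, x_k, sigma=1.0):
    """Step (3.3): z is a fault target iff its distance to the center
    does not exceed the radius r."""
    return svdd_dist2(X_sv, alpha, z, sigma) <= svdd_radius2(X_sv, alpha, x_k, sigma)
```

Both the distance and the radius use the same expansion K(z, z) − 2 Σ_i α_i K(x_i, z) + Σ_{i,j} α_i α_j K(x_i, x_j), matching the radius formula of step (3.2); only the evaluation point differs.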
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811046099.3A CN109490814B (en) | 2018-09-07 | 2018-09-07 | Metering automation terminal fault diagnosis method based on deep learning and support vector data description |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109490814A CN109490814A (en) | 2019-03-19 |
CN109490814B true CN109490814B (en) | 2021-02-26 |
Family
ID=65690661
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811046099.3A Active CN109490814B (en) | 2018-09-07 | 2018-09-07 | Metering automation terminal fault diagnosis method based on deep learning and support vector data description |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109490814B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110220725B (en) * | 2019-05-30 | 2021-03-23 | 河海大学 | Subway wheel health state prediction method based on deep learning and BP integration |
CN110222914A (en) * | 2019-07-02 | 2019-09-10 | 国家电网有限公司 | A kind of concentrator that accuracy rate is high operation prediction technique |
CN110568082A (en) * | 2019-09-02 | 2019-12-13 | 北京理工大学 | cable wire breakage distinguishing method based on acoustic emission signals |
CN110991121B (en) * | 2019-11-19 | 2023-12-29 | 西安理工大学 | CDBN-SVR-based soft measurement method for deformation of air preheater rotor |
CN110879377B (en) * | 2019-11-22 | 2022-05-10 | 国网新疆电力有限公司电力科学研究院 | Metering device fault tracing method based on deep belief network |
CN111753889B (en) * | 2020-06-11 | 2022-04-12 | 浙江浙能技术研究院有限公司 | Induced draft fan fault identification method based on CNN-SVDD |
CN112067053A (en) * | 2020-09-07 | 2020-12-11 | 北京理工大学 | Multi-strategy joint fault diagnosis method for minority class identification |
CN112184037B (en) * | 2020-09-30 | 2022-11-11 | 华中科技大学 | Multi-modal process fault detection method based on weighted SVDD |
CN113205506B (en) * | 2021-05-17 | 2022-12-27 | 上海交通大学 | Three-dimensional reconstruction method for full-space information of power equipment |
CN113341347B (en) * | 2021-06-02 | 2022-05-03 | 云南大学 | Dynamic fault detection method for distribution transformer based on AOELM |
CN113486950B (en) * | 2021-07-05 | 2023-06-16 | 华能国际电力股份有限公司上安电厂 | Intelligent pipe network water leakage detection method and system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103489004A (en) * | 2013-09-30 | 2014-01-01 | 华南理工大学 | Method for achieving large category image identification of deep study network |
CN104268627A (en) * | 2014-09-10 | 2015-01-07 | 天津大学 | Short-term wind speed forecasting method based on deep neural network transfer model |
CN104616033A (en) * | 2015-02-13 | 2015-05-13 | 重庆大学 | Fault diagnosis method for rolling bearing based on deep learning and SVM (Support Vector Machine) |
CN106980873A (en) * | 2017-03-09 | 2017-07-25 | 南京理工大学 | Fancy carp screening technique and device based on deep learning |
CN107463937A (en) * | 2017-06-20 | 2017-12-12 | 大连交通大学 | A kind of tomato pest and disease damage automatic testing method based on transfer learning |
US9875237B2 * | 2013-03-14 | 2018-01-23 | Microsoft Technology Licensing, Llc | Using human perception in building language understanding models
CN108010029A (en) * | 2017-12-27 | 2018-05-08 | 江南大学 | Fabric defect detection method based on deep learning and support vector data description |
US10063582B1 (en) * | 2017-05-31 | 2018-08-28 | Symantec Corporation | Securing compromised network devices in a network |
Non-Patent Citations (1)
Title |
---|
Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data; Feng Jia et al.; Mechanical Systems and Signal Processing; 2015-12-31; full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109490814B (en) | Metering automation terminal fault diagnosis method based on deep learning and support vector data description | |
CN113496262B (en) | Data-driven active power distribution network abnormal state sensing method and system | |
Liao et al. | Fault diagnosis of power transformers using graph convolutional network | |
Yoon et al. | Semi-supervised learning with deep generative models for asset failure prediction | |
CN111860982A (en) | Wind power plant short-term wind power prediction method based on VMD-FCM-GRU | |
CN110580543A (en) | Power load prediction method and system based on deep belief network | |
CN110363230B (en) | Stacking integrated sewage treatment fault diagnosis method based on weighted base classifier | |
CN115081316A (en) | DC/DC converter fault diagnosis method and system based on improved sparrow search algorithm | |
CN112633493A (en) | Fault diagnosis method and system for industrial equipment data | |
CN110852365B (en) | ZPW-2000A type non-insulated rail circuit fault diagnosis method | |
CN116679211B (en) | Lithium battery health state prediction method | |
CN105606914A (en) | IWO-ELM-based Aviation power converter fault diagnosis method | |
CN110851654A (en) | Industrial equipment fault detection and classification method based on tensor data dimension reduction | |
CN114912666A (en) | Short-time passenger flow volume prediction method based on CEEMDAN algorithm and attention mechanism | |
CN112418476A (en) | Ultra-short-term power load prediction method | |
CN111047078A (en) | Traffic characteristic prediction method, system and storage medium | |
CN111506868B (en) | Ultra-short-term wind speed prediction method based on HHT weight optimization | |
CN115114128A (en) | Satellite health state evaluation system and evaluation method | |
CN116415505A (en) | System fault diagnosis and state prediction method based on SBR-DBN model | |
CN116227716A (en) | Multi-factor energy demand prediction method and system based on Stacking | |
Su et al. | Real-time hierarchical risk assessment for UAVs based on recurrent fusion autoencoder and dynamic FCE: A hybrid framework | |
CN118396409A (en) | Short-term wind speed prediction method for wind farm | |
Zhang et al. | Recurrent neural network model with self-attention mechanism for fault detection and diagnosis | |
Qin et al. | Remaining useful life prediction using temporal deep degradation network for complex machinery with attention-based feature extraction | |
CN113033898A (en) | Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||