CN110146812B - Motor fault diagnosis method based on feature node incremental width learning - Google Patents

Motor fault diagnosis method based on feature node incremental width learning

Info

Publication number
CN110146812B
CN110146812B (application CN201910401213.8A)
Authority
CN
China
Legal status: Active
Application number
CN201910401213.8A
Other languages
Chinese (zh)
Other versions
CN110146812A (en)
Inventor
江赛标
李嘉
杜晓标
Current Assignee
Zhuhai College of Jilin University
Original Assignee
Zhuhai College of Jilin University
Priority date
Filing date
Publication date
Application filed by Zhuhai College of Jilin University filed Critical Zhuhai College of Jilin University
Priority to CN201910401213.8A priority Critical patent/CN110146812B/en
Publication of CN110146812A publication Critical patent/CN110146812A/en
Application granted granted Critical
Publication of CN110146812B publication Critical patent/CN110146812B/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 31/00 Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R 31/34 Testing dynamo-electric machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses a motor fault diagnosis method based on feature-node incremental width learning. The feature-node incremental width learning (IBL) structure is simple, and the network can be retrained efficiently. The invention combines feature extraction (particle swarm optimization-variational mode decomposition and time-domain statistical features), feature-node incremental width learning and non-negative matrix factorization into an intelligent diagnosis method for three-phase motors. Experimental results show that the method outperforms other algorithms in diagnosing three-phase motor faults. In addition, the IBL model simplified by non-negative matrix factorization (NMF) has a smaller error, and the system is more stable.

Description

Motor fault diagnosis method based on feature node incremental width learning
Technical Field
The invention relates to the field of motor fault diagnosis, in particular to a three-phase induction motor fault diagnosis method based on feature node incremental width learning.
Background
Three-phase induction motors (TPIM) provide the main driving force in our daily life. Because of their low cost, small size, ruggedness and low maintenance, TPIMs have attracted more and more research attention. While TPIMs are reliable, they are also subject to adverse conditions that can lead to failure and cause serious accidents, so their operating status must be monitored before a serious accident occurs. The literature indicates that induction motors suffer from winding imbalance, stator or rotor imbalance, broken rotor bars, eccentricity and bearing defects.
With the development of machine learning, its application to motor fault diagnosis is increasing. Deep belief networks (DBNs), extreme learning machines (ELMs) and convolutional neural networks (CNNs) are widely used in fault diagnosis of DC and AC motors. Although deep learning networks are very powerful, they tend to take a long time to train because of the large number of hyper-parameters and the complex structures involved. In addition, the deep structure is theoretically difficult to analyze precisely because of its complexity. Most work adjusts parameters or adds more layers to improve accuracy, which demands ever more powerful computing resources. To improve training performance, researchers proposed the width learning method. Unlike the above networks, the width learning structure has only two layers: an input layer containing the mapped features and the enhancement nodes, and an output layer. Although the structure is simple, performance can be improved by adding feature nodes. It can therefore be applied to induction motor diagnosis, improving both the training speed and the accuracy of the diagnosis.
The fast Fourier transform (FFT) is not suitable for non-stationary signals; the short-time Fourier transform (STFT) has an inherent trade-off between time and frequency resolution; the wavelet transform (WT) can cause energy leakage. More recently, Dragomiretskiy et al. proposed the variational mode decomposition (VMD) method, which assumes that each extracted mode has a limited bandwidth and is compact around a matching center frequency; a sparsity prior selects the bandwidth of each sub-mode in the spectral domain. In practice, however, the decomposition capability of VMD depends to a large extent on its intrinsic parameter settings: different values of the penalty α and of the number of sub-components K yield different decomposition performance. The parameters α and K therefore need to be optimized.
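Since the patent leaves the optimizer details unspecified, a minimal particle swarm sketch for tuning (α, K) might look as follows. The quadratic surrogate fitness, the search bounds and the PSO coefficients are all illustrative assumptions; a real fitness would run VMD on the signal and score the resulting modes (for example by minimum envelope entropy), which is not implemented here.

```python
import numpy as np

def pso_minimize(fitness, bounds, n_particles=20, n_iter=50, seed=0):
    """Minimal global-best particle swarm optimizer.

    `bounds` stands in for the VMD penalty alpha and mode count K.
    The inertia/acceleration coefficients are common textbook
    defaults, not values taken from the patent.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    g_f = pbest_f.min()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # keep particles inside bounds
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if f.min() < g_f:
            g, g_f = x[f.argmin()].copy(), f.min()
    return g, g_f

# Toy surrogate with optimum at alpha = 2000, K = 4 over typical
# VMD search ranges; a real run would substitute the VMD fitness.
best, cost = pso_minimize(lambda p: (p[0] - 2000) ** 2 + (p[1] - 4) ** 2,
                          [(100.0, 5000.0), (2.0, 10.0)])
```

In a real run, K would additionally be rounded to an integer before calling VMD.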
Traditionally, data processing and model training are carried out in an offline experimental stage, and the diagnosis model cannot be modified once training is finished. Rebuilding a motor fault diagnosis model takes considerable training time, especially for deep learning models, which greatly limits their application.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a motor fault diagnosis method based on feature-node incremental width learning. The IBL structure is simple and can train and retrain the network efficiently; it saves training time and improves the accuracy and stability of the fault diagnosis system.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a motor fault diagnosis method based on feature node incremental width learning comprises the following steps:
S1: data acquisition and data processing; collecting any two of the stator winding A, B or C current signals plus one acoustic signal, recorded as x1, x2 and x3, and filtering the two collected current signals and the acoustic signal; the filtered data are divided into two groups: the current signal data undergo time-domain statistical feature extraction, while the acoustic signal data undergo both time-domain statistical feature extraction and particle swarm optimization-variational mode decomposition; finally, the processed data are divided into three independent data sets: a training data set, a validation data set and a test data set, denoted x_k-Proc-Train, x_k-Proc-Vali and x_k-Proc-Test respectively;
S2: model training, i.e. performing width learning on the processed x_k-Proc-Train to train a system model, the width-learning training process being:
the processed training data set x_k-Proc-Train is used to train the width-learning network, whose output is the diagnosis accuracy; when the output accuracy is greater than or equal to the set target accuracy, the trained model is obtained; when the output accuracy is below the set target accuracy, the system enters incremental learning;
S3: incremental learning of feature nodes, i.e. increasing the number of feature nodes and performing incremental learning on the processed x_k-Proc-Vali, the feature-node incremental learning process being:
the processed validation data set x_k-Proc-Vali is used to train the feature-node incremental width-learning network, which outputs the fault diagnosis accuracy; when the output accuracy is not within ±M% of the set target accuracy, the system continues incremental learning; when the output accuracy is within ±M% of the set target accuracy, the feature-node incremental training model is obtained; in the present invention, M = 2.5.
S4: the NMF method is used to simplify the model trained in step S3, yielding a more stable model; the test data set x_k-Proc-Test is then evaluated to obtain an output matrix, which is compared with the fault labels to give the motor fault diagnosis accuracy.
Preferably, the data acquisition manner in step S1 is: an oscilloscope is adopted to collect current signals, and a microphone is adopted to collect sound signals.
Preferably, in step S1, the acquired data is clip-filtered and then divided into three independent data sets.
Preferably, the data processing for the acoustic signal in step S1 is as follows: signal features are extracted by particle swarm optimization-variational mode decomposition (PSO-VMD) and time-domain statistical features (TDSF);
after PSO-VMD, because the dimension of each intrinsic mode function is unchanged by the decomposition, dimensionality reduction is required; sample entropy (SE) is therefore applied to characterize the intrinsic mode functions, i.e. a representative feature of each intrinsic mode function is computed by sample entropy; the feature results are stored as x_k-SE-Train, x_k-SE-Vali and x_k-SE-Test; to ensure that all features contribute, each feature of x_k-SE-Train, x_k-SE-Vali and x_k-SE-Test is normalized to [0, 1].
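For illustration, the sample-entropy statistic described above could be computed as follows; m = 2 and r = 0.2 times the standard deviation are common defaults, not values stated in the patent.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal; r is taken as
    r_frac times the standard deviation (an illustrative default,
    since the patent does not state its m and r choices)."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    n = len(x)

    def count_pairs(mm):
        # embed the signal into overlapping vectors of length mm
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between every pair of template vectors
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # count distinct pairs within tolerance r, excluding self-matches
        return (np.sum(d <= r) - len(emb)) / 2

    b, a = count_pairs(m), count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
noise = rng.standard_normal(300)            # irregular -> higher SampEn
regular = np.sin(np.linspace(0, 30, 300))   # regular   -> lower SampEn
```

A regular mode (a clean sinusoid) yields a lower entropy than a noisy one, which is what makes SampEn a useful one-number summary of each intrinsic mode function.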
Preferably, step S1 further includes: adding 10 time-domain statistical features to each of the two collected current signals and normalizing them to [0, 1]; the normalized data are then combined with the acoustic features obtained through sample-entropy processing, yielding the processed training, validation and test data sets, named x_k-Proc-Train, x_k-Proc-Vali and x_k-Proc-Test respectively.
Preferably, the 10 features added for the current and sound signals are: mean, standard deviation, root mean square, peak value, skewness, kurtosis, crest factor, clearance factor, shape factor and impulse factor.
Preferably, the process of the width learning training specifically comprises:
the processed training data set x_k-Proc-Train is used to train the learning network; let X = x_k-Proc-Train, i.e. X is the input feature set, with N samples of M dimensions each;
for n feature mappings, the mapped features Z_i are given by formula (1):
Z_i = φ(X W_ei + β_ei), i = 1, …, n (1)
where W_ei is the random weight matrix of the i-th feature mapping, β_ei is its random bias, φ is the mapping function, and Z^n ≡ [Z_1, …, Z_n] denotes the set of all feature-node mappings;
for the enhancement nodes, H_m denotes the enhanced features of the m-th group of enhancement nodes:
H_m ≡ ξ(Z^n W_hm + β_hm) (2)
where W_hm is the random weight matrix of the m-th group of enhancement nodes, β_hm is its random bias, ξ is the enhancement mapping function, and H^m ≡ [H_1, …, H_m] denotes the set of all enhancement-node mappings; all enhancement connection weights are denoted W^m ≡ [W_h1, …, W_hm];
the output matrix Y is therefore given by:
Y = [Z^n | H^m] W^m (3)
where Y ∈ R^{N×C} is the output matrix, C being the number of fault classes;
from equation (3), W^m = [Z^n | H^m]^+ Y can be calculated, where ^+ denotes the pseudo-inverse.
Preferably, in the model training process of step S2, when the output accuracy is below the set target accuracy, the number of feature nodes is increased until the model output accuracy is greater than or equal to the set value, giving the trained model;
feature nodes are added during learning; let the composite of the initial mapped features and enhancement nodes be A^m = [Z^n | H^m], and let A_x denote the columns contributed by the added feature nodes, whose new mapped features are Z_{n+1} = φ(X W_{e,n+1} + β_{e,n+1}), where W_{e,n+1} is the connection weight of the added feature nodes and β_{e,n+1} the corresponding bias; the feature matrix after adding the feature nodes is:
A^m_{n+1} = [A^m | A_x] (4)
the pseudo-inverse of the new matrix can then be obtained as:
(A^m_{n+1})^+ = [(A^m)^+ − D B^T ; B^T] (5)
where the transition matrix B^T = C^+ if C ≠ 0 and B^T = (1 + D^T D)^{−1} D^T (A^m)^+ if C = 0, the intermediate matrix D = (A^m)^+ A_x, and C = A_x − A^m D;
the new weights are:
W^m_{n+1} = [W^m − D B^T Y ; B^T Y] (6)
the processed validation data set x_k-Proc-Vali is used as the input set X, and the feature-node incremental width-learning model is obtained from the input X and the new weights.
Preferably, before the test data set is evaluated in step S4, the method further includes NMF structure simplification of the model obtained in step S3; let the weight matrix before simplification be W^m ∈ R^{m×n};
since the input data set is normalized, the weight matrix is non-negative; assume a non-negative matrix I ∈ R^{m×r} and another non-negative matrix W_r ∈ R^{r×n}; then:
W^m ≈ I W_r (7)
where W^m is the original matrix, the right matrix W_r is the coefficient matrix, and the left matrix I is the basis matrix;
the new weight matrix is W_r ≈ I^+ W^m, and the model obtained in step S3 can be simplified using this new weight matrix.
Compared with the prior art, the invention has the following beneficial effects: the invention provides an incremental width learning method based on feature nodes, which can retrain a model by increasing the number of feature nodes and improve system accuracy. Because feature-node width learning trains quickly, online training becomes feasible, which greatly broadens its field of application. In addition, to improve precision, the invention uses signal-processing methods to extract useful fault features.
Drawings
FIG. 1 is a schematic structural diagram of the present invention.
FIG. 2 is a schematic diagram of an incremental width learning network of feature nodes.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The technical solution of the present invention is further described below with reference to the accompanying drawings and examples. According to the technical scheme provided by the invention, as shown in figure 1, the method comprises four steps: (a) data acquisition and data processing, (b) width learning, (c) feature node incremental width learning, and (d) NMF structure simplification.
A motor fault diagnosis method based on feature node incremental width learning comprises the following steps:
the first step is as follows: data acquisition and data processing for three-phase motor
Digital equipment is used to acquire the signals: the stator winding A current signal, the stator winding B current signal and the acoustic signal, denoted x1, x2 and x3 respectively. A digital clipping filter is employed to reduce interference. Each signal (take the acoustic signal x3 as an example) is divided into three separate data sets: a training data set, a validation data set and a test data set.
The original signal is decomposed by PSO-VMD into training data, validation data and test data, labelled x_k-PSO-VMD-Train, x_k-PSO-VMD-Vali and x_k-PSO-VMD-Test. Considering the irrelevant and redundant information in the extracted features, a sample entropy (SE) statistical algorithm is applied to x_k-PSO-VMD-Train, x_k-PSO-VMD-Vali and x_k-PSO-VMD-Test for feature extraction; the results are stored as x_k-SE-Train, x_k-SE-Vali and x_k-SE-Test. In addition to these signal features, 10 time-domain statistical features (TDSF) are added. To ensure that all features contribute uniformly, each feature is normalized to [0, 1]. Finally, x_k-Proc-Train, x_k-Proc-Vali and x_k-Proc-Test denote the processed training, validation and test data sets.
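As a sketch of the TDSF and normalization stage (the patent names the 10 features but not their formulas, so the standard rotating-machinery definitions are assumed here):

```python
import numpy as np

def time_domain_features(x):
    """The 10 time-domain statistical features of step one, using the
    standard rotating-machinery definitions (an assumption; the patent
    only lists the feature names)."""
    x = np.asarray(x, float)
    mean = x.mean()
    std = x.std()
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    skew = np.mean((x - mean) ** 3) / std ** 3
    kurt = np.mean((x - mean) ** 4) / std ** 4
    mean_abs = np.mean(np.abs(x))
    crest = peak / rms                                   # crest factor
    clearance = peak / np.mean(np.sqrt(np.abs(x))) ** 2  # clearance factor
    shape = rms / mean_abs                               # shape factor
    impulse = peak / mean_abs                            # impulse factor
    return np.array([mean, std, rms, peak, skew, kurt,
                     crest, clearance, shape, impulse])

def minmax_normalize(F):
    """Column-wise [0, 1] min-max normalization of a feature matrix;
    constant columns are left at zero."""
    F = np.asarray(F, float)
    lo, hi = F.min(axis=0), F.max(axis=0)
    return (F - lo) / np.where(hi > lo, hi - lo, 1.0)

# One feature row per signal segment, then normalize across segments
segs = [np.sin(np.linspace(0, 10, 1000)) * s for s in (1.0, 2.0, 3.0)]
F = np.vstack([time_domain_features(s) for s in segs])
Fn = minmax_normalize(F)
```

In the method, one such feature row would be computed per segment of each current and acoustic signal before the three data sets are assembled.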
The second step is that: width learning training
In the width-learning module, a width-learning model is first trained on the data set x_k-Proc-Train. The trained BL model then outputs its training accuracy. If the training accuracy exceeds a set target percentage (TP), the model is complete; otherwise width learning proceeds incrementally by adding feature nodes.
Width learning is based on the traditional random vector functional-link neural network. The processed training data set x_k-Proc-Train is used to train the learning network; let X = x_k-Proc-Train, i.e. X is the input feature set, with N samples of M dimensions each.
For n feature mappings, the mapped features Z_i are given by formula (1):
Z_i = φ(X W_ei + β_ei), i = 1, …, n (1)
where W_ei is the random weight matrix of the i-th feature mapping, β_ei is its random bias, φ is the mapping function, and Z^n ≡ [Z_1, …, Z_n] denotes the set of all feature-node mappings.
For the enhancement nodes, H_m denotes the enhanced features of the m-th group of enhancement nodes:
H_m ≡ ξ(Z^n W_hm + β_hm) (2)
where W_hm is the random weight matrix of the m-th group of enhancement nodes, β_hm is its random bias, ξ is the enhancement mapping function, and H^m ≡ [H_1, …, H_m] denotes the set of all enhancement-node mappings; all enhancement connection weights are denoted W^m ≡ [W_h1, …, W_hm].
The output matrix Y is therefore given by:
Y = [Z^n | H^m] W^m (3)
where Y ∈ R^{N×C} is the output matrix, C being the number of fault classes.
From equation (3), W^m = [Z^n | H^m]^+ Y can easily be calculated, where ^+ denotes the pseudo-inverse.
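The training step of Eqs. (1)-(3) can be sketched in a dependency-free way; the group counts, tanh activations, weight scales and the small ridge term stabilizing the pseudo-inverse are illustrative assumptions, since the patent fixes none of them.

```python
import numpy as np

def train_broad_learning(X, Y, n_feat_groups=5, feat_dim=10,
                         n_enh=50, reg=1e-3, seed=0):
    """Minimal width-learning trainer per Eqs. (1)-(3)."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    # Eq. (1): n groups of random feature mappings Z_i = phi(X W_ei + b_ei)
    We = rng.standard_normal((n_feat_groups, M, feat_dim)) * 0.5
    be = rng.standard_normal((n_feat_groups, feat_dim)) * 0.5
    Z = np.hstack([np.tanh(X @ We[i] + be[i]) for i in range(n_feat_groups)])
    # Eq. (2): enhancement nodes H = xi(Z W_h + b_h)
    Wh = rng.standard_normal((Z.shape[1], n_enh)) * 0.1
    bh = rng.standard_normal(n_enh) * 0.1
    H = np.tanh(Z @ Wh + bh)
    A = np.hstack([Z, H])                       # A = [Z^n | H^m]
    # Eq. (3): W = A^+ Y via a ridge-regularized normal-equation solve
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return W, A

# Toy two-class demonstration with one-hot fault labels
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[labels]
W, A = train_broad_learning(X, Y)
acc = np.mean((A @ W).argmax(axis=1) == labels)
```

The single linear solve is what makes retraining cheap compared with backpropagation-based deep models.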
The third step: incremental width learning with feature node addition
Under certain conditions, additional nodes are required to improve system accuracy, and feature nodes are added during learning. Assume the initial composite of the mapped features and enhancement nodes is A^m = [Z^n | H^m], and let A_x denote the columns contributed by the added feature nodes, whose new mapped features are Z_{n+1} = φ(X W_{e,n+1} + β_{e,n+1}), with connection weight W_{e,n+1} and bias β_{e,n+1}. The feature matrix after adding the feature nodes is:
A^m_{n+1} = [A^m | A_x] (4)
The pseudo-inverse of the new matrix can be obtained as:
(A^m_{n+1})^+ = [(A^m)^+ − D B^T ; B^T] (5)
where the transition matrix B^T = C^+ if C ≠ 0 and B^T = (1 + D^T D)^{−1} D^T (A^m)^+ if C = 0, the intermediate matrix D = (A^m)^+ A_x, and C = A_x − A^m D.
The new weights are:
W^m_{n+1} = [W^m − D B^T Y ; B^T Y] (6)
The processed validation data set x_k-Proc-Vali is used as the input set X, and the feature-node incremental width-learning model is obtained from the input X and the new weights. The feature-node incremental algorithm does not need to compute the pseudo-inverse of the whole A^m_{n+1}; only the terms for the added feature nodes are computed, which speeds up retraining of the network.
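The update of Eqs. (4)-(6) can be sketched numerically as follows; the 1e-10 threshold deciding C ≠ 0 is an implementation assumption.

```python
import numpy as np

def add_feature_nodes(A, A_pinv, W, Y, A_x):
    """Incremental update per Eqs. (4)-(6): append the columns A_x
    produced by new feature nodes to A = [Z^n | H^m] and update the
    pseudo-inverse and output weights without a full recomputation."""
    D = A_pinv @ A_x                 # intermediate matrix D = (A^m)^+ A_x
    C = A_x - A @ D                  # residual of the new columns
    if np.linalg.norm(C) > 1e-10:    # C != 0: transition matrix B^T = C^+
        Bt = np.linalg.pinv(C)
    else:                            # C == 0 branch of Eq. (5)
        Bt = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T @ A_pinv)
    A_new = np.hstack([A, A_x])
    A_pinv_new = np.vstack([A_pinv - D @ Bt, Bt])    # Eq. (5)
    W_new = np.vstack([W - D @ (Bt @ Y), Bt @ Y])    # Eq. (6)
    return A_new, A_pinv_new, W_new

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))    # existing node outputs
Y = rng.standard_normal((50, 3))     # targets
A_pinv = np.linalg.pinv(A)
W = A_pinv @ Y
A2, A2_pinv, W2 = add_feature_nodes(A, A_pinv, W, Y,
                                    rng.standard_normal((50, 4)))
# the incremental weights match a from-scratch pseudo-inverse solve
err = np.abs(W2 - np.linalg.pinv(A2) @ Y).max()
```

When C has full column rank, the block update reproduces the exact pseudo-inverse solution, so incremental retraining loses no accuracy relative to retraining from scratch.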
The fourth step: NMF structure simplification reduces error rate of motor fault diagnosis system
After adding feature nodes during incremental learning, redundant nodes or data may exist because the input data are insufficient or over-initialized. Generally, such a structure can be simplified by low-rank approximation methods. The invention selects non-negative matrix factorization (NMF) to simplify the structure of the feature-node incremental width-learning model.
Let the weight matrix before simplification be W^m ∈ R^{m×n}. Since the input data set is normalized, the weight matrix is non-negative. Assume a non-negative matrix I ∈ R^{m×r} and another non-negative matrix W_r ∈ R^{r×n}; then:
W^m ≈ I W_r (7)
where W^m is decomposed into two smaller matrices: m is the dimension of the enhanced features, n is the number of samples, and r is the reduced rank. W^m is the original matrix, the right matrix W_r is the coefficient matrix, and the left matrix I is the basis matrix. Each column of the original matrix is a weighted sum of the columns of the left matrix, with weights given by the corresponding column of the right matrix. Generally r is chosen smaller than m, achieving dimensionality reduction of the original matrix; the coefficient matrix W_r then replaces the original matrix as the reduced representation of the data features:
W_r ≈ I^+ W^m
The model obtained in the third step can thus be simplified using this new weight matrix.
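A sketch of the factorization in Eq. (7) using the classical Lee-Seung multiplicative updates; the iteration count and random initialization are illustrative, and sklearn.decomposition.NMF would serve equally well.

```python
import numpy as np

def nmf(W, r, n_iter=1000, seed=0):
    """Non-negative factorization W ≈ I @ Wr of Eq. (7) via
    multiplicative updates (Lee-Seung)."""
    rng = np.random.default_rng(seed)
    m, n = W.shape
    I = rng.random((m, r)) + 1e-3         # basis matrix, m x r
    Wr = rng.random((r, n)) + 1e-3        # coefficient matrix, r x n
    eps = 1e-12                           # guards against division by zero
    for _ in range(n_iter):
        Wr *= (I.T @ W) / (I.T @ I @ Wr + eps)
        I *= (W @ Wr.T) / (I @ Wr @ Wr.T + eps)
    return I, Wr

# Simplify a non-negative weight matrix that is exactly rank 3
rng = np.random.default_rng(1)
Wm = rng.random((20, 3)) @ rng.random((3, 30))
I, Wr = nmf(Wm, r=3)
rel_err = np.linalg.norm(Wm - I @ Wr) / np.linalg.norm(Wm)
# the "new weight matrix" of the text: Wr ≈ I^+ Wm
Wr_new = np.linalg.pinv(I) @ Wm
```

Choosing r below the number of added nodes is what removes the redundancy introduced during incremental learning.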
The invention provides a novel motor fault diagnosis method. The method combines feature extraction, width learning, feature-node incremental width learning and NMF-simplified IBL (NMF-IBL) to diagnose motor faults, improving test precision, reducing average test error and shortening training and retraining time. First, raw sample data are extracted from the winding A and B currents and the acoustic signal, then processed through clipping filtering, PSO-VMD, SampEn, TDSF and normalization. Second, the processed data are fed into a width-learning network model and the network is trained; if the test precision is not satisfactory, the feature-node incremental width-learning retraining model is adopted. Finally, the NMF method is used to simplify the network structure. Experimental results show that the motor fault diagnosis method based on feature-node incremental width learning and non-negative matrix factorization (NMF) is effective in improving diagnosis precision and training speed. The innovations of the invention are as follows:
1. A new method of diagnosing stator and rotor faults in a three-phase induction motor is presented.
2. A feature extraction technique combining PSO-VMD, SampEn and TDSF is applied, improving diagnosis accuracy.
3. A retraining method based on feature-node incremental width learning is proposed, improving test precision and training speed.
4. Non-negative matrix factorization (NMF) simplifies the IBL structure and reduces the system's average test error.
To sum up: the invention combines feature extraction (particle swarm optimization-variational mode decomposition and time-domain statistical features), feature-node incremental width learning and non-negative matrix factorization into an intelligent diagnosis method for three-phase motors. Experimental results show that the method outperforms other algorithms in diagnosing three-phase motor faults. In addition, the IBL model simplified by non-negative matrix factorization (NMF) has a smaller error, and the system is more stable.

Claims (6)

1. A motor fault diagnosis method based on feature node incremental width learning is characterized by comprising the following steps:
S1: data acquisition and data processing; collecting any two of the stator winding A, B or C current signals plus one acoustic signal, recorded as x1, x2 and x3, and filtering the two collected current signals and the acoustic signal; the filtered data are divided into two groups: the current signal data undergo time-domain statistical feature extraction, while the acoustic signal data undergo both time-domain statistical feature extraction and particle swarm optimization-variational mode decomposition (PSO-VMD); finally, the processed data are divided into three independent data sets: a training data set, a validation data set and a test data set, denoted x_k-Proc-Train, x_k-Proc-Vali and x_k-Proc-Test respectively;
S2: model training, i.e. performing width learning on the processed x_k-Proc-Train to train a system model, the width-learning training process being:
the processed training data set x_k-Proc-Train is used to train the width-learning network, whose output is the diagnosis accuracy; when the output accuracy is greater than or equal to the set target accuracy, the trained model is obtained; when the output accuracy is below the set target accuracy, the system enters incremental learning;
S3: incremental learning of feature nodes, i.e. increasing the number of feature nodes and performing incremental learning on the processed x_k-Proc-Vali, the feature-node incremental learning process being:
the processed validation data set x_k-Proc-Vali is used to train the feature-node incremental width-learning network, which outputs the fault diagnosis accuracy; when the output accuracy is not within [−2.5%, 2.5%] of the set target accuracy, the system continues incremental learning; when the output accuracy is within [−2.5%, 2.5%] of the set target accuracy, the feature-node incremental training model is obtained;
S4: simplifying the model trained in step S3 by non-negative matrix factorization (NMF) to obtain a more stable model, then evaluating the test data set x_k-Proc-Test to obtain an output matrix, which is compared with the fault labels to give the motor fault diagnosis accuracy;
the data processing for the acoustic signal in step S1 is as follows: signal features are extracted by particle swarm optimization-variational mode decomposition (PSO-VMD) and time-domain statistical features (TDSF);
the original signal is decomposed by PSO-VMD into training data, validation data and test data, labelled x_k-PSO-VMD-Train, x_k-PSO-VMD-Vali and x_k-PSO-VMD-Test;
after PSO-VMD, because the dimension of each intrinsic mode function is unchanged by the decomposition, dimensionality reduction is required; sample entropy (SE) is applied to characterize the intrinsic mode functions, i.e. a representative feature of each intrinsic mode function is computed by sample entropy; the feature results are stored as x_k-SE-Train, x_k-SE-Vali and x_k-SE-Test; to ensure that all features contribute, each feature of x_k-SE-Train, x_k-SE-Vali and x_k-SE-Test is normalized to [0, 1];
step S1 further includes: adding 10 time-domain statistical features to each of the two collected current signals and normalizing them to [0, 1]; the normalized data are then combined with the acoustic features obtained through sample-entropy processing, yielding the processed training, validation and test data sets, named x_k-Proc-Train, x_k-Proc-Vali and x_k-Proc-Test respectively;
the 10 features added for the current and sound signals are: mean, standard deviation, root mean square, peak value, skewness, kurtosis, crest factor, clearance factor, shape factor and impulse factor.
2. The method according to claim 1, wherein the data acquisition manner in step S1 is as follows: an oscilloscope is adopted to collect current signals, and a microphone is adopted to collect sound signals.
3. The method according to claim 1, wherein in step S1, the collected data is clip-filtered and then divided into three independent data sets.
4. The method according to claim 1, wherein the width learning training process is specifically:
the processed training data set x_k-Proc-Train is used to train the learning network; let X = x_k-Proc-Train, i.e. X is the input feature set, with N samples of M dimensions each;
for n feature mappings, the mapped features Z_i are given by formula (1):
Z_i = φ(X W_ei + β_ei), i = 1, …, n (1)
where W_ei is the random weight matrix of the i-th feature mapping, β_ei is its random bias, φ is the mapping function, and Z^n ≡ [Z_1, …, Z_n] denotes the set of all feature-node mappings;
for the enhancement nodes, H_m denotes the enhanced features of the m-th group of enhancement nodes:
H_m ≡ ξ(Z^n W_hm + β_hm) (2)
where W_hm is the random weight matrix of the m-th group of enhancement nodes, β_hm is its random bias, ξ is the enhancement mapping function, and H^m ≡ [H_1, …, H_m] denotes the set of all enhancement-node mappings; all enhancement connection weights are denoted W^m ≡ [W_h1, …, W_hm];
the output matrix Y is therefore given by:
Y = [Z^n | H^m] W^m (3)
where Y ∈ R^{N×C} is the output matrix, C being the number of fault classes;
from equation (3), W^m = [Z^n | H^m]^+ Y can be calculated, where ^+ denotes the pseudo-inverse.
5. The method of claim 4, wherein, in the model training process of step S2, when the output accuracy is less than the set target accuracy, the training model is obtained by increasing the number of feature nodes until the accuracy of the model output is greater than or equal to the set value;
feature nodes are added in the learning process as follows: let the composite connection of the initial input feature vector and the enhancement nodes be $A^m = [Z^n \,|\, H^m]$, and write $A_a = [Z_{n+1} \,|\, \xi(Z_{n+1} W_{ex} + \beta_{ex})]$ for the columns contributed by the added feature nodes, where $W_{ex}$ is the connection weight after the feature nodes are added and $\beta_{ex}$ is the bias after the feature nodes are added; the feature matrix after the additional feature nodes is:

$$A^m_{n+1} = [A^m \,|\, A_a] \tag{4}$$

the pseudo-inverse of the new matrix can then be obtained as:

$$(A^m_{n+1})^+ = \begin{bmatrix} (A^m)^+ - D B^T \\ B^T \end{bmatrix} \tag{5}$$

where the transition matrix $D = (A^m)^+ A_a$, and the intermediate matrix

$$B^T = \begin{cases} C^+, & C \neq 0 \\ (1 + D^T D)^{-1} D^T (A^m)^+, & C = 0 \end{cases}$$

with $C = A_a - A^m D$;
the new weights are:

$$W^m_{n+1} = \begin{bmatrix} W^m - D B^T Y \\ B^T Y \end{bmatrix} \tag{6}$$

the processed verification data set $x_{k\text{-}Proc\text{-}Train}$ is then taken as the input set $X$, and the feature node incremental width learning model is obtained based on the input $X$ and the new weights.
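The feature-node incremental weight update described in claim 5 can be sketched in numpy as follows. This is an illustrative version, not the patented code: the function name `add_feature_nodes` and the numerical tolerance used to test whether the residual $C$ vanishes are assumptions. The sketch verifies that the block pseudo-inverse update matches full retraining.

```python
import numpy as np

def add_feature_nodes(A, A_pinv, W, Y, A_a):
    """Append feature-node columns A_a to A and update the pseudo-inverse
    and output weights incrementally, without refactorizing from scratch."""
    D = A_pinv @ A_a                      # transition matrix D = A^+ A_a
    C = A_a - A @ D                       # residual C = A_a - A D
    if np.linalg.norm(C) > 1e-10:
        BT = np.linalg.pinv(C)            # B^T = C^+ when C != 0
    else:                                 # B^T = (1 + D^T D)^{-1} D^T A^+ when C = 0
        BT = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T @ A_pinv)
    A_pinv_new = np.vstack([A_pinv - D @ BT, BT])   # block pseudo-inverse update
    W_new = np.vstack([W - D @ (BT @ Y), BT @ Y])   # incremental weight update
    return np.hstack([A, A_a]), A_pinv_new, W_new

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
Y = rng.standard_normal((50, 3))
A_pinv = np.linalg.pinv(A)
W = A_pinv @ Y                            # initial output weights
A_a = rng.standard_normal((50, 4))        # columns for the added feature nodes
A2, Ap2, W2 = add_feature_nodes(A, A_pinv, W, Y, A_a)
```

For full-column-rank matrices the updated `W2` equals the weights one would get by recomputing the pseudo-inverse of the enlarged matrix, which is the point of the incremental scheme: only the small matrices `D`, `C`, and `BT` are formed.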
6. The method of claim 5, wherein step S4 further comprises, before the test data set is tested, performing non-negative matrix factorization (NMF) structure reduction on the model obtained in step S4; let the weight matrix before reduction be $W^m$; since the input data set is normalized, the weight matrix is a non-negative matrix; assume a non-negative matrix $I$ and another non-negative matrix $W^r$ such that:

$$W^m \approx I\, W^r \tag{7}$$

where $W^m$ is the original matrix, the right matrix $W^r$ is the coefficient matrix, and the left matrix $I$ is the basis matrix; the new weight matrix is $W^r \approx I^+ W^m$, and the model obtained in step S4 can be simplified by using this new weight matrix.
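The NMF reduction $W^m \approx I\,W^r$ of equation (7) can be sketched with classic multiplicative updates. This is a hedged illustration: the Lee–Seung update rule, the iteration count, and the name `nmf_reduce` are assumptions, not details taken from the patent.

```python
import numpy as np

def nmf_reduce(Wm, r, n_iter=500, seed=0):
    """Factor a non-negative weight matrix W^m (p x q) into a basis matrix I
    (p x r) and coefficient matrix Wr (r x q) via multiplicative updates,
    then return the reduced weights Wr ~ I^+ W^m."""
    rng = np.random.default_rng(seed)
    p, q = Wm.shape
    I = rng.random((p, r)) + 1e-3         # positive init keeps factors non-negative
    Wr = rng.random((r, q)) + 1e-3
    for _ in range(n_iter):
        Wr *= (I.T @ Wm) / (I.T @ I @ Wr + 1e-9)   # multiplicative update for Wr
        I *= (Wm @ Wr.T) / (I @ Wr @ Wr.T + 1e-9)  # multiplicative update for I
    Wr_new = np.linalg.pinv(I) @ Wm       # new weight matrix Wr ~ I^+ W^m
    return I, Wr_new

# demo on a synthetic non-negative rank-4 weight matrix
rng = np.random.default_rng(2)
Wm_demo = rng.random((20, 4)) @ rng.random((4, 6))
I_demo, Wr_demo = nmf_reduce(Wm_demo, r=4)
```

The model is then simplified by replacing the $20 \times 6$ weight matrix with the $4 \times 6$ coefficient matrix, at the cost of a small reconstruction error.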
CN201910401213.8A 2019-05-15 2019-05-15 Motor fault diagnosis method based on feature node incremental width learning Active CN110146812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910401213.8A CN110146812B (en) 2019-05-15 2019-05-15 Motor fault diagnosis method based on feature node incremental width learning


Publications (2)

Publication Number Publication Date
CN110146812A CN110146812A (en) 2019-08-20
CN110146812B true CN110146812B (en) 2021-07-13

Family

ID=67595255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910401213.8A Active CN110146812B (en) 2019-05-15 2019-05-15 Motor fault diagnosis method based on feature node incremental width learning

Country Status (1)

Country Link
CN (1) CN110146812B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110673578B (en) * 2019-09-29 2022-07-08 华北电力大学(保定) Fault degradation degree determination method and device, computer equipment and storage medium
CN110749793A (en) * 2019-10-31 2020-02-04 杭州中恒云能源互联网技术有限公司 Dry-type transformer health management method and system based on width learning and storage medium
CN111899905B (en) * 2020-08-05 2022-11-01 哈尔滨工程大学 Fault diagnosis method and system based on nuclear power device
CN112215281A (en) * 2020-10-12 2021-01-12 浙江大学 Fan blade icing fault detection method
CN112308159B (en) * 2020-11-05 2023-04-07 湖南科技大学 Image identification and classification method based on prediction increment width learning
CN112508058B (en) * 2020-11-17 2023-11-14 安徽继远软件有限公司 Transformer fault diagnosis method and device based on audio feature analysis
CN112861328B (en) * 2021-01-22 2022-08-30 东北电力大学 Generator damping evaluation device and method based on random response signals
CN113419519B (en) * 2021-07-14 2022-05-13 北京航空航天大学 Electromechanical product system or equipment real-time fault diagnosis method based on width learning
CN113688786B (en) * 2021-09-10 2022-07-12 广东电网有限责任公司广州供电局 PSO (particle swarm optimization) width learning-based voltage sag multiple disturbance source identification method
CN114123896B (en) * 2021-11-30 2022-05-24 江南大学 Permanent magnet synchronous motor control method and system based on incremental width learning system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644231A (en) * 2017-09-19 2018-01-30 广东工业大学 A kind of generator amature method for diagnosing faults and device
CN107944274A (en) * 2017-12-18 2018-04-20 华中科技大学 A kind of Android platform malicious application off-line checking method based on width study
CN108734301A (en) * 2017-06-29 2018-11-02 澳门大学 A kind of machine learning method and machine learning device
CN108960339A (en) * 2018-07-20 2018-12-07 吉林大学珠海学院 A kind of electric car induction conductivity method for diagnosing faults based on width study
EP3480714A1 (en) * 2017-11-03 2019-05-08 Tata Consultancy Services Limited Signal analysis systems and methods for features extraction and interpretation thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020038510A1 (en) * 2000-10-04 2002-04-04 Orbotech, Ltd Method for detecting line width defects in electrical circuit inspection
CN109522802B (en) * 2018-10-17 2022-05-24 浙江大学 Pump noise elimination method applying empirical mode decomposition and particle swarm optimization algorithm


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Efficient Fault Diagnostic Method for Three-Phase Induction Motors Based on Incremental Broad Learning and Non-Negative Matrix Factorization; Saibiao Jiang et al.; IEEE Access; 2019-02-14; vol. 7; pp. 17780-17790 *
Multimodal information fusion based on the broad learning method; Jia Chen et al.; CAAI Transactions on Intelligent Systems; 2019-01; vol. 14, no. 1; pp. 150-157 *


Similar Documents

Publication Publication Date Title
CN110146812B (en) Motor fault diagnosis method based on feature node incremental width learning
CN109740523B (en) Power transformer fault diagnosis method based on acoustic features and neural network
Razavi-Far et al. Information fusion and semi-supervised deep learning scheme for diagnosing gear faults in induction machine systems
CN104655425B (en) Bearing fault classification diagnosis method based on sparse representation and LDM (large margin distribution machine)
CN107066759B (en) Steam turbine rotor vibration fault diagnosis method and device
Wu et al. Induction machine fault detection using SOM-based RBF neural networks
AU2020214409A1 (en) Structural damage identification method based on ensemble empirical mode decomposition and convolution neural network
AlThobiani et al. An application to transient current signal based induction motor fault diagnosis of Fourier–Bessel expansion and simplified fuzzy ARTMAP
CN105841961A (en) Bearing fault diagnosis method based on Morlet wavelet transformation and convolutional neural network
CN110232435B (en) Self-adaptive deep confidence network rolling bearing fault diagnosis method
Jiang et al. A fault diagnostic method for induction motors based on feature incremental broad learning and singular value decomposition
CN111812507A (en) Motor fault diagnosis method based on graph convolution
CN112633098A (en) Fault diagnosis method and system for rotary machine and storage medium
CN106295023A (en) A kind of diagnostic method of asynchronous machine rotor combined failure
CN113158984A (en) Bearing fault diagnosis method based on complex Morlet wavelet and lightweight convolution network
Guedidi et al. Bearing faults classification based on variational mode decomposition and artificial neural network
CN114997749B (en) Intelligent scheduling method and system for power personnel
CN116773952A (en) Transformer voiceprint signal fault diagnosis method and system
CN111397884B (en) Blade fault diagnosis method for improving Mel cepstrum coefficient algorithm
CN114371002B (en) DAE-CNN-based planetary gear box fault diagnosis method
Du et al. Translation invariance-based deep learning for rotating machinery diagnosis
CN115901259A (en) Rolling bearing weak fault diagnosis method based on two-dimensional image and CNN
CN114964783A (en) Gearbox fault detection model based on VMD-SSA-LSSVM
CN114139607A (en) CRWGAN-div-based equipment fault sample enhancement method
CN112345251A (en) Mechanical intelligent fault diagnosis method based on signal resolution enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant