CN107957551A - Motor fault diagnosis method based on stacked denoising autoencoders using vibration and current signals - Google Patents
- Publication number
- CN107957551A CN107957551A CN201711321716.1A CN201711321716A CN107957551A CN 107957551 A CN107957551 A CN 107957551A CN 201711321716 A CN201711321716 A CN 201711321716A CN 107957551 A CN107957551 A CN 107957551A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R31/00—Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G01R31/34—Testing dynamo-electric machines
- G01R31/343—Testing dynamo-electric machines in operation
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
Abstract
The invention discloses a motor fault diagnosis method based on stacked denoising autoencoders using vibration and current signals. It is divided into five steps: in the first step, the time-domain vibration and current signals of the motor under different fault conditions are acquired, preprocessed, and used as network input; in the second step, the network parameters are determined; in the third step, the network is trained layer by layer, using the hidden layer of each autoencoder (Auto encoder, AE) as the input layer of the next AE, so as to obtain the final feature encoding used to train a Softmax network; in the fourth step, the whole network is fine-tuned and it is judged whether the expected accuracy is reached; if so, network training ends, and if not, the network parameters are adjusted and the third step is repeated; in the fifth step, network construction is complete. The method builds a multi-layer SDAE network, combines the frequency-domain vibration signal and the time-domain current signal as input, trains the SDAE network and the classifier layer by layer, and fine-tunes the whole network with supervision, thereby achieving accurate motor fault diagnosis.
Description
Technical field
The invention belongs to the field of motor fault diagnosis in industrial production, and in particular relates to a motor fault diagnosis method based on source signals (vibration and current signals) and stacked denoising autoencoders.
Background technology
Asynchronous motors are used ever more widely in modern production systems and are the main driving equipment of industrial production; once a fault occurs, it can cause enormous economic losses. An asynchronous motor is a general-purpose electrical machine composed of a stator, rotor, bearings, frame, fan and other parts, containing several complex subsystems. Motor faults therefore show great diversity, and so do the features they exhibit: the same symptom may arise from different causes, and the same fault may show different features. There is no one-to-one correspondence between the fault features and fault types of an asynchronous motor; the relation between them is strongly nonlinear. Effective diagnosis of motor faults is therefore of great practical significance for avoiding catastrophic failures and ensuring the normal operation of mechanical equipment.
Motor fault diagnosis belongs to the field of pattern recognition: features are usually first extracted from the motor vibration signal and then classified. Methods in use include BP neural networks, support vector machines (Support Vector Machine, SVM), radial basis function networks, and so on. In recent years, with the development of deep learning, deep networks have been widely applied in image recognition and speech recognition.
The stacked denoising autoencoder (Stacked Denoising Autoencoder, SDAE) is a deep network composed of multiple autoencoders that can adaptively extract signal features without supervision. Through the fine-tuning mechanism of the stacked network, it can also be trained with supervision to improve accuracy.
Summary of the invention
As the scale of industrial production grows, each piece of industrial equipment needs more monitoring points, the sampling frequency of each point becomes higher, and data acquisition times grow longer, so the volume of data obtained by monitoring systems keeps increasing and the field of machine health monitoring has entered the big-data era. Conventional methods for motor fault diagnosis use very small experimental sample sets; against the background of mechanical "big data", such small samples lose practical significance, so selecting a suitable fault diagnosis method and improving diagnostic accuracy become particularly important.
To address the difficulty of asynchronous motor fault diagnosis caused by factors such as complex motor structure, non-stationary vibration signals and mechanical big data, the present invention introduces deep learning theory and proposes a motor fault diagnosis method based on a stacked denoising autoencoder network. The method builds a multi-layer SDAE network, combines the frequency-domain vibration signal and the time-domain current signal as input, trains the SDAE network and the classifier layer by layer, and fine-tunes the whole network with supervision, thereby achieving accurate motor fault diagnosis.
The technical solution of the present invention is as follows:
The input sample of the SDAE network should cover as many features of the fault signal as possible. The vibration signal contains rich bearing information and the current signal contains rich rotor features, so this patent combines the frequency-domain vibration signal with the time-domain current signal as the network input, as shown in Fig. 1.
The training process of the SDAE motor fault network is divided into 5 steps:
The first step: acquire the time-domain vibration and current signals of the motor under different fault conditions, preprocess them, and use them as network input;
The second step: determine the network parameters (number of layers, number of nodes per layer, learning rate, number of iterations, etc.);
The third step: train layer by layer, using the hidden layer of each autoencoder (Auto encoder, AE) as the input layer of the next AE, so as to obtain the final feature encoding for training the Softmax network;
The fourth step: fine-tune the whole network and judge whether the expected accuracy is reached; if so, network training ends; if not, adjust the network parameters and repeat the third step;
The fifth step: network construction is complete.
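A minimal NumPy sketch of this layer-wise scheme (toy sizes and illustrative helper names, not the patent's reference implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train_dae(x, n_hidden, noise_p=0.1, lr=0.1, epochs=50):
    """Greedily train one denoising AE; returns (w1, b1) and the hidden codes."""
    n_in = x.shape[1]
    w1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
    w2 = rng.normal(scale=0.1, size=(n_hidden, n_in))
    b1, b2 = np.zeros(n_hidden), np.zeros(n_in)
    for _ in range(epochs):
        x_noisy = x * (rng.random(x.shape) > noise_p)   # random zero-masking noise
        h = sigmoid(x_noisy @ w1 + b1)                  # encode
        x_hat = sigmoid(h @ w2 + b2)                    # decode
        # backprop of 0.5*||x - x_hat||^2 through both sigmoid layers
        d_out = (x_hat - x) * x_hat * (1 - x_hat)
        d_hid = (d_out @ w2.T) * h * (1 - h)
        w2 -= lr * h.T @ d_out / len(x); b2 -= lr * d_out.mean(0)
        w1 -= lr * x_noisy.T @ d_hid / len(x); b1 -= lr * d_hid.mean(0)
    return (w1, b1), sigmoid(x @ w1 + b1)

# stacking: the hidden codes of AE k become the input of AE k+1
x = rng.random((64, 20))
codes, layers = x, []
for n_hidden in (16, 8):
    params, codes = train_dae(codes, n_hidden)
    layers.append(params)
print(codes.shape)  # (64, 8) -- final feature encoding fed to the Softmax layer
```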
Beneficial effects
SDAE was applied to four classes of samples: the motor vibration time-domain signal, the vibration frequency-domain signal, vibration time-domain + frequency-domain signals, and vibration frequency-domain + current time-domain signals. Repeated tests showed, as in Fig. 4, that with the vibration frequency-domain + current time-domain signal as input sample, a 4-layer SDAE network (network structure: 2000-100-100-100-7) achieved an accuracy clearly higher than the other three, reaching a maximum of 99.86%.
For comparison with traditional intelligent methods, this experiment used two methods, EMD+SVM and diagnostic features+SVM, to diagnose the motor faults, likewise taking 75% of all samples for training and the remaining 25% for testing; the results are shown in Table 1.
Table 1 Diagnostic results of different methods
Although the EMD+SVM and diagnostic-features+SVM methods can both realize motor fault diagnosis fairly well, with relatively high accuracies (90.15% and 93.65% respectively), SDAE can adaptively extract more accurate feature representations through its deep network without supervision, and fine-tune the whole network with supervision, thereby realizing intelligent and efficient motor fault diagnosis with a diagnostic accuracy of 99.86%.
To compare the feature-extraction ability of DAE and SDAE networks, DAE and SDAE (4 hidden layers) networks were each trained in the experiment on the vibration frequency-domain + current time-domain samples, and principal component analysis (Principal Component Analysis, PCA) was used to extract the two most important components of the 4th-layer features (principal components x and y) and visualize them, as shown in Fig. 5.
Fig. 5(a) is the scatter plot of the DAE network features and Fig. 5(b) that of the SDAE network features. It can be seen that the SDAE features are clearly separable, while the DAE features overlap and cannot be clearly distinguished.
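The PCA projection used for this visualization, keeping the two leading principal components of the deep features, can be sketched with synthetic features (NumPy SVD in place of any particular PCA library):

```python
import numpy as np

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 100))      # stand-in for 4th-layer SDAE features

centered = features - features.mean(axis=0)
# SVD of the centered data: rows of vt are the principal directions
_, _, vt = np.linalg.svd(centered, full_matrices=False)
xy = centered @ vt[:2].T                    # principal components x and y
print(xy.shape)  # (200, 2) -- one (x, y) point per sample for the scatter plot
```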
Brief description of the drawings
Fig. 1 is the splicing of vibration frequency-region signal and electric current time-domain signal;
Fig. 2 is the flow chart of Diagnosing Faults of Electrical;
Fig. 3 is noise reduction self-encoding encoder schematic diagram;
Fig. 4 is the diagnostic result of different depth network under different samples;
Fig. 5 shows the feature scatter plots under the two networks: (a) the feature scatter plot of DAE, (b) the feature scatter plot of SDAE;
Fig. 6 shows the supervised SDAE network used during network fine-tuning.
Embodiment
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The first step: data acquisition. The asynchronous motor of a power transmission fault diagnosis multi-functional test bench was taken as the research object; the bench consists of four parts: an asynchronous motor, a two-stage planetary gearbox, a fixed-shaft gearbox and a magnetic powder brake. Seven different fault states were simulated by replacing the motor; the seven states are listed in Table 2.
Table 2 Seven motor states
To ensure the diversity of the experimental data, 10 different operating conditions were simulated during acquisition, corresponding to 5 speeds (speed ramp-up/down, 3560 RPM, 3580 RPM, 3560 RPM, 3620 RPM) and 2 load states (loaded, unloaded). To account for the influence of sensor position, two acceleration sensors were mounted at the 12 o'clock and 9 o'clock positions at the front end of the motor, while a clamp-type current sensor acquired the current signal during motor operation. The sensor sampling frequency was set to 5 kHz. When selecting data, 200 samples were used for each operating condition, 100 each from the acceleration sensors at the 12 o'clock and 9 o'clock positions. Each fault therefore has 2000 samples in total, and each sample corresponds to a vibration signal of 2000 points; 2000 groups of current signals from the corresponding times were also selected. Altogether 14000 groups of vibration time-domain signals and 14000 groups of corresponding current time-domain signals were obtained. 75% of each operating condition of every fault was randomly selected for training and the remaining 25% for testing.
The second step: the acquired vibration time-domain signals of the different faults are analyzed in the frequency domain with the Fast Fourier Transform (FFT), and the frequency-domain signal (length 1000) is extracted; it is then spliced with the current time-domain signal (length 1000) in the manner of Fig. 1 to form the sample x (length 2000) used as network input.
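The sample construction described above might look as follows in NumPy (placeholder random arrays stand in for real measurements; the lengths follow the text):

```python
import numpy as np

rng = np.random.default_rng(2)
vibration = rng.normal(size=2000)   # one vibration time-domain sample (2000 points)
current = rng.normal(size=1000)     # matching current time-domain segment

# FFT of the vibration signal; keep the 1000-point one-sided magnitude spectrum
spectrum = np.abs(np.fft.fft(vibration))[:1000]

# splice frequency-domain vibration with time-domain current -> network input x
x = np.concatenate([spectrum, current])
print(x.shape)  # (2000,)
```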
The third step: before network training the samples need to be normalized, as in formula (1).
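The min-max normalization of formula (1) can be sketched as (NumPy assumed; `minmax_normalize` is an illustrative helper name):

```python
import numpy as np

def minmax_normalize(x):
    """Formula (1): x* = (x - min) / (max - min), mapping a sample into [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

sample = np.array([2.0, 4.0, 6.0, 10.0])
print(minmax_normalize(sample))  # [0.   0.25 0.5  1.  ]
```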
The denoising autoencoder is then built; a single autoencoder is shown in Fig. 3. Encoding propagates the sample x from the input layer to the hidden layer. To make the features learned by each hidden layer of the autoencoder (AE: Auto Encoder) more robust, noise is added to the training samples with a certain probability, i.e. the input data of each hidden layer is randomly set to zero. The noise-corrupted data is then mapped through the sigmoid activation function (formula (2)) to a k-dimensional vector h ∈ [0,1]^{k×1} (formula (3)).
Here x is the input sample; f(·) is the activation function; θ₁ = {w₁, b₁} are the network parameters, with w₁ the weights and b₁ the bias.
Decoding propagates the feature encoding from the hidden layer to the output layer, mapping it through the activation function to an m-dimensional vector x̂ that reconstructs the sample x, as in formula (4).
Here x̂ is the reconstruction of the sample x; f(·) is the activation function; θ₂ = {w₂, b₂} are the network parameters, with w₂ the weights and b₂ the bias.
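Formulas (2) to (4), masking corruption, sigmoid encoding and decoding, can be sketched with toy dimensions (random weights purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(t):                 # formula (2): f(t) = 1 / (1 + e^{-t})
    return 1.0 / (1.0 + np.exp(-t))

m, k = 6, 3                     # input and hidden dimensions
x = rng.random(m)               # normalized input sample
w1, b1 = rng.normal(size=(k, m)), np.zeros(k)
w2, b2 = rng.normal(size=(m, k)), np.zeros(m)

x_noisy = x * (rng.random(m) > 0.1)   # randomly zero inputs (denoising corruption)
h = sigmoid(w1 @ x_noisy + b1)        # formula (3): k-dim code, entries in (0, 1)
x_hat = sigmoid(w2 @ h + b2)          # formula (4): m-dim reconstruction of x
print(h.shape, x_hat.shape)  # (3,) (6,)
```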
The training objective of the AE network is to find a set of optimal parameters θ* = {w₁*, w₂*, b₁*, b₂*} such that the error between the output data and the input data is as small as possible, i.e. to minimize the loss function L(w₁, w₂, b₁, b₂), whose expression is given below.
Here the first term on the right-hand side represents the total deviation between the network input and output data; the second term is a regularization constraint used to prevent overfitting; x^(i) and x̂^(i) denote the input vector and reconstruction vector of the i-th sample; J(x^(i), x̂^(i)) denotes the mean squared error between them, with the expression given below.
The AE network minimizes the error function L(w₁, w₂, b₁, b₂) by error back-propagation and gradient descent, so that the AE can adaptively learn the features of the samples without supervision.
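The loss of formulas (5) and (6), the mean per-sample reconstruction error plus an L2 weight penalty, might be computed as follows (a sketch with toy arrays; `ae_loss` is an illustrative helper name):

```python
import numpy as np

def ae_loss(x, x_hat, weights, lam=1e-3):
    """Formula (5): mean of per-sample MSE terms (formula (6)) plus L2 weight penalty."""
    recon = 0.5 * np.sum((x - x_hat) ** 2, axis=1).mean()       # (1/n) sum J(x, x_hat)
    penalty = 0.5 * lam * sum(np.sum(w ** 2) for w in weights)  # (lambda/2) sum (W_ij)^2
    return recon + penalty

x = np.array([[1.0, 0.0], [0.0, 1.0]])
x_hat = np.array([[0.5, 0.0], [0.0, 0.5]])
w = [np.ones((2, 2))]
print(ae_loss(x, x_hat, w, lam=0.1))  # 0.325
```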
The fourth step: take the hidden-layer output of the first AE encoder as the input sample to build a second AE, repeat the third step, and build multiple AEs in the same way.
The fifth step: the hidden layers of the AE encoders trained without supervision in the fourth step are taken out and stacked in turn as in Fig. 6, a softmax classifier is added as the last layer, and supervised fine-tuning is performed. The Softmax classifier performs classification and identification on the feature vectors. Suppose the input sample in the training data is x with corresponding label y; then the probability that a sample is judged to belong to some class j is p(y = j | x). For a K-class classifier, the output is then a K-dimensional vector (whose elements sum to 1), as shown in formula (7).
Here θ₁, θ₂, …, θ_k are the model parameters, and the leading factor is a normalizing function that normalizes the probability distribution so that all probabilities sum to 1.
During training, gradient descent is used to find the optimal parameters that minimize the Softmax cost function J(θ), completing the network training. The cost function J(θ) is shown in formula (8).
Here 1{·} is an indicator function: when the expression in braces is true its value is 1, otherwise it is 0.
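The softmax output of formula (7) and the cost of formula (8) can be sketched as follows (toy parameters; helper names are illustrative):

```python
import numpy as np

def softmax_probs(theta, x):
    """Formula (7): class probabilities p(y = j | x) that sum to 1."""
    scores = theta @ x                  # theta_j^T x for each class j
    e = np.exp(scores - scores.max())   # shift for numerical stability
    return e / e.sum()

def cost(theta, xs, ys):
    """Formula (8): negative mean log-probability of the true labels."""
    return -np.mean([np.log(softmax_probs(theta, x)[y]) for x, y in zip(xs, ys)])

theta = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # 3 classes, 2 features
x = np.array([2.0, 0.0])
p = softmax_probs(theta, x)
print(p)  # largest probability for class 0, all entries summing to 1
```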
The sixth step: after several iterations, when the loss converges, network training is complete. The network performance is then evaluated on validation data; if the accuracy meets the requirement the network is output, otherwise the network parameters are changed and training continues.
Claims (2)
1. A motor fault diagnosis method based on stacked denoising autoencoders using vibration and current signals, characterized by five steps:
The first step, data acquisition: taking the asynchronous motor of a power transmission fault diagnosis multi-functional test bench as the object, the bench consisting of an asynchronous motor, a two-stage planetary gearbox, a fixed-shaft gearbox and a magnetic powder brake; different fault states are simulated by replacing the motor to ensure the diversity of the experimental data; 75% of each operating condition of every fault is randomly selected for training and the remaining 25% for testing;
The second step, frequency-domain analysis of the acquired vibration time-domain signals of the different faults with the Fast Fourier Transform (FFT), extracting the frequency-domain signal;
The third step, normalizing the samples before network training, as in formula (1):
$$x^{*} = \frac{x - \min}{\max - \min} \qquad (1)$$
The denoising autoencoder is then built; encoding propagates the sample x from the input layer to the hidden layer; to make the features learned by each hidden layer of the autoencoder (AE: Auto Encoder) more robust, noise is added to the training samples with a certain probability, i.e. the input data of each hidden layer is randomly set to zero; the noise-corrupted data is then mapped through the sigmoid activation function, see formula (2), to a k-dimensional vector h ∈ [0,1]^{k×1}, see formula (3):
$$f(\cdot) = \frac{1}{1 + e^{-t}} \qquad (2)$$
$$h = f_{\theta_1}(x) = f(w_1 \cdot x + b_1) \qquad (3)$$
where x is the input sample; f(·) is the activation function; θ₁ = {w₁, b₁} are the network parameters, with w₁ the weights and b₁ the bias;
Decoding propagates the feature encoding from the hidden layer to the output layer, mapping it through the activation function to an m-dimensional vector $\hat{x}$ that reconstructs the sample x, as in formula (4):
$$\hat{x} = f_{\theta_2}(h) = f(w_2 \cdot h + b_2) \qquad (4)$$
where $\hat{x}$ is the reconstruction of the sample x; f(·) is the activation function; θ₂ = {w₂, b₂} are the network parameters, with w₂ the weights and b₂ the bias;
The training objective of the AE network is to find a set of optimal parameters θ* = {w₁*, w₂*, b₁*, b₂*} such that the error between the output data and the input data is as small as possible, i.e. to minimize the loss function L(w₁, w₂, b₁, b₂), whose expression is as follows:
$$L(w_1, w_2, b_1, b_2) = \left[\frac{1}{n}\sum_{i=1}^{n} J\!\left(x^{(i)}, \hat{x}^{(i)}\right)\right] + \frac{\lambda}{2}\sum_{l=1}^{n_l - 1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \left(W_{ij}^{(l)}\right)^{2} \qquad (5)$$
where the first term on the right-hand side represents the total deviation between the network input and output data; the second term is a regularization constraint used to prevent overfitting; $x^{(i)}$ and $\hat{x}^{(i)}$ denote the input vector and reconstruction vector of the i-th sample; $J(x^{(i)}, \hat{x}^{(i)})$ denotes the mean squared error between them, with the following expression:
$$J\!\left(x^{(i)}, \hat{x}^{(i)}\right) = \frac{1}{2}\left\|x^{(i)} - \hat{x}^{(i)}\right\|^{2} = \frac{1}{2}\left\|x^{(i)} - f\!\left(w_2 \cdot f\!\left(w_1 \cdot x^{(i)} + b_1\right) + b_2\right)\right\|^{2} \qquad (6)$$
The AE network minimizes the error function L(w₁, w₂, b₁, b₂) by error back-propagation and gradient descent, so that the AE can adaptively learn the features of the samples without supervision;
The fourth step, taking the hidden-layer output of the first AE encoder as the input sample to build a second AE, repeating the third step, and building multiple AEs in the same way;
The fifth step, taking out the hidden layers of the AE encoders trained without supervision in the fourth step, stacking them, adding a Softmax classifier as the last layer, and performing supervised fine-tuning; the Softmax classifier performs classification and identification on the feature vectors; suppose the input sample in the training data is x with corresponding label y; then the probability that a sample is judged to belong to some class j is p(y = j | x); for a K-class classifier, the output is then a K-dimensional vector (whose elements sum to 1), as shown in formula (7):
$$h_{\theta}\!\left(x^{(i)}\right) = \begin{bmatrix} p\!\left(y^{(i)}=1 \mid x^{(i)};\theta\right) \\ p\!\left(y^{(i)}=2 \mid x^{(i)};\theta\right) \\ \vdots \\ p\!\left(y^{(i)}=k \mid x^{(i)};\theta\right) \end{bmatrix} = \frac{1}{\sum_{j=1}^{k}\exp\!\left(\theta_j^{T} x^{(i)}\right)} \begin{bmatrix} \exp\!\left(\theta_1^{T} x^{(i)}\right) \\ \exp\!\left(\theta_2^{T} x^{(i)}\right) \\ \vdots \\ \exp\!\left(\theta_k^{T} x^{(i)}\right) \end{bmatrix} \qquad (7)$$
where θ₁, θ₂, …, θ_k are the model parameters, and the factor $\frac{1}{\sum_{j=1}^{k}\exp(\theta_j^{T} x^{(i)})}$ is a normalizing function that normalizes the probability distribution so that all probabilities sum to 1;
During training, gradient descent is used to find the optimal parameters so that the Softmax cost function J(θ) reaches its minimum, completing the network training; the cost function J(θ) is shown in formula (8):
$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\!\left\{y^{(i)} = j\right\} \log \frac{\exp\!\left(\theta_j^{T} x^{(i)}\right)}{\sum_{l=1}^{k}\exp\!\left(\theta_l^{T} x^{(i)}\right)}\right] \qquad (8)$$
where 1{·} is an indicator function: when the expression in braces is true its value is 1, otherwise it is 0;
The sixth step, after several iterations, when the loss converges, network training is complete; the network performance is then evaluated on validation data; if the accuracy meets the requirement the network is output, otherwise the network parameters are changed and training continues.
2. The method as described in claim 1, characterized in that in the second step the frequency-domain signal length is 1000, the current time-domain signal length is 1000, and the sample x used as network input has length 2000.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711321716.1A CN107957551A (en) | 2017-12-12 | 2017-12-12 | Stacking noise reduction own coding Method of Motor Fault Diagnosis based on vibration and current signal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107957551A true CN107957551A (en) | 2018-04-24 |
Family
ID=61958617
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711321716.1A Pending CN107957551A (en) | 2017-12-12 | 2017-12-12 | Stacking noise reduction own coding Method of Motor Fault Diagnosis based on vibration and current signal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107957551A (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108760305A (en) * | 2018-06-13 | 2018-11-06 | 中车青岛四方机车车辆股份有限公司 | A kind of Bearing Fault Detection Method, device and equipment |
CN108919059A (en) * | 2018-08-23 | 2018-11-30 | 广东电网有限责任公司 | A kind of electric network failure diagnosis method, apparatus, equipment and readable storage medium storing program for executing |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4110716A1 (en) * | 1991-04-03 | 1992-10-08 | Jens Dipl Ing Weidauer | Asynchronous machine parameters identification - using computer model which derives modelling parameter from measured stator current, voltage and revolution rate to provide input to estimation process |
CN101034038A (en) * | 2007-03-28 | 2007-09-12 | 华北电力大学 | Failure testing method of asynchronous motor bearing |
CN102121967A (en) * | 2010-11-20 | 2011-07-13 | 太原理工大学 | Diagnostic device for timely prediction of the operating state of three-phase rotating electromechanical equipment |
WO2013093800A1 (en) * | 2011-12-21 | 2013-06-27 | Gyoeker Gyula Istvan | A method and an apparatus for machine diagnosing and condition monitoring based upon sensing and analysis of magnetic tension |
CN107247231A (en) * | 2017-07-28 | 2017-10-13 | 南京航空航天大学 | Wind turbine generator fault feature extraction method based on an OBLGWO-DBN model |
- 2017-12-12 CN CN201711321716.1A patent/CN107957551A/en active Pending
Non-Patent Citations (1)
Title |
---|
Wang Lihua et al.: "Fault diagnosis method for asynchronous motors using deep learning", Journal of Xi'an Jiaotong University *
Cited By (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109100648B (en) * | 2018-05-16 | 2020-07-24 | 上海海事大学 | CNN-ARMA-Softmax-based ocean current generator impeller winding fault fusion diagnosis method |
CN109100648A (en) * | 2018-05-16 | 2018-12-28 | 上海海事大学 | Ocean current generator impeller winding fault fusion diagnosis method based on CNN-ARMA-Softmax |
CN109000930A (en) * | 2018-06-04 | 2018-12-14 | 哈尔滨工业大学 | Turbogenerator performance degradation assessment method based on stacked denoising autoencoders |
CN108760305A (en) * | 2018-06-13 | 2018-11-06 | 中车青岛四方机车车辆股份有限公司 | Bearing fault detection method, apparatus, and equipment |
CN110619342A (en) * | 2018-06-20 | 2019-12-27 | 鲁东大学 | Rotating machinery fault diagnosis method based on deep transfer learning |
CN108919059A (en) * | 2018-08-23 | 2018-11-30 | 广东电网有限责任公司 | Power grid fault diagnosis method, apparatus, device, and readable storage medium |
CN109270921A (en) * | 2018-09-25 | 2019-01-25 | 深圳市元征科技股份有限公司 | Fault diagnosis method and device |
CN109145886A (en) * | 2018-10-12 | 2019-01-04 | 西安交通大学 | Asynchronous motor fault diagnosis method based on multi-source information fusion |
CN109060347A (en) * | 2018-10-25 | 2018-12-21 | 哈尔滨理工大学 | Planetary gear fault identification method based on stacked denoising autoencoders and a gated recurrent unit neural network |
CN109613428A (en) * | 2018-12-12 | 2019-04-12 | 广州汇数信息科技有限公司 | System and its application in a motor equipment fault detection method |
CN109858345A (en) * | 2018-12-25 | 2019-06-07 | 华中科技大学 | Intelligent fault diagnosis method suitable for pipe expansion equipment |
CN109858345B (en) * | 2018-12-25 | 2021-06-11 | 华中科技大学 | Intelligent fault diagnosis method suitable for pipe expansion equipment |
CN109858408A (en) * | 2019-01-17 | 2019-06-07 | 西安交通大学 | Ultrasonic signal processing method based on autoencoders |
CN109829538A (en) * | 2019-02-28 | 2019-05-31 | 苏州热工研究院有限公司 | Equipment health evaluation method and apparatus based on deep neural networks |
CN110059601A (en) * | 2019-04-10 | 2019-07-26 | 西安交通大学 | Intelligent fault diagnosis method with multi-feature extraction and fusion |
CN110068760A (en) * | 2019-04-23 | 2019-07-30 | 哈尔滨理工大学 | Induction motor fault diagnosis based on deep learning |
CN110286279A (en) * | 2019-06-05 | 2019-09-27 | 武汉大学 | Power electronic circuit fault diagnosis method based on extremely randomized trees and a stacked sparse autoencoding algorithm |
CN110286279B (en) * | 2019-06-05 | 2021-03-16 | 武汉大学 | Power electronic circuit fault diagnosis method based on extremely randomized trees and a stacked sparse autoencoding algorithm |
CN110458240A (en) * | 2019-08-16 | 2019-11-15 | 集美大学 | Three-phase bridge rectifier fault diagnosis method, terminal device, and storage medium |
WO2021128510A1 (en) * | 2019-12-27 | 2021-07-01 | 江苏科技大学 | Bearing defect identification method based on sdae and improved gwo-svm |
CN111157894A (en) * | 2020-01-14 | 2020-05-15 | 许昌中科森尼瑞技术有限公司 | Motor fault diagnosis method, device and medium based on convolutional neural network |
CN111539152A (en) * | 2020-01-20 | 2020-08-14 | 内蒙古工业大学 | Rolling bearing fault self-learning method based on two-stage twin convolutional neural network |
CN111539152B (en) * | 2020-01-20 | 2022-08-26 | 内蒙古工业大学 | Rolling bearing fault self-learning method based on two-stage twin convolutional neural network |
CN111310830A (en) * | 2020-02-17 | 2020-06-19 | 湖北工业大学 | Combine harvester blocking fault diagnosis system and method |
CN111310830B (en) * | 2020-02-17 | 2023-10-10 | 湖北工业大学 | Blocking fault diagnosis system and method for combine harvester |
CN111323220A (en) * | 2020-03-02 | 2020-06-23 | 武汉大学 | Fault diagnosis method and system for gearbox of wind driven generator |
CN111323220B (en) * | 2020-03-02 | 2021-08-10 | 武汉大学 | Fault diagnosis method and system for gearbox of wind driven generator |
CN111783531A (en) * | 2020-05-27 | 2020-10-16 | 福建亿华源能源管理有限公司 | Hydraulic turbine unit fault diagnosis method based on SDAE-IELM |
CN111783531B (en) * | 2020-05-27 | 2024-03-19 | 福建亿华源能源管理有限公司 | Hydraulic turbine unit fault diagnosis method based on SDAE-IELM |
CN111680665A (en) * | 2020-06-28 | 2020-09-18 | 湖南大学 | Data-driven motor mechanical fault diagnosis method using current signals |
CN112731137A (en) * | 2020-09-15 | 2021-04-30 | 华北电力大学(保定) | Joint stator and rotor fault diagnosis method for cage asynchronous motors based on stacked autoencoders and the light gradient boosting machine (LightGBM) algorithm |
CN112706901A (en) * | 2020-12-31 | 2021-04-27 | 华南理工大学 | Semi-supervised fault diagnosis method for main propulsion system of semi-submerged ship |
CN112706901B (en) * | 2020-12-31 | 2022-04-22 | 华南理工大学 | Semi-supervised fault diagnosis method for main propulsion system of semi-submerged ship |
CN113203914A (en) * | 2021-04-08 | 2021-08-03 | 华南理工大学 | Underground cable early fault detection and identification method based on DAE-CNN |
CN114692694B (en) * | 2022-04-11 | 2024-02-13 | 合肥工业大学 | Equipment fault diagnosis method based on feature fusion and integrated clustering |
CN114692694A (en) * | 2022-04-11 | 2022-07-01 | 合肥工业大学 | Equipment fault diagnosis method based on feature fusion and integrated clustering |
CN114861728A (en) * | 2022-05-17 | 2022-08-05 | 江苏科技大学 | Fault diagnosis method based on fused contractive stacked denoising autoencoder features |
CN114861728B (en) * | 2022-05-17 | 2024-08-06 | 江苏科技大学 | Fault diagnosis method based on fused contractive stacked denoising autoencoder features |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107957551A (en) | Motor fault diagnosis method using stacked denoising autoencoders based on vibration and current signals | |
Sun et al. | Sparse deep stacking network for fault diagnosis of motor | |
Li et al. | Understanding and improving deep learning-based rolling bearing fault diagnosis with attention mechanism | |
Lang et al. | Artificial intelligence-based technique for fault detection and diagnosis of EV motors: A review | |
CN111722145B (en) | Synchronous motor excitation winding turn-to-turn short circuit mild fault diagnosis method | |
Shao et al. | Learning features from vibration signals for induction motor fault diagnosis | |
CN106124212B (en) | Rolling bearing fault diagnosis method based on sparse autoencoder and support vector machine | |
Goumas et al. | Classification of washing machines vibration signals using discrete wavelet analysis for feature extraction | |
CN107702922B (en) | Rolling bearing fault diagnosis method based on LCD and stacked autoencoders | |
CN103728551B (en) | Analog circuit fault diagnosis method based on a cascaded ensemble classifier | |
CN107657250A (en) | Bearing fault detection and localization method, and system and method for implementing the detection and localization model | |
CN107632258A (en) | Wind turbine converter fault diagnosis method based on wavelet transform and DBN | |
Jiang et al. | Rolling bearing fault identification using multilayer deep learning convolutional neural network | |
CN115859077A (en) | Multi-feature fusion motor small sample fault diagnosis method under variable working conditions | |
CN112926728B (en) | Small sample turn-to-turn short circuit fault diagnosis method for permanent magnet synchronous motor | |
CN113255458A (en) | Bearing fault diagnosis method based on multi-view associated feature learning | |
Sabir et al. | Signal generation using 1d deep convolutional generative adversarial networks for fault diagnosis of electrical machines | |
CN115407197B (en) | Motor fault diagnosis method based on multi-head sparse self-encoder and Goertzel analysis | |
Ahmed et al. | Effects of deep neural network parameters on classification of bearing faults | |
Huang et al. | Research on fan vibration fault diagnosis based on image recognition | |
CN117828531A (en) | Bearing fault diagnosis method based on multi-sensor multi-scale feature fusion | |
Gangsar et al. | Diagnostics of combined mechanical and electrical faults of an electromechanical system for steady and ramp-up speeds | |
Kibrete et al. | Applications of artificial intelligence for fault diagnosis of rotating machines: A review | |
CN110779722B (en) | Rolling bearing fault diagnosis method based on encoder signal local weighting | |
Han et al. | A Study on Motor Poor Maintenance Detection Based on DT-CNN |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 2018-04-24 |