CN107957551A - Stacked denoising autoencoder motor fault diagnosis method based on vibration and current signals - Google Patents

Stacked denoising autoencoder motor fault diagnosis method based on vibration and current signals

Info

Publication number
CN107957551A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711321716.1A
Other languages
Chinese (zh)
Inventor
赵晓平
吴家新
周子贤
杨家巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201711321716.1A priority Critical patent/CN107957551A/en
Publication of CN107957551A publication Critical patent/CN107957551A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01R: MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00: Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/34: Testing dynamo-electric machines
    • G01R31/343: Testing dynamo-electric machines in operation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses a stacked denoising autoencoder (SDAE) motor fault diagnosis method based on vibration and current signals, which consists of five steps. Step 1: acquire the time-domain vibration and current signals of the motor under different faults, preprocess them, and use them as the network input. Step 2: determine the network parameters. Step 3: train layer by layer, using the hidden layer of each autoencoder (AE) as the input layer of the next AE, to obtain the final feature code, which is used to train the Softmax classifier. Step 4: fine-tune the whole network and check whether the expected accuracy is reached; if it is, training ends, otherwise adjust the network parameters and repeat Step 3. Step 5: the network construction is complete. The method builds a multi-layer SDAE network, takes the combination of the vibration frequency-domain signal and the current time-domain signal as input, trains the SDAE network and the classifier in turn, and fine-tunes the whole network with supervision, thereby achieving accurate motor fault diagnosis.

Description

Stacked denoising autoencoder motor fault diagnosis method based on vibration and current signals

Technical Field

The invention belongs to the technical field of motor fault diagnosis in industrial production, and in particular relates to a motor fault diagnosis method based on multi-source signals (vibration and current signals) and a stacked denoising autoencoder.

Background

Asynchronous motors are used ever more widely in modern production systems and are the main driving equipment of industrial production; once a fault occurs, it causes huge economic losses. An asynchronous motor is a composite electrical device made up of a stator, rotor, bearings, frame, fan and other parts. It contains several complex subsystems, so motor faults are diverse and their symptoms vary widely; the same symptom may have different causes, and the same fault may show different symptoms. There is no one-to-one correspondence between the fault features and fault types of an asynchronous motor; the relationship between them is strongly nonlinear. Effective motor fault diagnosis is therefore of great practical significance for avoiding serious failures and keeping mechanical equipment running normally.

Motor fault diagnosis belongs to the field of pattern recognition: features of the motor vibration signal are usually extracted first and then classified. Methods in use include BP neural networks, support vector machines (SVM) and radial basis function networks. In recent years, with the development of deep learning, deep networks have been widely applied to image recognition and speech recognition.

The stacked denoising autoencoder (SDAE) is a deep network composed of multiple autoencoders that can adaptively extract signal features without supervision. Through the fine-tuning mechanism of the stacked network, the network can also be trained with supervision to improve accuracy.

Summary of the Invention

As the scale of industrial production grows, each piece of industrial equipment needs more monitoring points, the sampling frequency of each point becomes higher and the data collection time becomes longer, so the amount of data obtained by monitoring systems keeps increasing and the field of machine health monitoring has entered the era of big data. Traditional motor fault diagnosis methods are validated on very small sample sets, which lose their practical significance against the background of mechanical "big data". Choosing a suitable fault diagnosis method and improving diagnostic accuracy therefore becomes particularly important.

To address the difficulty of diagnosing asynchronous-motor faults caused by the complex motor structure, non-stationary vibration signals and mechanical big data, the invention introduces deep learning theory and proposes a motor fault diagnosis method based on a stacked denoising autoencoder network. The method builds a multi-layer SDAE network, takes the combination of the vibration frequency-domain signal and the current time-domain signal as input, trains the SDAE network and the classifier in turn, and fine-tunes the whole network with supervision, thereby achieving accurate motor fault diagnosis.

The technical solution of the invention is as follows:

The input samples of the SDAE network should contain as many features of the fault signal as possible. The vibration signal carries rich bearing information and the current signal carries rich rotor information, so this patent combines the vibration frequency-domain signal with the current time-domain signal as the network input, as shown in Figure 1.

The SDAE motor fault network is trained in five steps:

Step 1: acquire the time-domain vibration and current signals of the motor under different faults, preprocess them, and use them as the network input;

Step 2: determine the network parameters (number of network layers, number of nodes in each layer, learning rate, number of iterations, etc.; an illustrative configuration sketch is given after this step list);

Step 3: train layer by layer, using the hidden layer of each autoencoder (AE) as the input layer of the next AE, to obtain the final feature code, which is used to train the Softmax classifier;

Step 4: fine-tune the whole network and check whether the expected accuracy is reached; if it is, training ends, otherwise adjust the network parameters and repeat Step 3;

Step 5: the network construction is complete.
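For concreteness, here is a minimal Python sketch of how the network parameters of Step 2 might be organized. All names and values are illustrative assumptions (the experiments reported below use a 2000-100-100-100-7 structure); they are not fixed by the method.

```python
# Illustrative hyperparameter set for the SDAE fault-diagnosis network.
# The values are assumptions for demonstration only.
sdae_config = {
    "layer_sizes": [2000, 100, 100, 100, 7],  # input, hidden DAE layers, softmax output
    "learning_rate": 0.01,
    "epochs_per_layer": 50,      # unsupervised pre-training iterations per DAE
    "fine_tune_epochs": 100,     # supervised fine-tuning iterations
    "corruption_prob": 0.1,      # probability of zeroing an input during denoising training
    "weight_decay": 1e-4,        # regularization coefficient lambda in the loss
}
```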

Beneficial Effects

SDAE was used to analyze and diagnose four kinds of samples: vibration time-domain signals, vibration frequency-domain signals, vibration time-domain + frequency-domain signals, and vibration frequency-domain + current time-domain signals. After many trials it was found, as shown in Figure 4, that with the vibration frequency-domain + current time-domain signal as the input sample and a 4-layer SDAE network (structure: 2000-100-100-100-7), the accuracy is clearly higher than with the other three kinds of samples, reaching 99.86% at best.

For comparison with traditional intelligent methods, the experiment also diagnosed the motor faults with two methods, EMD+SVM and diagnostic features+SVM, likewise using 75% of all samples for training and the remaining 25% for testing. The results are shown in Table 1.

Table 1. Diagnosis results of different methods

Although EMD+SVM and diagnostic features+SVM can diagnose motor faults fairly well with relatively high accuracy (90.15% and 93.65% respectively), SDAE can adaptively extract more precise feature representations through its deep network without supervision and then fine-tune the whole network with supervision, achieving intelligent and efficient motor fault diagnosis with an accuracy of 99.86%.

To compare the feature-extraction capability of the DAE and SDAE networks, the experiment trained a DAE network and an SDAE network (4 hidden layers) on vibration frequency-domain + current time-domain samples, and used principal component analysis (PCA) to extract two important components of the fourth-layer features (principal component x and principal component y) for visualization, as shown in Figure 5.

Figure 5(a) is the scatter plot of the DAE network features and Figure 5(b) is the scatter plot of the SDAE network features. The figure shows that the SDAE features of the different classes are clearly separated, whereas the DAE features overlap and cannot be clearly distinguished.
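The PCA visualization described above can be reproduced with a short script. The sketch below assumes scikit-learn and matplotlib and uses hypothetical variable names (layer4_features, labels); it is an illustration, not part of the patent.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_feature_scatter(layer4_features, labels):
    """Project the fourth-layer feature codes onto their first two principal
    components and draw a scatter plot like Figure 5 (illustrative sketch)."""
    components = PCA(n_components=2).fit_transform(np.asarray(layer4_features))
    plt.scatter(components[:, 0], components[:, 1], c=labels, s=8)
    plt.xlabel("principal component x")
    plt.ylabel("principal component y")
    plt.show()
```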

Brief Description of the Drawings

Figure 1 shows the splicing of the vibration frequency-domain signal and the current time-domain signal;

Figure 2 is the flow chart of motor fault diagnosis;

Figure 3 is a schematic diagram of a denoising autoencoder;

Figure 4 shows the diagnosis results of networks of different depths on different samples;

Figure 5 shows the feature scatter plots of the two networks: (a) the DAE feature scatter plot, (b) the SDAE feature scatter plot;

Figure 6 shows the supervised SDAE network during fine-tuning.

Detailed Description of the Embodiments

Embodiments of the invention are described in detail with reference to the accompanying drawings.

Step 1: collect data. The asynchronous motor of a power-transmission fault-diagnosis test bench is taken as the research object. The test bench consists of four parts: an asynchronous motor, a two-stage planetary gearbox, a fixed-shaft gearbox and a magnetic powder brake. Seven different fault states, listed in Table 2, are simulated by replacing the motor.

Table 2. The seven motor states

To ensure the diversity of the experimental data, ten different working conditions were simulated during data collection, corresponding to five speeds (speed-up/down, 3560 RPM, 3580 RPM, 3560 RPM, 3620 RPM) and two load states (loaded and unloaded). To account for the influence of sensor position, two acceleration sensors were mounted at the 12 o'clock and 9 o'clock positions on the front end of the motor, and a clamp current sensor simultaneously collected the current signal while the motor was running. The sensor sampling frequency was set to 5 kHz. When selecting data, 200 samples were used for each working condition, 100 from the acceleration sensor at the 12 o'clock position and 100 from the 9 o'clock position. The total number of samples for each fault is therefore 2000, and each sample corresponds to a vibration signal of 2000 points. The 2000 current-signal segments of the corresponding times were also selected, giving 14000 vibration time-domain signals and 14000 current time-domain signals of the corresponding times in total. 75% of each working condition of each fault was randomly selected as training samples and the remaining 25% as test samples.

Step 2: perform frequency-domain analysis of the collected vibration time-domain signals of the different faults with the fast Fourier transform, extract the frequency-domain signal (length 1000), and then splice it with the current time-domain signal (length 1000) in the manner of Figure 1 to form the network input sample x (length 2000).
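A minimal NumPy sketch of this preprocessing step, assuming illustrative names (vibration_segment, current_segment) and the 1000 + 1000 = 2000 layout described above:

```python
import numpy as np

def build_input_sample(vibration_segment, current_segment):
    """Splice the vibration frequency-domain signal with the current
    time-domain signal into one network input sample of length 2000.

    vibration_segment : 1-D array of 2000 vibration points (time domain)
    current_segment   : 1-D array of at least 1000 current points (time domain)
    """
    # FFT of the vibration signal; keep the first 1000 spectral magnitudes
    spectrum = np.abs(np.fft.fft(vibration_segment))[:1000]
    # Concatenate frequency-domain vibration with time-domain current
    return np.concatenate([spectrum, current_segment[:1000]])
```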

Step 3: before network training, the samples need to be normalized, as in Eq. (1):

X^* = \frac{x - \min}{\max - \min} \quad (1)

Then the denoising autoencoder is built; a single autoencoder is shown in Figure 3. Encoding propagates the sample x from the input layer to the hidden layer. To make the features learned by each hidden layer of the autoencoder (AE) more robust, noise is added to the training samples with a certain probability, i.e. the input data of each hidden layer are randomly set to zero. The corrupted data are then mapped through the sigmoid activation function, Eq. (2), into a k-dimensional vector h ∈ [0,1]^{k×1}, Eq. (3):

f(t) = \frac{1}{1 + e^{-t}} \quad (2)

h = f_{\theta_1}(x) = f(w_1 \cdot x + b_1) \quad (3)

where x is the input sample, f(·) is the activation function, θ_1 = {w_1, b_1} are the network parameters, w_1 is the weight matrix and b_1 is the bias.

Decoding propagates the feature code from the hidden layer to the output layer and maps it through the activation function into an m-dimensional vector \hat{x} that reconstructs the sample x, as in Eq. (4):

\hat{x} = f_{\theta_2}(h) = f(w_2 \cdot h + b_2) \quad (4)

where \hat{x} is the reconstruction of the sample x, f(·) is the activation function, θ_2 = {w_2, b_2} are the network parameters, w_2 is the weight matrix and b_2 is the bias.

The training goal of the AE network is to find a set of optimal parameters θ^* = {w_1^*, w_2^*, b_1^*, b_2^*} that makes the error between the output data and the input data as small as possible, i.e. to minimize the loss function L(w_1, w_2, b_1, b_2), whose expression is:

L(w_1, w_2, b_1, b_2) = \left[\frac{1}{n}\sum_{i=1}^{n} J(x^{(i)}, \hat{x}^{(i)})\right] + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} (W_{ij}^{(l)})^2 \quad (5)

where the first term on the right-hand side is the total error between the network input data and output data; the second term is a regularization constraint used to prevent overfitting; x^{(i)} and \hat{x}^{(i)} are the input vector and reconstruction vector of the i-th sample; and J(x^{(i)}, \hat{x}^{(i)}) is the mean squared error between x^{(i)} and \hat{x}^{(i)}, given by:

J(x^{(i)}, \hat{x}^{(i)}) = \frac{1}{2}\|x^{(i)} - \hat{x}^{(i)}\|^2 = \frac{1}{2}\|x^{(i)} - f(w_2 \cdot f(w_1 \cdot x^{(i)} + b_1) + b_2)\|^2 \quad (6)

The AE network minimizes the loss function L(w_1, w_2, b_1, b_2) by error backpropagation and gradient descent, so that the AE can learn the features of the samples adaptively and without supervision.
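To make the encode/decode/update cycle of Eqs. (2)-(6) concrete, here is a minimal NumPy sketch of one denoising autoencoder layer trained by per-sample gradient descent. The class and variable names are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

class DenoisingAutoencoder:
    """Minimal single-layer denoising autoencoder following Eqs. (2)-(6):
    masking noise, sigmoid encoder/decoder, squared-error loss with weight
    decay, trained by plain gradient descent (illustrative sketch)."""

    def __init__(self, n_in, n_hidden, corruption=0.1, lam=1e-4, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.01, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.01, (n_in, n_hidden))
        self.b2 = np.zeros(n_in)
        self.corruption, self.lam, self.lr = corruption, lam, lr
        self.rng = rng

    def encode(self, x):
        return sigmoid(self.w1 @ x + self.b1)          # Eq. (3)

    def decode(self, h):
        return sigmoid(self.w2 @ h + self.b2)          # Eq. (4)

    def train_step(self, x):
        # Masking noise: randomly set a fraction of the inputs to zero
        x_noisy = x * (self.rng.random(x.shape) > self.corruption)
        h = self.encode(x_noisy)
        x_hat = self.decode(h)
        loss = 0.5 * np.sum((x - x_hat) ** 2)           # Eq. (6)
        # Backpropagation through the two sigmoid layers
        delta2 = (x_hat - x) * x_hat * (1 - x_hat)
        grad_w2 = np.outer(delta2, h) + self.lam * self.w2
        grad_b2 = delta2
        delta1 = (self.w2.T @ delta2) * h * (1 - h)
        grad_w1 = np.outer(delta1, x_noisy) + self.lam * self.w1
        grad_b1 = delta1
        # Gradient-descent parameter update
        self.w1 -= self.lr * grad_w1
        self.b1 -= self.lr * grad_b1
        self.w2 -= self.lr * grad_w2
        self.b2 -= self.lr * grad_b2
        return loss
```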

Step 4: take the output of the hidden layer of the first AE encoder as the input sample, build the second AE, repeat Step 3, and so on to build multiple AEs.
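Greedy layer-wise pre-training can then be sketched as follows, reusing the DenoisingAutoencoder class above; the epoch count is an arbitrary illustrative choice.

```python
def pretrain_stack(samples, layer_sizes, **dae_kwargs):
    """Train one DAE per layer, feeding the hidden code of each DAE to the
    next one; layer_sizes is e.g. [2000, 100, 100, 100] (illustrative sketch)."""
    daes, inputs = [], list(samples)
    for n_in, n_hidden in zip(layer_sizes[:-1], layer_sizes[1:]):
        dae = DenoisingAutoencoder(n_in, n_hidden, **dae_kwargs)
        for _ in range(20):                      # illustrative epoch count
            for x in inputs:
                dae.train_step(x)
        # The hidden codes become the training input of the next layer
        inputs = [dae.encode(x) for x in inputs]
        daes.append(dae)
    return daes, inputs                          # inputs now holds the final feature codes
```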

Step 5: take out the hidden layers of the AE encoders pre-trained without supervision in Step 4, stack them layer by layer as in Figure 6, and add a Softmax classifier on the last layer for supervised fine-tuning. The Softmax classifier classifies the feature vectors. Assume that an input sample in the training data is x with label y; the probability that the sample is judged to belong to class j is p(y = j | x). For a K-class classifier the output is therefore a K-dimensional vector whose elements sum to 1, as in Eq. (7):

h_\theta(x^{(i)}) = \begin{bmatrix} p(y^{(i)}=1 \mid x^{(i)}; \theta) \\ p(y^{(i)}=2 \mid x^{(i)}; \theta) \\ \vdots \\ p(y^{(i)}=k \mid x^{(i)}; \theta) \end{bmatrix} = \frac{1}{\sum_{j=1}^{k}\exp(\theta_j^{T} x^{(i)})} \begin{bmatrix} \exp(\theta_1^{T} x^{(i)}) \\ \exp(\theta_2^{T} x^{(i)}) \\ \vdots \\ \exp(\theta_k^{T} x^{(i)}) \end{bmatrix} \quad (7)

where θ_1, θ_2, …, θ_k are the model parameters, and 1 / \sum_{j=1}^{k}\exp(\theta_j^{T} x^{(i)}) is the normalization term that normalizes the probability distribution so that all probabilities sum to 1.
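A small sketch of Eq. (7) in NumPy; the subtraction of the maximum score is an added numerical-stability detail, not part of the original formula.

```python
import numpy as np

def softmax_probs(theta, x):
    """Eq. (7): class-membership probabilities for one feature vector x.
    theta is a (K, d) matrix whose rows are the per-class parameter vectors."""
    scores = theta @ x
    scores = scores - scores.max()          # numerical stabilization (added detail)
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()    # K-dimensional vector summing to 1
```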

During training, gradient descent is used to find the optimal parameters that minimize the Softmax cost function J(θ), thereby completing the network training. The cost function J(θ) is given by Eq. (8):

J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\} \log\frac{\exp(\theta_j^{T} x^{(i)})}{\sum_{l=1}^{k}\exp(\theta_l^{T} x^{(i)})}\right] \quad (8)

where 1{·} is an indicator function: it equals 1 when the expression inside the braces is true and 0 otherwise.
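The cost of Eq. (8) can be sketched on top of softmax_probs above. Labels are assumed here to be integers 0..K-1, which is an implementation convention rather than part of the patent, and regularization is omitted.

```python
import numpy as np

def softmax_cost(theta, features, labels):
    """Eq. (8): average negative log-likelihood of the Softmax classifier
    over m samples (illustrative sketch)."""
    total = 0.0
    for x, y in zip(features, labels):
        total += np.log(softmax_probs(theta, x)[y])
    return -total / len(features)
```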

Step 6: iterate repeatedly; when the loss converges, the network training is complete. The validation set is then used to evaluate the network performance: if the accuracy meets the requirement, the network is output, otherwise the network parameters are changed and training continues.
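Finally, the evaluation loop of Step 6 might look like the sketch below, reusing the stack of DAEs and softmax_probs from the earlier sketches; all names are illustrative.

```python
import numpy as np

def evaluate(daes, theta, features, labels):
    """Classification accuracy of the stacked network on a held-out set:
    propagate each sample through the encoder chain, classify with Softmax,
    and compare with the true label (illustrative sketch)."""
    correct = 0
    for x, y in zip(features, labels):
        h = x
        for dae in daes:
            h = dae.encode(h)
        correct += int(np.argmax(softmax_probs(theta, h)) == y)
    return correct / len(features)
```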

Claims (2)

1. A stacked denoising autoencoder motor fault diagnosis method based on vibration and current signals, characterised in that it comprises the following steps:
Step 1: collect data. Take the asynchronous motor of a power-transmission fault-diagnosis test bench as the object; the test bench consists of an asynchronous motor, a two-stage planetary gearbox, a fixed-shaft gearbox and a magnetic powder brake. Simulate different fault states by replacing the motor; to ensure the diversity of the experimental data, randomly select 75% of each working condition of each fault as training samples and the remaining 25% as test samples.
Step 2: perform frequency-domain analysis of the collected vibration time-domain signals of the different faults with the fast Fourier transform (FFT) and extract the frequency-domain signal.
Step 3: before network training the samples must be normalized, as in Eq. (1):
X^* = \frac{x - \min}{\max - \min} \quad (1)
Then build the denoising autoencoder. Encoding propagates the sample x from the input layer to the hidden layer; to make the features learned by each hidden layer of the autoencoder (AE) more robust, noise is added to the training samples with a certain probability, i.e. the input data of each hidden layer are randomly set to zero. The corrupted data are then mapped through the sigmoid activation function, Eq. (2), into a k-dimensional vector h ∈ [0,1]^{k×1}, Eq. (3):
f(t) = \frac{1}{1 + e^{-t}} \quad (2)
h = f_{\theta_1}(x) = f(w_1 \cdot x + b_1) \quad (3)
where x is the input sample, f(·) is the activation function, θ_1 = {w_1, b_1} are the network parameters, w_1 is the weight matrix and b_1 is the bias.
Decoding propagates the feature code from the hidden layer to the output layer and maps it through the activation function into an m-dimensional vector \hat{x} that reconstructs the sample x, as in Eq. (4):
\hat{x} = f_{\theta_2}(h) = f(w_2 \cdot h + b_2) \quad (4)
where \hat{x} is the reconstruction of the sample x, f(·) is the activation function, θ_2 = {w_2, b_2} are the network parameters, w_2 is the weight matrix and b_2 is the bias.
The training goal of the AE network is to find a set of optimal parameters θ^* = {w_1^*, w_2^*, b_1^*, b_2^*} that makes the error between the output data and the input data as small as possible, i.e. to minimize the loss function L(w_1, w_2, b_1, b_2), whose expression is:
L(w_1, w_2, b_1, b_2) = \left[\frac{1}{n}\sum_{i=1}^{n} J(x^{(i)}, \hat{x}^{(i)})\right] + \frac{\lambda}{2}\sum_{l=1}^{n_l-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} (W_{ij}^{(l)})^2 \quad (5)
where the first term on the right-hand side is the total error between the network input data and output data; the second term is a regularization constraint used to prevent overfitting; x^{(i)} and \hat{x}^{(i)} are the input vector and reconstruction vector of the i-th sample; and J(x^{(i)}, \hat{x}^{(i)}) is the mean squared error between x^{(i)} and \hat{x}^{(i)}:
J(x^{(i)}, \hat{x}^{(i)}) = \frac{1}{2}\|x^{(i)} - \hat{x}^{(i)}\|^2 = \frac{1}{2}\|x^{(i)} - f(w_2 \cdot f(w_1 \cdot x^{(i)} + b_1) + b_2)\|^2 \quad (6)
The AE network minimizes the loss function L(w_1, w_2, b_1, b_2) by error backpropagation and gradient descent, so that the AE can learn the features of the samples adaptively and without supervision.
Step 4: take the output of the hidden layer of the first AE encoder as the input sample, build the second AE, repeat Step 3, and so on to build multiple AEs.
Step 5: take out the hidden layers of the AE encoders pre-trained without supervision in Step 4, stack them layer by layer, and add a Softmax classifier on the last layer for supervised fine-tuning. The Softmax classifier classifies the feature vectors. Assume that an input sample in the training data is x with label y; the probability that the sample is judged to belong to class j is p(y = j | x). For a K-class classifier the output is therefore a K-dimensional vector whose elements sum to 1, as in Eq. (7):
h_\theta(x^{(i)}) = \begin{bmatrix} p(y^{(i)}=1 \mid x^{(i)}; \theta) \\ p(y^{(i)}=2 \mid x^{(i)}; \theta) \\ \vdots \\ p(y^{(i)}=k \mid x^{(i)}; \theta) \end{bmatrix} = \frac{1}{\sum_{j=1}^{k}\exp(\theta_j^{T} x^{(i)})} \begin{bmatrix} \exp(\theta_1^{T} x^{(i)}) \\ \exp(\theta_2^{T} x^{(i)}) \\ \vdots \\ \exp(\theta_k^{T} x^{(i)}) \end{bmatrix} \quad (7)
where θ_1, θ_2, …, θ_k are the model parameters, and 1 / \sum_{j=1}^{k}\exp(\theta_j^{T} x^{(i)}) is the normalization term that normalizes the probability distribution so that all probabilities sum to 1.
During training, gradient descent is used to find the optimal parameters that minimize the Softmax cost function J(θ), thereby completing the network training. The cost function J(θ) is given by Eq. (8):
J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\} \log\frac{\exp(\theta_j^{T} x^{(i)})}{\sum_{l=1}^{k}\exp(\theta_l^{T} x^{(i)})}\right] \quad (8)
where 1{·} is an indicator function: it equals 1 when the expression inside the braces is true and 0 otherwise.
Step 6: iterate repeatedly; when the loss converges, the network training is complete. Use the validation set to evaluate the network performance; if the accuracy meets the requirement, output the network, otherwise change the network parameters and continue training.
2. The method according to claim 1, characterised in that in Step 2 the frequency-domain signal length is 1000, the current time-domain signal length is 1000, and the length of the sample x used as the network input is 2000.
CN201711321716.1A 2017-12-12 2017-12-12 Stacking noise reduction own coding Method of Motor Fault Diagnosis based on vibration and current signal Pending CN107957551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711321716.1A CN107957551A (en) 2017-12-12 2017-12-12 Stacking noise reduction own coding Method of Motor Fault Diagnosis based on vibration and current signal

Publications (1)

Publication Number Publication Date
CN107957551A true CN107957551A (en) 2018-04-24

Family

ID=61958617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711321716.1A Pending CN107957551A (en) 2017-12-12 2017-12-12 Stacking noise reduction own coding Method of Motor Fault Diagnosis based on vibration and current signal

Country Status (1)

Country Link
CN (1) CN107957551A (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108760305A (en) * 2018-06-13 2018-11-06 中车青岛四方机车车辆股份有限公司 A kind of Bearing Fault Detection Method, device and equipment
CN108919059A (en) * 2018-08-23 2018-11-30 广东电网有限责任公司 A kind of electric network failure diagnosis method, apparatus, equipment and readable storage medium storing program for executing
CN109000930A (en) * 2018-06-04 2018-12-14 哈尔滨工业大学 A kind of turbogenerator performance degradation assessment method based on stacking denoising self-encoding encoder
CN109060347A (en) * 2018-10-25 2018-12-21 哈尔滨理工大学 Based on the planetary gear fault recognition method for stacking de-noising autocoder and gating cycle unit neural network
CN109100648A (en) * 2018-05-16 2018-12-28 上海海事大学 Ocean current generator impeller based on CNN-ARMA-Softmax winds failure fusion diagnosis method
CN109145886A (en) * 2018-10-12 2019-01-04 西安交通大学 A kind of asynchronous machine method for diagnosing faults of Multi-source Information Fusion
CN109270921A (en) * 2018-09-25 2019-01-25 深圳市元征科技股份有限公司 A kind of method for diagnosing faults and device
CN109613428A (en) * 2018-12-12 2019-04-12 广州汇数信息科技有限公司 It is a kind of can be as system and its application in motor device fault detection method
CN109829538A (en) * 2019-02-28 2019-05-31 苏州热工研究院有限公司 A kind of equipment health Evaluation method and apparatus based on deep neural network
CN109858345A (en) * 2018-12-25 2019-06-07 华中科技大学 A kind of intelligent failure diagnosis method suitable for pipe expanding equipment
CN109858408A (en) * 2019-01-17 2019-06-07 西安交通大学 A kind of ultrasonic signal processing method based on self-encoding encoder
CN110059601A (en) * 2019-04-10 2019-07-26 西安交通大学 A kind of multi-feature extraction and the intelligent failure diagnosis method merged
CN110068760A (en) * 2019-04-23 2019-07-30 哈尔滨理工大学 A kind of Induction Motor Fault Diagnosis based on deep learning
CN110286279A (en) * 2019-06-05 2019-09-27 武汉大学 Fault Diagnosis Method for Power Electronic Circuits Based on Extreme Random Forest and Stacked Sparse Autoencoder Algorithm
CN110458240A (en) * 2019-08-16 2019-11-15 集美大学 A three-phase bridge rectifier fault diagnosis method, terminal equipment and storage medium
CN110619342A (en) * 2018-06-20 2019-12-27 鲁东大学 Rotary machine fault diagnosis method based on deep migration learning
CN111157894A (en) * 2020-01-14 2020-05-15 许昌中科森尼瑞技术有限公司 Motor fault diagnosis method, device and medium based on convolutional neural network
CN111310830A (en) * 2020-02-17 2020-06-19 湖北工业大学 Combine harvester blocking fault diagnosis system and method
CN111323220A (en) * 2020-03-02 2020-06-23 武汉大学 Fault diagnosis method and system for gearbox of wind driven generator
CN111539152A (en) * 2020-01-20 2020-08-14 内蒙古工业大学 Rolling bearing fault self-learning method based on two-stage twin convolutional neural network
CN111680665A (en) * 2020-06-28 2020-09-18 湖南大学 A Data-Driven Method for Motor Mechanical Fault Diagnosis Using Current Signals
CN111783531A (en) * 2020-05-27 2020-10-16 福建亿华源能源管理有限公司 Water turbine set fault diagnosis method based on SDAE-IELM
CN112706901A (en) * 2020-12-31 2021-04-27 华南理工大学 Semi-supervised fault diagnosis method for main propulsion system of semi-submerged ship
CN112731137A (en) * 2020-09-15 2021-04-30 华北电力大学(保定) Cage type asynchronous motor stator and rotor fault joint diagnosis method based on stack type self-coding and light gradient elevator algorithm
WO2021128510A1 (en) * 2019-12-27 2021-07-01 江苏科技大学 Bearing defect identification method based on sdae and improved gwo-svm
CN113203914A (en) * 2021-04-08 2021-08-03 华南理工大学 Underground cable early fault detection and identification method based on DAE-CNN
CN114692694A (en) * 2022-04-11 2022-07-01 合肥工业大学 Equipment fault diagnosis method based on feature fusion and integrated clustering
CN114861728A (en) * 2022-05-17 2022-08-05 江苏科技大学 Fault diagnosis method based on fusion-shrinkage stack denoising self-editor characteristic

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4110716A1 (en) * 1991-04-03 1992-10-08 Jens Dipl Ing Weidauer Asynchronous machine parameters identification - using computer model which derives modelling parameter from measured stator current, voltage and revolution rate to provide input to estimation process
CN101034038A (en) * 2007-03-28 2007-09-12 华北电力大学 Failure testing method of asynchronous motor bearing
CN102121967A (en) * 2010-11-20 2011-07-13 太原理工大学 Diagnostor for predicting operation state of three-phase rotating electromechanical equipment in time
WO2013093800A1 (en) * 2011-12-21 2013-06-27 Gyoeker Gyula Istvan A method and an apparatus for machine diagnosing and condition monitoring based upon sensing and analysis of magnetic tension
CN107247231A (en) * 2017-07-28 2017-10-13 南京航空航天大学 A kind of aerogenerator fault signature extracting method based on OBLGWO DBN models

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王丽华 et al., "采用深度学习的异步电机故障诊断方法" [Fault diagnosis method for asynchronous motors using deep learning], 《西安交通大学学报》 (Journal of Xi'an Jiaotong University) *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109100648B (en) * 2018-05-16 2020-07-24 上海海事大学 Fusion diagnosis method of turbine impeller winding fault based on CNN-ARMA-Softmax
CN109100648A (en) * 2018-05-16 2018-12-28 上海海事大学 Ocean current generator impeller based on CNN-ARMA-Softmax winds failure fusion diagnosis method
CN109000930A (en) * 2018-06-04 2018-12-14 哈尔滨工业大学 A kind of turbogenerator performance degradation assessment method based on stacking denoising self-encoding encoder
CN108760305A (en) * 2018-06-13 2018-11-06 中车青岛四方机车车辆股份有限公司 A kind of Bearing Fault Detection Method, device and equipment
CN110619342A (en) * 2018-06-20 2019-12-27 鲁东大学 Rotary machine fault diagnosis method based on deep migration learning
CN108919059A (en) * 2018-08-23 2018-11-30 广东电网有限责任公司 A kind of electric network failure diagnosis method, apparatus, equipment and readable storage medium storing program for executing
CN109270921A (en) * 2018-09-25 2019-01-25 深圳市元征科技股份有限公司 A kind of method for diagnosing faults and device
CN109145886A (en) * 2018-10-12 2019-01-04 西安交通大学 A kind of asynchronous machine method for diagnosing faults of Multi-source Information Fusion
CN109060347A (en) * 2018-10-25 2018-12-21 哈尔滨理工大学 Based on the planetary gear fault recognition method for stacking de-noising autocoder and gating cycle unit neural network
CN109613428A (en) * 2018-12-12 2019-04-12 广州汇数信息科技有限公司 It is a kind of can be as system and its application in motor device fault detection method
CN109858345A (en) * 2018-12-25 2019-06-07 华中科技大学 A kind of intelligent failure diagnosis method suitable for pipe expanding equipment
CN109858345B (en) * 2018-12-25 2021-06-11 华中科技大学 Intelligent fault diagnosis method suitable for pipe expansion equipment
CN109858408A (en) * 2019-01-17 2019-06-07 西安交通大学 A kind of ultrasonic signal processing method based on self-encoding encoder
CN109829538A (en) * 2019-02-28 2019-05-31 苏州热工研究院有限公司 A kind of equipment health Evaluation method and apparatus based on deep neural network
CN110059601A (en) * 2019-04-10 2019-07-26 西安交通大学 A kind of multi-feature extraction and the intelligent failure diagnosis method merged
CN110068760A (en) * 2019-04-23 2019-07-30 哈尔滨理工大学 A kind of Induction Motor Fault Diagnosis based on deep learning
CN110286279A (en) * 2019-06-05 2019-09-27 武汉大学 Fault Diagnosis Method for Power Electronic Circuits Based on Extreme Random Forest and Stacked Sparse Autoencoder Algorithm
CN110286279B (en) * 2019-06-05 2021-03-16 武汉大学 Power electronic circuit fault diagnosis method based on extreme tree and stack type sparse self-coding algorithm
CN110458240A (en) * 2019-08-16 2019-11-15 集美大学 A three-phase bridge rectifier fault diagnosis method, terminal equipment and storage medium
WO2021128510A1 (en) * 2019-12-27 2021-07-01 江苏科技大学 Bearing defect identification method based on sdae and improved gwo-svm
CN111157894A (en) * 2020-01-14 2020-05-15 许昌中科森尼瑞技术有限公司 Motor fault diagnosis method, device and medium based on convolutional neural network
CN111539152A (en) * 2020-01-20 2020-08-14 内蒙古工业大学 Rolling bearing fault self-learning method based on two-stage twin convolutional neural network
CN111539152B (en) * 2020-01-20 2022-08-26 内蒙古工业大学 Rolling bearing fault self-learning method based on two-stage twin convolutional neural network
CN111310830A (en) * 2020-02-17 2020-06-19 湖北工业大学 Combine harvester blocking fault diagnosis system and method
CN111310830B (en) * 2020-02-17 2023-10-10 湖北工业大学 Blocking fault diagnosis system and method for combine harvester
CN111323220A (en) * 2020-03-02 2020-06-23 武汉大学 Fault diagnosis method and system for gearbox of wind driven generator
CN111323220B (en) * 2020-03-02 2021-08-10 武汉大学 Fault diagnosis method and system for gearbox of wind driven generator
CN111783531A (en) * 2020-05-27 2020-10-16 福建亿华源能源管理有限公司 Water turbine set fault diagnosis method based on SDAE-IELM
CN111783531B (en) * 2020-05-27 2024-03-19 福建亿华源能源管理有限公司 Water turbine set fault diagnosis method based on SDAE-IELM
CN111680665A (en) * 2020-06-28 2020-09-18 湖南大学 A Data-Driven Method for Motor Mechanical Fault Diagnosis Using Current Signals
CN112731137A (en) * 2020-09-15 2021-04-30 华北电力大学(保定) Cage type asynchronous motor stator and rotor fault joint diagnosis method based on stack type self-coding and light gradient elevator algorithm
CN112706901A (en) * 2020-12-31 2021-04-27 华南理工大学 Semi-supervised fault diagnosis method for main propulsion system of semi-submerged ship
CN112706901B (en) * 2020-12-31 2022-04-22 华南理工大学 Semi-supervised fault diagnosis method for main propulsion system of semi-submerged ship
CN113203914A (en) * 2021-04-08 2021-08-03 华南理工大学 Underground cable early fault detection and identification method based on DAE-CNN
CN114692694B (en) * 2022-04-11 2024-02-13 合肥工业大学 An equipment fault diagnosis method based on feature fusion and ensemble clustering
CN114692694A (en) * 2022-04-11 2022-07-01 合肥工业大学 Equipment fault diagnosis method based on feature fusion and integrated clustering
CN114861728A (en) * 2022-05-17 2022-08-05 江苏科技大学 Fault diagnosis method based on fusion-shrinkage stack denoising self-editor characteristic
CN114861728B (en) * 2022-05-17 2024-08-06 江苏科技大学 A fault diagnosis method based on fusion-contraction stack denoising self-editing feature

Similar Documents

Publication Publication Date Title
CN107957551A (en) Stacking noise reduction own coding Method of Motor Fault Diagnosis based on vibration and current signal
Sun et al. Sparse deep stacking network for fault diagnosis of motor
CN106124212B (en) Fault Diagnosis of Roller Bearings based on sparse coding device and support vector machines
Wu et al. A hybrid classification autoencoder for semi-supervised fault diagnosis in rotating machinery
Wu et al. An integrated ensemble learning model for imbalanced fault diagnostics and prognostics
CN112906644B (en) Intelligent diagnosis method of mechanical fault based on deep transfer learning
CN107702922B (en) Fault Diagnosis Method of Rolling Bearing Based on LCD and Stacked Autoencoder
Shao et al. Learning features from vibration signals for induction motor fault diagnosis
CN112665852B (en) Variable working condition planetary gearbox fault diagnosis method and device based on deep learning
CN110657984B (en) Planetary gearbox fault diagnosis method based on reinforced capsule network
CN107657250B (en) Bearing fault detection and location method and implementation system and method of detection and location model
Zhao et al. Fault Diagnosis of Motor in Frequency Domain Signal by Stacked De-noising Auto-encoder.
CN107526853A (en) Rolling bearing fault mode identification method and device based on stacking convolutional network
CN112461537A (en) Wind power gear box state monitoring method based on long-time neural network and automatic coding machine
CN111275164A (en) Underwater robot propulsion system fault diagnosis method
CN110132554A (en) A Rotating Machinery Fault Diagnosis Method Based on Deep Laplacian Self-encoding
CN109932174A (en) A fault diagnosis method for gearboxes based on multi-task deep learning
CN116754230A (en) Bearing abnormality detection and fault diagnosis method based on deep convolution generation countermeasure network
CN105487009A (en) Motor fault diagnosis method based on k-means RBF neural network algorithm
CN112116029A (en) A Gearbox Intelligent Fault Diagnosis Method Based on Multi-scale Structure and Feature Fusion
CN115859077A (en) Multi-feature fusion motor small sample fault diagnosis method under variable working conditions
CN114091504A (en) Rotary machine small sample fault diagnosis method based on generation countermeasure network
Afrasiabi et al. Two-stage deep learning-based wind turbine condition monitoring using SCADA data
CN114331214A (en) Domain-adaptive bearing voiceprint fault diagnosis method and system based on reinforcement learning
CN112926728B (en) A small sample inter-turn short circuit fault diagnosis method for permanent magnet synchronous motor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20180424)