CN114383844B - A Fault Diagnosis Method for Rolling Bearings Based on Distributed Deep Neural Network - Google Patents

A Fault Diagnosis Method for Rolling Bearings Based on Distributed Deep Neural Network

Info

Publication number
CN114383844B
CN114383844B (application CN202111529933.6A; earlier publication CN114383844A)
Authority
CN
China
Prior art keywords
model
sample
neural network
branch
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111529933.6A
Other languages
Chinese (zh)
Other versions
CN114383844A (en
Inventor
丁华
吕彦宝
牛锐祥
孙晓春
王焱
李宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN202111529933.6A priority Critical patent/CN114383844B/en
Publication of CN114383844A publication Critical patent/CN114383844A/en
Application granted granted Critical
Publication of CN114383844B publication Critical patent/CN114383844B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 13/00 Testing of machine parts
    • G01M 13/04 Bearings
    • G01M 13/045 Acoustic or vibration analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 90/00 Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Testing Of Devices, Machine Parts, Or Other Structures Thereof (AREA)

Abstract

The invention belongs to the technical field of intelligent mechanical manufacturing and specifically provides a rolling bearing fault diagnosis method based on a distributed deep neural network. S1: construct the dataset. S2: build the distributed deep neural network model. S3: jointly train the branch model and the backbone model. S4: perform model inference. S5: identify faults by aggregating the inference results and communication volume of the branch and backbone models and outputting the final fault diagnosis accuracy and the communication volume consumed. The model built by this method automatically extracts features from raw vibration signals without manual feature selection or denoising, and achieves high-accuracy, low-communication, low-latency rolling bearing fault diagnosis under variable-load and high-noise conditions.

Description

A Fault Diagnosis Method for Rolling Bearings Based on a Distributed Deep Neural Network

Technical Field

The invention belongs to the technical field of intelligent mechanical manufacturing, and specifically relates to a rolling bearing fault diagnosis method based on a distributed deep neural network.

Background Art

As a key component of rotating machinery, rolling bearings are widely used across industries. In practice, the bearing fault vibration signal changes continuously with the load, and it is weak and easily masked by strong interference. Extracting fault features from bearing vibration signals under variable load and strong noise, and diagnosing faults promptly and effectively, can prevent faults from deteriorating further. In recent years, data-driven fault diagnosis has therefore become an important research focus in this field. Data-driven approaches mainly comprise methods based on signal analysis, machine learning, and deep learning.

Bearing fault diagnosis based on signal analysis mainly includes Fourier analysis, wavelet transform, cepstrum analysis, and empirical mode decomposition. These methods can only identify certain specific working conditions effectively; their generalization ability is low and they are easily affected by other factors.

Bearing fault diagnosis based on machine learning mainly includes support vector machines, extreme learning machines, and artificial neural networks. Although these methods can identify multiple fault types effectively, they require extracting specific features from the fault signal, a process that depends heavily on manual experience, is computationally complex and time-consuming, and cannot fully characterize all fault types, so it has clear limitations.

Bearing fault diagnosis based on deep learning mainly includes convolutional neural networks, deep residual networks, deep Boltzmann machines, deep belief networks, and stacked autoencoders. Although these methods achieve automatic fault feature extraction and multi-class fault identification, with the massive data produced by sensors and ever-deeper networks, centralized cloud-based deep learning models suffer from diagnosis latency and significantly increased communication costs.

Therefore, how to diagnose bearing faults effectively is an urgent problem to be solved.

Summary of the Invention

To solve the above problems, the present invention provides a rolling bearing fault diagnosis method based on a distributed deep neural network.

The present invention adopts the following technical solution: a rolling bearing fault diagnosis method based on a distributed deep neural network, comprising the following steps.

S1: Dataset construction. First, an acceleration sensor mounted above the bearing housing collects noise-free vibration acceleration signals of the faulty bearing, and additive white Gaussian noise is added to them. The vibration signal is then segmented at a fixed number of data points, converted into two-dimensional images, and labeled with the true fault classes to build the dataset. Finally, the dataset is split into a training set and a test set at a preset ratio.

S2: Construction of the distributed deep neural network model. The model framework comprises one sample input point, one shared convolution block, one branch model, one backbone model, and two sample-diagnosis output points.

S3: Joint training of the branch model and the backbone model. During training, the loss values of the cross-entropy loss functions at the sample-diagnosis output points are weighted and summed, so that the whole network can be trained jointly and each output point reaches the loss value appropriate to its depth, making the predicted values approach the true values as closely as possible.

S4: Model inference. The branch model performs fast initial feature extraction on the input test-set sample images. If the sample-diagnosis output point of the branch model is confident in its prediction, the branch exit point lets the sample exit and outputs the classification result; otherwise, the image features of the non-exited samples in the shared convolution block are passed to the backbone model for further feature extraction and classification.

S5: Fault identification. The inference results and communication volume of the branch model and the backbone model are aggregated, and the final fault diagnosis accuracy and the communication volume consumed are output.

The steps for adding additive white Gaussian noise to the noise-free vibration signals of each load in step S1 are as follows:

S11: Different noise conditions are simulated by adjusting the signal-to-noise ratio, whose decibel form is

$SNR_{dB} = 10 \log_{10}\left(\frac{P_{signal}}{P_{noise}}\right)$

$P_{signal} = \frac{1}{N}\sum_{i=1}^{N} x_i^2$

where $P_{signal}$ is the signal power, $P_{noise}$ is the noise power, $x_i$ is the vibration signal value, and $N$ is the vibration signal length.

S12: For a zero-mean signal with known variance, the power can be represented by the variance $\sigma^2$, so standard normally distributed noise has power 1. First the power of the original signal $x_i$ is computed, then the power $P_{noise}$ of the noise signal required for the desired signal-to-noise ratio is computed, and finally additive white Gaussian noise is generated by the formulas below and added to the original signal $x_i$ so that the result has the desired signal-to-noise ratio. The vibration signal obtained by adding additive white Gaussian noise to the original vibration signal is

$X = M + x_i$

$M = \sqrt{P_{noise}} \cdot randn(1, N)$

where $M$ is the white Gaussian noise and randn denotes a function that generates standard normally distributed random numbers or matrices.
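As a rough illustration of steps S11 and S12, the following Python/NumPy sketch adds white Gaussian noise at a requested SNR; the function name and the synthetic test signal are illustrative, not part of the patent:

```python
import numpy as np

def add_awgn(x, snr_db):
    """Add white Gaussian noise to signal x so the result has the requested SNR (dB)."""
    p_signal = np.mean(x ** 2)                  # signal power, (1/N) * sum(x_i^2)
    p_noise = p_signal / (10 ** (snr_db / 10))  # noise power implied by SNR_dB = 10*log10(Ps/Pn)
    m = np.sqrt(p_noise) * np.random.randn(len(x))  # scaled standard-normal noise
    return x + m

# Example: corrupt a synthetic vibration segment at -3 dB SNR
x = np.sin(np.linspace(0, 20 * np.pi, 784))
x_noisy = add_awgn(x, snr_db=-3.0)
```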

In step S1, the vibration signal is segmented at a fixed number of data points and converted into a two-dimensional image. The conversion (min-max normalization to the 0-255 gray range) can be expressed as

$P(j,k) = round\left(\frac{L((j-1)\times K + k) - \min(L)}{\max(L) - \min(L)} \times 255\right),\quad j,k = 1, 2, \dots, K$

where $P$ is the pixel intensity of the two-dimensional image, $L$ is the value of the original vibration signal with additive white Gaussian noise, and $K$ is the side length of the two-dimensional image.
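A matching sketch of this grayscale conversion, assuming 784-point segments mapped to 28×28 images as in the embodiment below; names are illustrative:

```python
import numpy as np

def segment_to_image(l, k=28):
    """Map a k*k-point noisy vibration segment to a k x k grayscale image."""
    l = np.asarray(l, dtype=np.float64)[: k * k]
    p = (l - l.min()) / (l.max() - l.min()) * 255.0    # min-max normalize to [0, 255]
    return np.round(p).reshape(k, k).astype(np.uint8)  # row-major fill: P(j,k) = L((j-1)*K + k)

image = segment_to_image(np.random.randn(784))
print(image.shape)  # (28, 28)
```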

In step S2, the key structural choices of the distributed deep neural network are as follows (a minimal code sketch follows the list):

1) Feature vectors of the input sample image are extracted by stacking 3×3 convolution kernels, which enlarges the receptive field and improves the nonlinear expressive power of the convolutional layers;

2) The number of shared convolution kernels is set to 1, which reduces the communication volume between the shared convolution block and the adjacent backbone convolution block while preserving the original image features as much as possible;

3) A convolutional bag-of-features (CBoF) layer is placed after the last convolution block of the branch in place of a fully connected layer, quantizing the feature vectors finally output by the convolution block and reducing the number of model parameters;

4) The backbone uses residual network blocks and global average pooling, which removes the fully connected layer, improves the flow of feature vectors, and reduces the number of model parameters.
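A minimal PyTorch sketch of this two-exit layout, assuming 28×28 single-channel inputs and 10 classes; the branch's convolutional bag-of-features layer is approximated here by global average pooling, and the layer widths are illustrative rather than the patent's exact configuration:

```python
import torch
import torch.nn as nn

class DistributedDDNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # shared block: a single 3x3 kernel keeps the feature map sent to the backbone small
        self.shared = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.ReLU())
        # branch (edge side): shallow stacked 3x3 convolutions + lightweight classifier
        self.branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, num_classes))
        # backbone (cloud side): deeper stacked convolutions + global average pooling
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        shared = self.shared(x)
        return self.branch(shared), self.backbone(shared)  # two exit logits

model = DistributedDDNN()
branch_logits, backbone_logits = model(torch.randn(4, 1, 28, 28))
```

With a single shared kernel the feature map forwarded to the backbone has one channel, which is consistent with the 4×28×28 B per-sample communication figure quoted in the experiments.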

The joint training of the model in step S3 proceeds as follows:

S31: The branch model and the backbone model each have a classifier, and each sample-diagnosis output point is optimized with the cross-entropy loss

$L(\hat{y}, y; \theta) = -\frac{1}{|C|}\sum_{c \in C} y_c \log \hat{y}_c$

$\hat{y} = softmax(z) = \frac{e^{z}}{\sum_{c \in C} e^{z_c}}$

$z = f_n(X; \theta)$

where $X$ is the input sample, $y$ is the true fault label of the sample, $\hat{y}$ is the predicted fault label, $C$ is the label set, $f_n$ denotes the computation performed on a sample from the network input to the n-th exit, and $\theta$ denotes the parameters (weights, biases, etc.) of that part of the network;

S32: The network is trained by minimizing the weighted sum of the losses at the sample-diagnosis output points, and the parameters of the distributed neural network are updated with SGD. The loss function of the distributed neural network is

$L(\hat{y}, y; \theta) = \sum_{n=1}^{N} w_n L(\hat{y}_{exit_n}, y; \theta)$

where $N$ is the number of classification exits, $w_n$ is the weight of each exit, and $\hat{y}_{exit_n}$ is the estimate at the n-th exit.
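A compact joint-training step under the same assumptions, reusing the DistributedDDNN sketch above; the 0.2/0.8 exit weights and SGD settings mirror the hyperparameters reported in the experimental section, while reading the learning-rate decay value as weight decay is an assumption:

```python
import torch
import torch.nn as nn

# `model` is the DistributedDDNN instance from the previous sketch
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
w_branch, w_backbone = 0.2, 0.8  # per-exit loss weights

def train_step(images, labels):
    optimizer.zero_grad()
    branch_logits, backbone_logits = model(images)
    loss = w_branch * criterion(branch_logits, labels) + \
           w_backbone * criterion(backbone_logits, labels)  # weighted sum over exits
    loss.backward()
    optimizer.step()
    return loss.item()

loss = train_step(torch.randn(16, 1, 28, 28), torch.randint(0, 10, (16,)))
```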

In step S4, the confidence measure of a sample is used to decide whether the branch exit point is confident in its prediction: if the confidence measure is smaller than a given threshold the prediction is considered confident; otherwise it is not. The confidence measure of a sample is computed as

$\eta(x) = -\sum_{c \in C} x_c \log x_c$

where $C$ is the set of all true labels and $x$ is the probability vector, with $\sum_{c \in C} x_c = 1$.

In step S5, the communication volume for model fault identification is computed as

$Communication = (1 - l) \times f \times f \times o \times 4$

where $l$ is the fraction of input samples that exit at the branch, $f$ is the side length of the feature map output by the shared convolution block to the backbone model, $o$ is the number of channels of that feature map, and the constant 4 reflects that a 32-bit floating-point number occupies 4 bytes on a typical 64-bit Windows system.
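As a quick check of the formula with the 28×28 single-channel feature map and the 47.98% branch-exit rate reported in the experiments below:

```python
def avg_communication_bytes(exit_fraction, feature_size, channels, bytes_per_value=4):
    """Average bytes sent to the backbone per input sample."""
    return (1.0 - exit_fraction) * feature_size * feature_size * channels * bytes_per_value

print(avg_communication_bytes(0.4798, 28, 1))  # ~1631 bytes vs. 3136 bytes if nothing exits early
```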

Compared with the prior art, the present invention has the following beneficial effects:

The invention converts rolling bearing vibration signals into grayscale images to build image samples. It adds a branch model at the lower layers of the network so that simple samples can exit early, maximizing the use of the backbone model's computing resources, and it replaces the branch model's fully connected layer with a convolutional bag-of-features layer to compensate for the limited computing power available at the branch. Through the collaborative computation of the branch and backbone models, different fault types and fault severities of rolling bearings are identified. The model built by this method automatically extracts features from the raw vibration signal without manual feature selection or denoising, and achieves high-accuracy, low-communication, low-latency rolling bearing fault diagnosis under variable-load and high-noise conditions.

Brief Description of the Drawings

Fig. 1 is the fault diagnosis flow chart of the method of the invention;

Fig. 2 is a schematic diagram of converting vibration signals into images in the method of the invention;

Fig. 3 shows the two-dimensional images corresponding to each fault type in the dataset of the method of the invention;

Fig. 4 is a schematic diagram of the distributed deep neural network structure of the method of the invention;

Fig. 5 shows the confusion matrices for branch-only inference, backbone-only inference, and collaborative branch-backbone inference;

Fig. 6 shows the t-SNE visualization results for branch-only inference, backbone-only inference, and collaborative branch-backbone inference.

Detailed Description of Embodiments

A rolling bearing fault diagnosis method based on a distributed deep neural network comprises the following steps.

S1: Dataset construction. First, an acceleration sensor mounted above the bearing housing collects noise-free vibration acceleration signals of the faulty bearing, and additive white Gaussian noise is added to them. The vibration signal is then segmented at a fixed number of data points, converted into two-dimensional images, and labeled with the true fault classes to build the dataset. Finally, the dataset is split into a training set and a test set at a preset ratio.

S2: Construction of the distributed deep neural network model. The model framework comprises one sample input point, one shared convolution block, one branch model, one backbone model, and two sample-diagnosis output points.

S3: Joint training of the branch model and the backbone model. During training, the loss values of the cross-entropy loss functions at the sample-diagnosis output points are weighted and summed, so that the whole network can be trained jointly and each output point reaches the loss value appropriate to its depth, making the predicted values approach the true values as closely as possible.

S4: Model inference. The branch model performs fast initial feature extraction on the input test-set sample images. If the sample-diagnosis output point of the branch model is confident in its prediction, the branch exit point lets the sample exit and outputs the classification result; otherwise, the image features of the non-exited samples in the shared convolution block are passed to the backbone model for further feature extraction and classification.

S5: Fault identification. The inference results and communication volume of the branch model and the backbone model are aggregated, and the final fault diagnosis accuracy and the communication volume consumed are output.

The steps for adding additive white Gaussian noise to the noise-free vibration signals of each load in step S1 are as follows:

S11: Different noise conditions are simulated by adjusting the signal-to-noise ratio, whose decibel form is

$SNR_{dB} = 10 \log_{10}\left(\frac{P_{signal}}{P_{noise}}\right)$

$P_{signal} = \frac{1}{N}\sum_{i=1}^{N} x_i^2$

where $P_{signal}$ is the signal power, $P_{noise}$ is the noise power, $x_i$ is the vibration signal value, and $N$ is the vibration signal length.

S12: For a zero-mean signal with known variance, the power can be represented by the variance $\sigma^2$, so standard normally distributed noise has power 1. First the power of the original signal $x_i$ is computed, then the power $P_{noise}$ of the noise signal required for the desired signal-to-noise ratio is computed, and finally additive white Gaussian noise is generated by the formulas below and added to the original signal $x_i$ so that the result has the desired signal-to-noise ratio. The vibration signal obtained by adding additive white Gaussian noise to the original vibration signal is

$X = M + x_i$

$M = \sqrt{P_{noise}} \cdot randn(1, N)$

where $M$ is the white Gaussian noise and randn denotes a function that generates standard normally distributed random numbers or matrices.

In step S1, the vibration signal is segmented at a fixed number of data points and converted into a two-dimensional image. The conversion (min-max normalization to the 0-255 gray range) can be expressed as

$P(j,k) = round\left(\frac{L((j-1)\times K + k) - \min(L)}{\max(L) - \min(L)} \times 255\right),\quad j,k = 1, 2, \dots, K$

where $P$ is the pixel intensity of the two-dimensional image, $L$ is the value of the original vibration signal with additive white Gaussian noise, and $K$ is the side length of the two-dimensional image.

In step S2, the key structural choices of the distributed deep neural network are as follows:

1) Feature vectors of the input sample image are extracted by stacking 3×3 convolution kernels, which enlarges the receptive field and improves the nonlinear expressive power of the convolutional layers;

2) The number of shared convolution kernels is set to 1, which reduces the communication volume between the shared convolution block and the adjacent backbone convolution block while preserving the original image features as much as possible;

3) A convolutional bag-of-features (CBoF) layer is placed after the last convolution block of the branch in place of a fully connected layer, quantizing the feature vectors finally output by the convolution block and reducing the number of model parameters;

4) The backbone uses residual network blocks and global average pooling, which removes the fully connected layer, improves the flow of feature vectors, and reduces the number of model parameters.

The joint training of the model in step S3 proceeds as follows:

S31: The branch model and the backbone model each have a classifier, and each sample-diagnosis output point is optimized with the cross-entropy loss

$L(\hat{y}, y; \theta) = -\frac{1}{|C|}\sum_{c \in C} y_c \log \hat{y}_c$

$\hat{y} = softmax(z) = \frac{e^{z}}{\sum_{c \in C} e^{z_c}}$

$z = f_n(X; \theta)$

where $X$ is the input sample, $y$ is the true fault label of the sample, $\hat{y}$ is the predicted fault label, $C$ is the label set, $f_n$ denotes the computation performed on a sample from the network input to the n-th exit, and $\theta$ denotes the parameters (weights, biases, etc.) of that part of the network;

S32: The network is trained by minimizing the weighted sum of the losses at the sample-diagnosis output points, and the parameters of the distributed neural network are updated with SGD. The loss function of the distributed neural network is

$L(\hat{y}, y; \theta) = \sum_{n=1}^{N} w_n L(\hat{y}_{exit_n}, y; \theta)$

where $N$ is the number of classification exits, $w_n$ is the weight of each exit, and $\hat{y}_{exit_n}$ is the estimate at the n-th exit.

In step S4, the confidence measure of a sample is used to decide whether the branch exit point is confident in its prediction: if the confidence measure is smaller than a given threshold the prediction is considered confident; otherwise it is not. The confidence measure of a sample is computed as

$\eta(x) = -\sum_{c \in C} x_c \log x_c$

where $C$ is the set of all true labels and $x$ is the probability vector, with $\sum_{c \in C} x_c = 1$.

In step S5, the communication volume for model fault identification is computed as

$Communication = (1 - l) \times f \times f \times o \times 4$

where $l$ is the fraction of input samples that exit at the branch, $f$ is the side length of the feature map output by the shared convolution block to the backbone model, $o$ is the number of channels of that feature map, and the constant 4 reflects that a 32-bit floating-point number occupies 4 bytes on a typical 64-bit Windows system.

Experimental Case

Experimental Data

The experimental dataset is the public bearing dataset from Case Western Reserve University. The experiments use the drive-end bearing data sampled at 12 kHz. The dataset contains three fault types, each with three damage sizes, giving nine fault states plus one normal state. The three fault types are roller fault (RF), outer race fault (OF), and inner race fault (IF), with damage sizes of 0.18 mm, 0.36 mm, and 0.54 mm. White Gaussian noise with signal-to-noise ratios of -3 dB, 0 dB, 3 dB, 6 dB, and 9 dB is added to the drive-end vibration signals collected under four loads (0, 1, 2, 3 HP). The noisy variable-load vibration signals are segmented every 784 data points and converted into 28×28 two-dimensional images; the converted images are shown in Fig. 3. The dataset contains 45,396 samples in total and is split into a training set and a test set at a ratio of 5:1. The detailed sample composition is given in Table 1.

Table 1 (sample composition)
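For illustration, samples of this kind could be assembled from one recording as follows, reusing the add_awgn and segment_to_image helpers sketched earlier; the label handling and the synthetic signal are illustrative:

```python
import numpy as np

def build_samples(signal, label, snr_db, seg_len=784, k=28):
    """Cut one noisy recording into labeled 28 x 28 image samples."""
    noisy = add_awgn(np.asarray(signal, dtype=np.float64), snr_db)
    n_seg = len(noisy) // seg_len
    images = [segment_to_image(noisy[i * seg_len:(i + 1) * seg_len], k) for i in range(n_seg)]
    return np.stack(images), np.full(n_seg, label)

# Example: one synthetic recording, label 3, at 0 dB SNR, then a 5:1 train/test split
x, y = build_samples(np.random.randn(50_000), label=3, snr_db=0.0)
split = int(len(x) * 5 / 6)
x_train, x_test = x[:split], x[split:]
```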

Model Structure

The model structure is shown in Fig. 4. The framework comprises one input point, one shared convolution block, one branch model, one backbone model, and two exit points. The model extracts feature vectors from the input image with stacked 3×3 convolution kernels to enlarge the receptive field and improve the nonlinear expressive power of the convolutional layers. The number of shared convolution kernels is set to 1 to reduce the communication between the shared convolution block and the adjacent backbone convolution block while preserving the original image features as much as possible. The CBoF layer is placed after the last convolution block of the branch in place of a fully connected layer, quantizing the feature vectors output by the convolution block and reducing the number of model parameters. The backbone uses residual network blocks and global average pooling, which removes the fully connected layer, improves the flow of feature vectors, and reduces the number of model parameters. The model parameters are listed in Table 2.

Table 2 (model parameters)

Model Training

The losses of the two exits are weighted and summed for joint training. The model hyperparameters are set as follows: backbone exit weight $w_{backbone} = 0.8$, branch exit weight $w_{branch} = 0.2$, batch size 16, SGD optimizer with momentum 0.9, initial learning rate 0.1, learning-rate decay 0.0001, and 100 training epochs.

(1) Test of the proposed method in mixed variable-load, multi-noise scenarios

Table 3 gives the inference accuracy and communication cost of the trained model in the variable-load multi-noise environment and in variable-load single-noise environments. In the variable-load multi-noise test, the model branch handles 47.98% of the samples with 100% recognition accuracy, and the overall recognition accuracy of the model reaches 99.27%, demonstrating good robustness to interference. In the variable-load single-noise tests, the number of samples handled by the branch is inversely related to the communication cost required during collaborative inference; the number of samples handled by the branch differs across noise levels, but the branch recognition accuracy reaches 100% in all of them. Noise at -3 dB interferes with inference the most, yet the overall recognition accuracy still remains at 96%. As the test environment changes from -3 dB to noise-free, the overall inference accuracy keeps improving and the communication cost keeps decreasing.

Table 3 (inference accuracy and communication cost)

(2) Comparison of the proposed method with other neural network models

Table 4 gives the inference results of the constructed model and other neural network models in four respects: inference speed, model parameters, communication cost, and recognition accuracy in variable-load multi-noise and variable-load single-noise environments. T=0 denotes backbone-only inference, T=1 denotes branch-only inference, and the combined setting denotes collaborative branch-backbone inference (DDNN (collaborative) below). From the results of the individually inferring models, DDNN (T=0) outperforms the other models in recognition accuracy under both variable-load multi-noise and variable-load single-noise environments; although its inference speed is slightly lower, its parameter count is only 54%, 37%, and 20% of that of AlexNet, LeNet-5, and Vgg16, respectively. DDNN (T=1) has the fewest parameters and the fastest inference but the lowest recognition accuracy. Comparing branch-only, backbone-only, and collaborative inference, DDNN (collaborative) adds only 560 parameters while maintaining recognition accuracy; relative to backbone-only inference, its inference speed improves by 18% and its communication cost drops by 32% (the average communication volume of backbone-only inference is 4×28×28 B), whereas branch-only inference has the worst accuracy.

Table 4 (comparison with other neural network models)

Confusion Matrix Analysis

The confusion matrices for branch-only inference, backbone-only inference, and collaborative branch-backbone inference are shown in Fig. 5. The overall accuracy of branch-only inference is only 91.74%; label 8 is the most easily misidentified and label 0 the least, with misidentification rates of 28.17% and 0, respectively. The overall accuracy of the latter two is 99.27%; label 8 is again the most easily misidentified and labels 0, 1, 3, 6, and 7 the least, with misidentification rates of 3.33% and 0, respectively.

t-SNE Visualization Analysis

The t-SNE visualizations for branch-only inference, backbone-only inference, and collaborative branch-backbone inference are shown in Fig. 6. In branch-only inference, a large number of labeled samples show clearly blurred decision boundaries. In the collaborative inference results, the samples handled by the branch show no blurred decision boundaries, while the results for samples handled by the backbone are similar to those of backbone-only inference, with only a very small number of labeled samples showing blurred decision boundaries.

Aiming at the low accuracy and large latency of rolling bearing fault diagnosis under the complex working conditions of variable load and strong noise, the present invention proposes a rolling bearing fault diagnosis method based on a distributed deep neural network. The method converts bearing vibration signals into grayscale images to build an image dataset; it adds a branch at the lower layers of the model as a branch model so that simple samples can exit early, and replaces the fully connected layer of the branch model with a convolutional bag-of-features layer. Through the collaborative computation of the branch model and the backbone model, high-accuracy, low-communication-cost, low-latency fault diagnosis of rolling bearings is achieved under complex variable-load, high-noise conditions. Moreover, the constructed model also performs well in variable-load, single-noise environments, showing good generalization ability.

Claims (7)

1. A rolling bearing fault diagnosis method based on a distributed deep neural network, characterized in that it comprises the following steps:
S1, dataset construction: first, a noise-free vibration acceleration signal of the faulty bearing is collected by an acceleration sensor above the bearing housing and additive white Gaussian noise is added to it; the vibration signal is then segmented at a fixed number of data points, converted into two-dimensional images, and labeled with the true fault classes to construct the dataset; finally, the dataset is divided into a training set and a test set according to a preset ratio;
S2, constructing a distributed deep neural network model, the model framework comprising: one sample input point, one shared convolution block, one branch model, one backbone model, and two sample-diagnosis output points;
S3, jointly training the branch model and the backbone model, wherein during training the loss values of the cross-entropy loss functions at the sample-diagnosis output points are weighted and summed, so that the whole network can be trained jointly and each sample-diagnosis output point reaches the loss value appropriate to its depth, making the predicted values approach the true values as closely as possible;
S4, model inference, wherein the branch model quickly extracts initial features from the input test-set sample images; if the sample-diagnosis output point of the branch model is confident in its prediction, the branch exit point lets the sample exit and outputs the classification result; otherwise, the image features of the non-exited samples in the shared convolution block are passed to the backbone model for further feature extraction and classification;
and S5, fault identification, wherein the inference results and communication volume of the branch model and the backbone model are aggregated, and the final fault diagnosis accuracy and the communication volume consumed are output.
2. The rolling bearing fault diagnosis method based on a distributed deep neural network according to claim 1, characterized in that the step of adding additive white Gaussian noise to each noise-free load vibration signal in step S1 comprises:
S11: simulating different noise conditions by adjusting the signal-to-noise ratio, whose decibel form is

$SNR_{dB} = 10 \log_{10}\left(\frac{P_{signal}}{P_{noise}}\right)$

$P_{signal} = \frac{1}{N}\sum_{i=1}^{N} x_i^2$

where $P_{signal}$ is the signal power, $P_{noise}$ is the noise power, $x_i$ is the vibration signal value, and N is the vibration signal length;
S12: for a zero-mean signal with known variance, representing the power by the variance $\sigma^2$, so that standard normally distributed noise has power 1; first computing the power of the original vibration signal $x_i$, then computing the power $P_{noise}$ of the noise signal required for the desired signal-to-noise ratio, and finally generating additive white Gaussian noise by the following formula and adding it to the original vibration signal $x_i$ so that it has the desired signal-to-noise ratio; the vibration signal obtained by adding additive white Gaussian noise to the original vibration signal is

$X = M + x_i$

$M = \sqrt{P_{noise}} \cdot randn(1, N)$

where M is white Gaussian noise and randn denotes a function that generates standard normally distributed random numbers or matrices.
3. The rolling bearing fault diagnosis method based on a distributed deep neural network according to claim 2, characterized in that in step S1 the vibration signal is segmented at a fixed number of data points and converted into a two-dimensional image, the conversion being expressed as

$P(j,k) = round\left(\frac{L((j-1)\times K + k) - \min(L)}{\max(L) - \min(L)} \times 255\right),\quad j,k = 1, 2, \dots, K$

where P is the pixel intensity of the two-dimensional image, L is the value of the original vibration signal with additive white Gaussian noise, and K is the side length of the two-dimensional image.
4. The rolling bearing fault diagnosis method based on a distributed deep neural network according to claim 1, characterized in that in step S2 the key structure of the distributed deep neural network comprises:
1) extracting feature vectors of the input sample image by stacking 3×3 convolution kernels, so as to enlarge the receptive field and improve the nonlinear expressive power of the convolutional layers;
2) setting the number of shared convolution kernels to 1, so as to reduce the communication volume between the shared convolution block and the adjacent backbone convolution block while preserving the original image features to the greatest extent;
3) placing a convolutional bag-of-features layer after the last convolution block of the branch in place of a fully connected layer, so as to quantize the feature vector finally output by the convolution block and reduce the number of model parameters;
4) the backbone adopting residual network blocks and global average pooling, so as to omit the fully connected layer, improve the flow of feature vectors, and reduce the number of model parameters.
5. The rolling bearing fault diagnosis method based on a distributed deep neural network according to claim 1, characterized in that the joint training process of the model in step S3 is:
S31: the branch model and the backbone model each have a classifier, and each sample-diagnosis output point takes the cross-entropy loss function as its optimization target, the cross-entropy loss function being expressed as

$L(\hat{y}, y; \theta) = -\frac{1}{|C|}\sum_{c \in C} y_c \log \hat{y}_c$

$\hat{y} = softmax(z) = \frac{e^{z}}{\sum_{c \in C} e^{z_c}}$

$z = f_n(X; \theta)$

where X is the input sample, y is the true fault label of the sample, $\hat{y}$ is the predicted fault label of the sample, C is the label set, $f_n$ denotes the computation performed on a sample from the network input to the n-th exit, and $\theta$ denotes the weight and bias parameters of that part of the network;
S32: training by weighting and summing the losses at the sample-diagnosis output points, and updating the parameters of the distributed neural network by SGD, the loss function of the distributed neural network being expressed as

$L(\hat{y}, y; \theta) = \sum_{n=1}^{N} w_n L(\hat{y}_{exit_n}, y; \theta)$

where N is the number of classification exits, $w_n$ is the weight of each exit, and $\hat{y}_{exit_n}$ is the estimate at the n-th exit.
6. The rolling bearing fault diagnosis method based on a distributed deep neural network according to claim 1, characterized in that in step S4 the confidence measure of a sample is used to judge whether the branch exit point is confident in its prediction: if the confidence measure of the sample is smaller than a given threshold, the prediction is considered confident; otherwise it is not; the confidence measure of a sample is computed as

$\eta(x) = -\sum_{c \in C} x_c \log x_c$

where C is the set of all true labels and x is the probability vector, with $\sum_{c \in C} x_c = 1$.
7. The rolling bearing fault diagnosis method based on a distributed deep neural network according to claim 1, characterized in that in step S5 the communication volume for model fault identification is computed as

$Communication = (1 - l) \times f \times f \times o \times 4$

where l is the fraction of input samples that exit at the branch, f is the side length of the feature map output by the shared convolution block to the backbone model, o is the number of channels of that feature map, and the constant 4 reflects that a 32-bit floating-point number occupies 4 bytes on a typical 64-bit Windows system.
CN202111529933.6A 2021-12-15 2021-12-15 A Fault Diagnosis Method for Rolling Bearings Based on Distributed Deep Neural Network Active CN114383844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111529933.6A CN114383844B (en) 2021-12-15 2021-12-15 A Fault Diagnosis Method for Rolling Bearings Based on Distributed Deep Neural Network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111529933.6A CN114383844B (en) 2021-12-15 2021-12-15 A Fault Diagnosis Method for Rolling Bearings Based on Distributed Deep Neural Network

Publications (2)

Publication Number Publication Date
CN114383844A CN114383844A (en) 2022-04-22
CN114383844B true CN114383844B (en) 2023-06-23

Family

ID=81196798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111529933.6A Active CN114383844B (en) 2021-12-15 2021-12-15 A Fault Diagnosis Method for Rolling Bearings Based on Distributed Deep Neural Network

Country Status (1)

Country Link
CN (1) CN114383844B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109765053A (en) * 2019-01-22 2019-05-17 中国人民解放军海军工程大学 Fault diagnosis method of rolling bearing using convolutional neural network and kurtosis index
CN110647867A (en) * 2019-10-09 2020-01-03 中国科学技术大学 Bearing fault diagnosis method and system based on adaptive anti-noise neural network
CN112254964A (en) * 2020-09-03 2021-01-22 太原理工大学 Rolling bearing fault diagnosis method based on rapid multi-scale convolution neural network
WO2021243838A1 (en) * 2020-06-03 2021-12-09 苏州大学 Fault diagnosis method for intra-class self-adaptive bearing under variable working conditions

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180307979A1 (en) * 2017-04-19 2018-10-25 David Lee Selinger Distributed deep learning using a distributed deep neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109765053A (en) * 2019-01-22 2019-05-17 中国人民解放军海军工程大学 Fault diagnosis method of rolling bearing using convolutional neural network and kurtosis index
CN110647867A (en) * 2019-10-09 2020-01-03 中国科学技术大学 Bearing fault diagnosis method and system based on adaptive anti-noise neural network
WO2021243838A1 (en) * 2020-06-03 2021-12-09 苏州大学 Fault diagnosis method for intra-class self-adaptive bearing under variable working conditions
CN112254964A (en) * 2020-09-03 2021-01-22 太原理工大学 Rolling bearing fault diagnosis method based on rapid multi-scale convolution neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于多输入层卷积神经网络的滚动轴承故障诊断模型 (Rolling bearing fault diagnosis model based on a multi-input-layer convolutional neural network); 昝涛; 王辉; 刘智豪; 王民; 高相胜; 振动与冲击 (Journal of Vibration and Shock) (12); full text *

Also Published As

Publication number Publication date
CN114383844A (en) 2022-04-22

Similar Documents

Publication Publication Date Title
Zhang et al. A novel feature adaptive extraction method based on deep learning for bearing fault diagnosis
CN110516305B (en) Intelligent fault diagnosis method under small sample based on attention mechanism meta-learning model
CN107421741A (en) A kind of Fault Diagnosis of Roller Bearings based on convolutional neural networks
CN112763214B (en) Fault diagnosis method of rolling bearing based on multi-label zero-sample learning
CN113869208B (en) Rolling bearing fault diagnosis method based on SA-ACWGAN-GP
CN111562108A (en) An Intelligent Fault Diagnosis Method of Rolling Bearing Based on CNN and FCMC
CN114997211A (en) A fault diagnosis method based on improved adversarial network and attention mechanism across working conditions
CN112577748A (en) Reinforced lightweight multi-scale CNN-based rolling bearing fault diagnosis method
CN113567159B (en) A method for state monitoring and fault diagnosis of scraper conveyor based on edge-cloud collaboration
CN110991295A (en) Self-adaptive fault diagnosis method based on one-dimensional convolutional neural network
CN115290326A (en) Rolling bearing fault intelligent diagnosis method
CN114169377A (en) G-MSCNN-based fault diagnosis method for rolling bearing in noisy environment
CN117439069A (en) Electric quantity prediction method based on neural network
CN113095265A (en) Fungal target detection method based on feature fusion and attention
CN115901260A (en) A Method of Rolling Bearing Fault Diagnosis Based on ECA_ResNet
Yang et al. Few-shot learning for rolling bearing fault diagnosis via siamese two-dimensional convolutional neural network
CN116304625A (en) Method for identifying electricity stealing user and detecting electricity stealing time period
Yeh et al. Using convolutional neural network for vibration fault diagnosis monitoring in machinery
CN114120066B (en) Small sample steel surface defect classification method based on lightweight network
CN114383844B (en) A Fault Diagnosis Method for Rolling Bearings Based on Distributed Deep Neural Network
CN113758709A (en) Rolling bearing fault diagnosis method and system combining edge computing and deep learning
CN113657244A (en) Fan gearbox fault diagnosis method and system based on improved EEMD and speech spectrum analysis
CN117516939A (en) Bearing cross-working condition fault detection method and system based on improved EfficientNetV2
CN116956739A (en) Ball mill bearing life prediction method based on ST-BiLSTM
CN112967420B (en) A method and system for monitoring the running process of heavy-duty trains based on interval type II

Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant