CN114383844B - Rolling bearing fault diagnosis method based on distributed deep neural network - Google Patents

Rolling bearing fault diagnosis method based on distributed deep neural network

Info

Publication number
CN114383844B
CN114383844B (application number CN202111529933.6A)
Authority
CN
China
Prior art keywords
model
sample
neural network
noise
branch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111529933.6A
Other languages
Chinese (zh)
Other versions
CN114383844A (en)
Inventor
丁华
吕彦宝
牛锐祥
孙晓春
王焱
李宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN202111529933.6A priority Critical patent/CN114383844B/en
Publication of CN114383844A publication Critical patent/CN114383844A/en
Application granted granted Critical
Publication of CN114383844B publication Critical patent/CN114383844B/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M13/00 Testing of machine parts
    • G01M13/04 Bearings
    • G01M13/045 Acoustic or vibration analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00 Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Testing Of Devices, Machine Parts, Or Other Structures Thereof (AREA)

Abstract

The invention belongs to the technical field of mechanical intelligent manufacturing, and particularly relates to a rolling bearing fault diagnosis method based on a distributed deep neural network. S1, a data set is constructed. S2, a distributed deep neural network model is constructed. S3, the branch model and the trunk model are jointly trained. S4, model reasoning is performed. S5, fault identification: the reasoning results and communication quantities of the branch model and the trunk model are summarized, and the final fault diagnosis precision and the consumed communication quantity are output. The model constructed by the method can automatically extract features from the raw vibration signal without manual feature selection or denoising, and achieves high-precision, low-communication, low-latency fault diagnosis of rolling bearings under variable-load and high-noise working conditions.

Description

Rolling bearing fault diagnosis method based on distributed deep neural network
Technical Field
The invention belongs to the technical field of mechanical intelligent manufacturing, and particularly relates to a rolling bearing fault diagnosis method based on a distributed deep neural network.
Background
As an important component of rotating machinery, the rolling bearing is widely used in all sectors of the national economy. In actual operation, bearing fault vibration signals change continuously with the load, are weak, and are easily masked by strong interference signals. Extracting fault features from bearing vibration signals under variable-load and strong-noise environments and diagnosing faults in time can therefore effectively prevent the faults from deteriorating further. In recent years, data-driven fault diagnosis has gradually become an important research hotspot in the field of fault diagnosis. Data-driven fault diagnosis methods mainly include methods based on signal analysis, machine learning and deep learning.
Bearing fault diagnosis based on signal analysis mainly comprises: fourier analysis, wavelet transformation, cepstrum, empirical mode decomposition, and the like. These fault diagnosis methods can only effectively identify certain specific conditions, have low generalization capability and are susceptible to other factors.
Bearing fault diagnosis based on machine learning mainly comprises: support vector machines, extreme learning machines, artificial neural networks, and the like. Although these methods can effectively identify various faults, specific types of features must first be extracted from the fault signals. This process depends heavily on manual experience, is computationally complex and time-consuming, and the extracted features cannot fully represent all fault types, so these methods have certain limitations.
Bearing fault diagnosis based on deep learning mainly comprises: convolutional neural networks, deep residual networks, deep Boltzmann machines, deep belief networks, stacked auto-encoders, and the like. Although these methods achieve automatic extraction of fault features and identification of multiple fault types, centralized cloud-computing deep learning models suffer from markedly increasing diagnosis delay and communication cost as the volume of sensor data grows and the network depth of the deep learning model increases.
Therefore, how to diagnose the bearing faults effectively is a problem to be solved urgently.
Disclosure of Invention
The invention aims to solve the problems and provides a rolling bearing fault diagnosis method based on a distributed deep neural network.
The invention adopts the following technical scheme: a rolling bearing fault diagnosis method based on a distributed deep neural network comprises the following steps.
S1, constructing a data set: firstly, noise-free vibration acceleration signals of the faulty bearing are collected by an acceleration sensor mounted above the bearing seat, and additive white Gaussian noise is added to the vibration signals; the vibration signals are then segmented according to a fixed data-point length and converted into two-dimensional images, and each two-dimensional image is given its true fault label to construct the data set; finally, the data set is divided into a training set and a test set according to a preset ratio.
S2, constructing a distributed deep neural network model, wherein the model framework comprises: a sample input point, a shared convolution block, a branch model, a trunk model and two sample diagnosis result output points.
S3, jointly training the branch model and the trunk model: during training, the whole network is trained jointly by weighting and summing the cross-entropy loss values of the sample diagnosis result output points, so that each sample diagnosis result output point reaches a loss value appropriate to its depth and the predicted values approach the true values as closely as possible.
S4, model reasoning: the branch model quickly extracts initial features from an input test-set sample image; if the sample diagnosis result output point of the branch model is confident in its prediction, the sample exits at the branch exit point and the classification result is output; otherwise, the image features of the non-exited sample in the shared convolution block are transmitted to the trunk model for further feature extraction and classification.
And S5, fault identification, namely summarizing the results and the communication quantity of the reasoning of the branch model and the trunk model, and outputting the final fault diagnosis precision and the consumed communication quantity.
In step S1, additive white Gaussian noise is added to the noise-free vibration signals of each load through the following steps:
s11: different noise conditions are simulated by adjusting the signal-to-noise ratio, which is expressed in decibel form as:
Figure SMS_1
Figure SMS_2
wherein:
Figure SMS_3
for signal power, +.>
Figure SMS_4
Is noise power +.>
Figure SMS_5
In order to be a value of the vibration signal,Nis the vibration signal length;
s12: for a signal with zero mean and known variance, the power can be calculated using the variance
Figure SMS_6
It means that, therefore, for a standard normal distributed noise, its power is 1, and therefore, the original signal +.>
Figure SMS_7
Then calculates the noise signal generated at the desired signal-to-noise ratio +.>
Figure SMS_8
Finally, additive white gaussian noise is generated by the following formula and then added to the original signal +.>
Figure SMS_9
Wherein the original signal is given a desired signal to noise ratio; the calculation formula of the vibration signal with the original vibration signal added with the additive Gaussian white noise is as follows:
X=M+x i
Figure SMS_10
wherein: m is gaussian white noise and randn represents a function that produces a random number or matrix of standard normal distribution.
In step S1, the vibration signal is segmented according to a fixed data-point length and converted into a two-dimensional image; the conversion process is expressed as:

P(i, j) = \mathrm{round} \left( \frac{ L((i-1) \times K + j) - \min(L) }{ \max(L) - \min(L) } \times 255 \right), \quad i, j = 1, 2, \dots, K

wherein: P(i, j) is the pixel intensity of the two-dimensional image, L is the vibration signal obtained by adding additive white Gaussian noise to the original vibration signal, round denotes rounding to the nearest integer, and K is the single-sided size of the two-dimensional image.
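The following sketch converts a noisy one-dimensional signal into K x K grayscale images. It assumes per-window min-max normalization to 8-bit pixel intensities, which is the usual reading of the conversion formula above; the exact normalization used by the inventors may differ.

```python
# Convert a noisy 1-D vibration signal into K x K grayscale images:
# each window of K*K consecutive points is min-max normalized to [0, 255].
import numpy as np

def signal_to_images(signal: np.ndarray, k: int = 28) -> np.ndarray:
    n_images = signal.size // (k * k)
    windows = signal[: n_images * k * k].reshape(n_images, k * k)
    lo = windows.min(axis=1, keepdims=True)
    hi = windows.max(axis=1, keepdims=True)
    pixels = np.round((windows - lo) / (hi - lo) * 255)   # P(i, j) in [0, 255]
    return pixels.reshape(n_images, k, k).astype(np.uint8)

# Example: 784-point windows -> 28 x 28 images, matching the data set construction.
images = signal_to_images(np.random.randn(78400), k=28)
print(images.shape)  # (100, 28, 28)
```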
In step S2, the key structure of the distributed deep neural network includes the following points (a simplified structural sketch is given after this list):
1) Feature vectors of the input sample image are extracted by stacking 3×3 convolution kernels, so as to enlarge the receptive field and improve the nonlinear expression capability of the convolution layers;
2) The number of shared convolution kernels is set to 1, which reduces the communication quantity between the shared convolution block and the adjacent trunk convolution block while retaining the original image features to the greatest extent;
3) A convolutional bag-of-features layer is placed after the last convolution block of the branch in place of a fully connected layer, so that the feature vectors finally output by the convolution block are quantized and the number of model parameters is reduced;
4) The trunk adopts residual network blocks and a global average pooling structure, which removes the fully connected layer, improves the flow of feature vectors and reduces the number of model parameters.
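For orientation, the sketch below assembles a minimal branch/trunk network in PyTorch with a single-kernel shared convolution block, a shallow branch and a residual trunk with global average pooling, following points 1) to 4). The layer widths, the 10-class output and the use of adaptive average pooling in place of the convolutional bag-of-features are simplifying assumptions, not the patent's exact configuration.

```python
# Minimal PyTorch sketch of the shared-block / branch / trunk architecture.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)


class DistributedDNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Shared convolution block: a single 3x3 kernel keeps the transmitted
        # feature map to one channel while preserving the raw image content.
        self.shared = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.ReLU())
        # Branch (edge) model: shallow stacked 3x3 convolutions; adaptive pooling
        # stands in here for the convolutional bag-of-features quantization.
        self.branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.branch_classifier = nn.Linear(16 * 4 * 4, num_classes)
        # Trunk (cloud) model: residual blocks + global average pooling,
        # so no large fully connected layer is needed.
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            ResidualBlock(32), ResidualBlock(32),
            nn.AdaptiveAvgPool2d(1),
        )
        self.trunk_classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        shared = self.shared(x)   # features that would be transmitted to the trunk
        branch_logits = self.branch_classifier(torch.flatten(self.branch(shared), 1))
        trunk_logits = self.trunk_classifier(torch.flatten(self.trunk(shared), 1))
        return branch_logits, trunk_logits


if __name__ == "__main__":
    model = DistributedDNN()
    out_branch, out_trunk = model(torch.randn(2, 1, 28, 28))
    print(out_branch.shape, out_trunk.shape)  # torch.Size([2, 10]) for both exits
```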
The joint training process of the model in step S3 is as follows:
S31: the branch model and the trunk model are each provided with a classifier, and each sample diagnosis result output point takes the cross-entropy loss function as its optimization target. The cross-entropy loss function is expressed as:

L(\hat{y}, y; \theta) = -\frac{1}{|C|} \sum_{c \in C} y_c \log \hat{y}_c

\hat{y} = \mathrm{softmax}(z)

z = f_{exit_n}(x; \theta)

wherein: x denotes the input sample, y denotes the true fault label of the sample, \hat{y} denotes the predicted fault label of the sample, C denotes the label set, f_{exit_n} denotes the operation of the sample from the input of the neural network to the n-th exit, and \theta denotes the parameters (weights and biases) of the network along that path;
s32: training the loss weighted summation of the output points of the diagnosis results of each sample, and updating the parameters of the distributed neural network by adopting an SGD method, wherein the loss function of the distributed neural network is expressed as follows:
Figure SMS_19
wherein: n represents the number of sort outlets,
Figure SMS_20
representing the weight of each exit, +.>
Figure SMS_21
Representing an estimate of the nth exit.
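A minimal sketch of this weighted joint loss for two exits, matching the formula above; the exit weights shown are placeholders (the experiments later in this document use 0.2 for the branch and 0.8 for the trunk).

```python
# Joint training loss: weighted sum of the cross-entropy losses of all exits,
# optimized with SGD as described in step S32. Exit weights are illustrative.
import torch
import torch.nn.functional as F

def joint_loss(exit_logits, targets, exit_weights=(0.2, 0.8)):
    """exit_logits: sequence of [batch, num_classes] tensors, one per exit."""
    return sum(w * F.cross_entropy(logits, targets)
               for w, logits in zip(exit_weights, exit_logits))

# Usage with the DistributedDNN sketch and an SGD optimizer:
# model = DistributedDNN()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# branch_logits, trunk_logits = model(images)
# loss = joint_loss((branch_logits, trunk_logits), labels)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```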
In step S4, the confidence measure of a sample is used to judge whether the branch exit point is confident in its prediction result: if the confidence measure of the sample is smaller than a given threshold, the prediction is considered confident; otherwise it is not. The confidence measure of a sample is computed as the entropy of its predicted probability vector:

\eta(x) = -\sum_{c \in C} x_c \log x_c

wherein: C is the set of all true labels and x is the probability vector output by the classifier; a smaller entropy indicates a more confident prediction.
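A sketch of the early-exit decision based on this entropy measure; the threshold value is a placeholder to be tuned on validation data, not a value specified by the patent.

```python
# Early-exit decision at the branch: exit if the prediction entropy is below
# a threshold, otherwise forward the shared features to the trunk model.
import torch
import torch.nn.functional as F

def branch_exits(branch_logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Return a boolean mask of samples confident enough to exit at the branch."""
    probs = F.softmax(branch_logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)  # eta(x) per sample
    return entropy < threshold

# During inference, only the samples with mask == False need to be sent on:
# mask = branch_exits(branch_logits)
# trunk_inputs = shared_features[~mask]   # transmitted to the trunk model
```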
In step S5, the communication quantity consumed by fault identification in this mode is calculated as:

T = (1 - l) \times f \times o \times 4

wherein: l is the proportion of branch-exit samples to the total number of input samples, f is the size (number of elements per channel) of the feature map output by the shared convolution block to the trunk model, o is the number of channels of the feature map output by the shared convolution block to the trunk model, and the constant 4 indicates that one 32-bit floating-point number occupies 4 bytes (as on a common 64-bit Windows system).
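A small numerical check of this communication formula under the settings reported later in this document (28x28 single-channel feature map from the shared convolution block, 32-bit floats, 47.98% of samples exiting at the branch); the helper function name is illustrative.

```python
# Average communication cost per sample for branch/trunk collaborative reasoning:
# only samples that do not exit at the branch send their shared feature map
# (f elements x o channels x 4 bytes per float32) to the trunk model.
def communication_bytes(exit_fraction: float, feature_elems: int, channels: int) -> float:
    return (1.0 - exit_fraction) * feature_elems * channels * 4

# Example: 28x28 feature map, 1 shared channel, 47.98% of samples exit early.
print(communication_bytes(0.4798, 28 * 28, 1))  # ~1631 bytes per sample on average
```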
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the rolling bearing vibration signal is subjected to image conversion by adopting data graying, so that an image sample is constructed; providing a branch model added at the bottom layer of the model, exiting part of simple samples in advance, maximizing the utilization of calculation resources of a main model computer, replacing a full-connection layer of the branch model with a convolution feature bag, and improving the defect of insufficient calculation power of the branch model computer; different fault types and fault severity degrees of the rolling bearing are identified through cooperative calculation of the branch model and the trunk model; the model constructed by the method can automatically extract the characteristics from the original vibration signal without manually selecting the characteristics and denoising, and can realize the fault diagnosis of the rolling bearing with high precision, low communication and low time delay under the working conditions of variable load and large noise.
Drawings
FIG. 1 is a fault diagnosis flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a vibration signal conversion image of the method of the present invention;
FIG. 3 is a schematic representation of a two-dimensional image corresponding to each fault type in a dataset of the method of the present invention;
FIG. 4 is a schematic diagram of a distributed deep neural network architecture of the method of the present invention;
FIG. 5 is a schematic diagram of a confusion matrix for model branch independent reasoning, model trunk independent reasoning, and model branch trunk collaborative reasoning;
FIG. 6 is a schematic diagram of t-SNE visualization results of model branch independent reasoning, model trunk independent reasoning, and model branch trunk collaborative reasoning.
Detailed Description
A rolling bearing fault diagnosis method based on a distributed deep neural network comprises the following steps.
S1, constructing a data set: firstly, noise-free vibration acceleration signals of the faulty bearing are collected by an acceleration sensor mounted above the bearing seat, and additive white Gaussian noise is added to the vibration signals; the vibration signals are then segmented according to a fixed data-point length and converted into two-dimensional images, and each two-dimensional image is given its true fault label to construct the data set; finally, the data set is divided into a training set and a test set according to a preset ratio.
S2, constructing a distributed deep neural network model, wherein the model framework comprises: a sample input point, a shared convolution block, a branch model, a trunk model and two sample diagnosis result output points.
S3, jointly training the branch model and the trunk model: during training, the whole network is trained jointly by weighting and summing the cross-entropy loss values of the sample diagnosis result output points, so that each sample diagnosis result output point reaches a loss value appropriate to its depth and the predicted values approach the true values as closely as possible.
S4, model reasoning: the branch model quickly extracts initial features from an input test-set sample image; if the sample diagnosis result output point of the branch model is confident in its prediction, the sample exits at the branch exit point and the classification result is output; otherwise, the image features of the non-exited sample in the shared convolution block are transmitted to the trunk model for further feature extraction and classification.
And S5, fault identification, namely summarizing the results and the communication quantity of the reasoning of the branch model and the trunk model, and outputting the final fault diagnosis precision and the consumed communication quantity.
In step S1, additive white Gaussian noise is added to the noise-free vibration signals of each load through the following steps:
S11: different noise conditions are simulated by adjusting the signal-to-noise ratio, which is expressed in decibel form as:

SNR_{dB} = 10 \lg \left( \frac{P_{signal}}{P_{noise}} \right)

P_{signal} = \frac{1}{N} \sum_{i=1}^{N} x_i^{2}

wherein: P_{signal} is the signal power, P_{noise} is the noise power, x_i is the value of the vibration signal, and N is the vibration signal length;
S12: for a signal with zero mean and known variance, the power can be represented by the variance, so the power of standard normally distributed noise is 1. Therefore, the power P_{signal} of the original signal x_i is calculated first, the noise power that yields the desired signal-to-noise ratio is then calculated, and finally the additive white Gaussian noise is generated by the following formula and added to the original signal x_i, so that the original signal has the desired signal-to-noise ratio. The vibration signal obtained by adding additive white Gaussian noise to the original vibration signal is calculated as:

X = M + x_i

M = \sqrt{\frac{P_{signal}}{10^{SNR_{dB}/10}}} \cdot \mathrm{randn}(1, N)

wherein: M is the Gaussian white noise and randn denotes a function that generates standard normally distributed random numbers (a random vector or matrix).
In step S1, the vibration signal is segmented according to a fixed data-point length and converted into a two-dimensional image; the conversion process is expressed as:

P(i, j) = \mathrm{round} \left( \frac{ L((i-1) \times K + j) - \min(L) }{ \max(L) - \min(L) } \times 255 \right), \quad i, j = 1, 2, \dots, K

wherein: P(i, j) is the pixel intensity of the two-dimensional image, L is the vibration signal obtained by adding additive white Gaussian noise to the original vibration signal, round denotes rounding to the nearest integer, and K is the single-sided size of the two-dimensional image.
In step S2, the key structure of the distributed deep neural network includes the following points:
1) Feature vectors of the input sample image are extracted by stacking 3×3 convolution kernels, so as to enlarge the receptive field and improve the nonlinear expression capability of the convolution layers;
2) The number of shared convolution kernels is set to 1, which reduces the communication quantity between the shared convolution block and the adjacent trunk convolution block while retaining the original image features to the greatest extent;
3) A convolutional bag-of-features layer is placed after the last convolution block of the branch in place of a fully connected layer, so that the feature vectors finally output by the convolution block are quantized and the number of model parameters is reduced;
4) The trunk adopts residual network blocks and a global average pooling structure, which removes the fully connected layer, improves the flow of feature vectors and reduces the number of model parameters.
The joint training process of the model in step S3 is as follows:
S31: the branch model and the trunk model are each provided with a classifier, and each sample diagnosis result output point takes the cross-entropy loss function as its optimization target. The cross-entropy loss function is expressed as:

L(\hat{y}, y; \theta) = -\frac{1}{|C|} \sum_{c \in C} y_c \log \hat{y}_c

\hat{y} = \mathrm{softmax}(z)

z = f_{exit_n}(x; \theta)

wherein: x denotes the input sample, y denotes the true fault label of the sample, \hat{y} denotes the predicted fault label of the sample, C denotes the label set, f_{exit_n} denotes the operation of the sample from the input of the neural network to the n-th exit, and \theta denotes the parameters (weights and biases) of the network along that path;
s32: training the loss weighted summation of the output points of the diagnosis results of each sample, and updating the parameters of the distributed neural network by adopting an SGD method, wherein the loss function of the distributed neural network is expressed as follows:
Figure SMS_43
wherein: n represents the number of sort outlets,
Figure SMS_44
representing the weight of each exit, +.>
Figure SMS_45
Representing an estimate of the nth exit.
In step S4, the confidence measure of a sample is used to judge whether the branch exit point is confident in its prediction result: if the confidence measure of the sample is smaller than a given threshold, the prediction is considered confident; otherwise it is not. The confidence measure of a sample is computed as the entropy of its predicted probability vector:

\eta(x) = -\sum_{c \in C} x_c \log x_c

wherein: C is the set of all true labels and x is the probability vector output by the classifier; a smaller entropy indicates a more confident prediction.
In step S5, the communication quantity consumed by fault identification in this mode is calculated as:

T = (1 - l) \times f \times o \times 4

wherein: l is the proportion of branch-exit samples to the total number of input samples, f is the size (number of elements per channel) of the feature map output by the shared convolution block to the trunk model, o is the number of channels of the feature map output by the shared convolution block to the trunk model, and the constant 4 indicates that one 32-bit floating-point number occupies 4 bytes (as on a common 64-bit Windows system).
Experimental cases
Experimental data
The experimental data set is the publicly available bearing data set from Case Western Reserve University. The experiment uses the drive-end bearing, sampled at 12 kHz. The data set contains three fault types, each with three different damage sizes, giving nine fault conditions plus one normal condition in total. The three fault types are roller fault (RF), outer-ring fault (OF) and inner-ring fault (IF), and the damage sizes are 0.18 mm, 0.36 mm and 0.54 mm. Gaussian white noise with signal-to-noise ratios of -3 dB, 0 dB, 3 dB, 6 dB and 9 dB is added to the drive-end vibration signals under four load conditions (0, 1, 2, 3 HP); the noisy variable-load vibration signals are segmented every 784 data points and converted into 28×28 two-dimensional images, and the converted images are shown in fig. 3. The data set, 45396 samples in total, is divided into a training set and a test set in a 5:1 ratio. The specific composition of the samples is shown in Table 1.
TABLE 1
Model structure
The structure of the constructed model is shown in fig. 4. The model framework comprises an input point, a shared convolution block, a branch model, a trunk model and two exit points. The model extracts feature vectors of the input image by stacking 3×3 convolution kernels so as to enlarge the receptive field and improve the nonlinear expression capability of the convolution layers. The number of shared convolution kernels is set to 1, which reduces the communication quantity between the shared convolution block and the adjacent trunk convolution block while retaining the original image features to the greatest extent. A CBoF (convolutional bag-of-features) layer is placed after the last convolution block of the branch in place of a fully connected layer, so that the feature vectors finally output by the convolution block are quantized and the number of model parameters is reduced. The trunk adopts residual network blocks and a global average pooling structure, which removes the fully connected layer, improves the flow of feature vectors and reduces the number of model parameters. The model parameters are shown in Table 2.
TABLE 2
Model training
The weighted sum of the losses of the two exits is used for joint training, and the model hyper-parameters are set as follows: trunk model loss weight of 0.8, branch model loss weight of 0.2, batch size of 16, SGD optimizer, momentum of 0.9, initial learning rate of 0.1, learning-rate decay of 0.0001, and 100 iterations.
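A sketch of the training loop under these hyper-parameter settings, reusing the DistributedDNN and joint_loss sketches given earlier in the description; the random stand-in data set is an assumption, and the 0.0001 decay is applied here as SGD weight decay, which is one plausible reading of the stated learning-rate attenuation.

```python
# Joint training loop with the hyper-parameters listed above (SGD, momentum 0.9,
# initial learning rate 0.1, decay 0.0001, batch size 16, 100 iterations/epochs).
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in data (random 28x28 images, 10 classes) purely to make the
# sketch runnable; replace with the real training set built in step S1.
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))),
    batch_size=16, shuffle=True)

model = DistributedDNN()  # from the architecture sketch in the description
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0001)

for epoch in range(100):
    for images, labels in train_loader:
        branch_logits, trunk_logits = model(images)
        loss = joint_loss((branch_logits, trunk_logits), labels, exit_weights=(0.2, 0.8))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```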
(1) The method of the invention is used for testing in a mixed scene with variable load and multiple noise
Table 3 gives the accuracy and communication cost of the trained model in the variable-load multi-noise and variable-load single-noise environments. From the variable-load multi-noise test results, the model branch can process 47.98% of the samples with a recognition accuracy of 100%, and the overall recognition accuracy of the model reaches 99.27%, showing good anti-interference capability. From the variable-load single-noise test results, the number of samples processed by the model branch is inversely proportional to the communication cost required during collaborative reasoning; the number of samples processed by the branch differs among the noise levels, while its recognition accuracy reaches 100% in all of them. The -3 dB noise interferes with model reasoning the most, yet the overall recognition accuracy is still maintained at 96%; as the test environment changes from -3 dB to noise-free, the overall reasoning accuracy of the model improves continuously and the required communication cost decreases continuously.
TABLE 3 Table 3
(2) The method of the invention is compared with other neural network models
Table 4 gives the results of the constructed model and other neural network models in terms of reasoning speed, number of model parameters, communication cost and recognition accuracy in the variable-load multi-noise and variable-load single-noise environments, where T=0 denotes independent trunk reasoning, T=1 denotes independent branch reasoning, and an intermediate value of T denotes branch-trunk collaborative reasoning. From the independent reasoning results of each model, the recognition accuracy of DDNN (T=0) in the variable-load multi-noise and variable-load single-noise environments is better than that of the other independently reasoning models; although it lags slightly behind the other models in reasoning speed, its parameter quantity is only 54%, 37% and 20% of that of the AlexNet, LeNet-5 and Vgg16 models respectively. DDNN (T=1) has the smallest parameter quantity and the fastest reasoning speed, but its recognition accuracy is low. From the results of independent branch reasoning, independent trunk reasoning and branch-trunk collaborative reasoning, on the premise of guaranteed recognition accuracy and with only 560 additional parameters, collaborative DDNN reasoning increases reasoning speed by 18% and reduces communication cost by 32% compared with independent trunk reasoning (the average communication quantity of independent trunk reasoning is 4 × 28 × 28 B), while independent branch reasoning shows the worst accuracy.
TABLE 4 Table 4
Confusion matrix analysis
The confusion matrices of independent model branch reasoning, independent model trunk reasoning and branch-trunk collaborative reasoning are shown in fig. 5. The overall accuracy of the first (branch-only) verification is only 91.74%; label 8 is the most easily misidentified and label 0 the least, with misidentification rates of 28.17% and 0 respectively. The overall accuracy of the latter two verifications is 99.27%; label 8 is again the most easily misidentified, labels 0, 1, 3, 6 and 7 are the least, and the corresponding misidentification rates are 3.33% and 0 respectively.
t-SNE visual analysis
The t-SNE visualizations of independent model branch reasoning, independent model trunk reasoning and branch-trunk collaborative reasoning are shown in fig. 6. In the branch-only result, a large number of label samples clearly show blurred decision boundaries. In the collaborative result, the samples processed by the model branch show no decision-boundary blurring, while the result for the samples processed by the model trunk is similar to that of independent trunk reasoning, with only a small number of label samples showing blurred decision boundaries.
The invention provides a rolling bearing fault diagnosis method based on a distributed deep neural network, aiming at the problems of low fault diagnosis accuracy and large diagnosis time delay of a rolling bearing under the complex working conditions of variable load and large noise. The method adopts data graying to carry out image conversion on bearing vibration signals to construct an image data set; providing a branch to be added at the bottom layer of the model as a branch model, exiting part of simple samples in advance, and replacing a full-connection layer of the branch model with a convolution feature bag; through the cooperative calculation of the branch model and the trunk model, the fault diagnosis of the rolling bearing with high precision, low communication cost and low delay under the complex working conditions of variable load and large noise is finally realized. In addition, the constructed model has good effect in the environment of variable load and single noise, and shows good generalization capability.

Claims (7)

1. A rolling bearing fault diagnosis method based on a distributed deep neural network is characterized in that: comprises the steps of,
S1, constructing a data set: firstly, noise-free vibration acceleration signals of the faulty bearing are collected by an acceleration sensor mounted above the bearing seat, and additive white Gaussian noise is added to the vibration signals; the vibration signals are then segmented according to a fixed data-point length and converted into two-dimensional images, and each two-dimensional image is given its true fault label to construct the data set; finally, the data set is divided into a training set and a test set according to a preset ratio;
s2, constructing a distributed deep neural network model, wherein the model framework comprises: one sample input point, one shared convolution block, one branch model, one trunk model and two sample diagnosis result output points;
S3, jointly training the branch model and the trunk model: during training, the whole network is trained jointly by weighting and summing the cross-entropy loss values of the sample diagnosis result output points, so that each sample diagnosis result output point reaches a loss value appropriate to its depth and the predicted values approach the true values as closely as possible;
S4, model reasoning: the branch model quickly extracts initial features from an input test-set sample image; if the sample diagnosis result output point of the branch model is confident in its prediction, the sample exits at the branch exit point and the classification result is output; otherwise, the image features of the non-exited sample in the shared convolution block are transmitted to the trunk model for further feature extraction and classification;
and S5, fault identification, namely summarizing the results and the communication quantity of the reasoning of the branch model and the trunk model, and outputting the final fault diagnosis precision and the consumed communication quantity.
2. The rolling bearing fault diagnosis method based on the distributed deep neural network according to claim 1, wherein: in step S1, additive white Gaussian noise is added to the noise-free vibration signals of each load through the following steps:
S11: different noise conditions are simulated by adjusting the signal-to-noise ratio, which is expressed in decibel form as:

SNR_{dB} = 10 \lg \left( \frac{P_{signal}}{P_{noise}} \right)

P_{signal} = \frac{1}{N} \sum_{i=1}^{N} x_i^{2}

wherein: P_{signal} is the signal power, P_{noise} is the noise power, x_i is the value of the vibration signal, and N is the vibration signal length;
S12: for a signal with zero mean and known variance, the power can be represented by the variance, so the power of standard normally distributed noise is 1. Therefore, the power P_{signal} of the original signal x_i is calculated first, the noise power that yields the desired signal-to-noise ratio is then calculated, and finally the additive white Gaussian noise is generated by the following formula and added to the original vibration signal x_i, so that the original vibration signal has the desired signal-to-noise ratio. The vibration signal obtained by adding additive white Gaussian noise to the original vibration signal is calculated as:

X = M + x_i

M = \sqrt{\frac{P_{signal}}{10^{SNR_{dB}/10}}} \cdot \mathrm{randn}(1, N)

wherein: M is the Gaussian white noise and randn denotes a function that generates standard normally distributed random numbers (a random vector or matrix).
3. The rolling bearing fault diagnosis method based on the distributed deep neural network according to claim 2, characterized in that: in step S1, the vibration signal is segmented according to a fixed data-point length and converted into a two-dimensional image; the conversion process is expressed as:

P(i, j) = \mathrm{round} \left( \frac{ L((i-1) \times K + j) - \min(L) }{ \max(L) - \min(L) } \times 255 \right), \quad i, j = 1, 2, \dots, K

wherein: P(i, j) is the pixel intensity of the two-dimensional image, L is the vibration signal obtained by adding additive white Gaussian noise to the original vibration signal, round denotes rounding to the nearest integer, and K is the single-sided size of the two-dimensional image.
4. The rolling bearing fault diagnosis method based on the distributed deep neural network according to claim 1, wherein: in the step S2, the key structure of the distributed deep neural network includes:
1) Extracting feature vectors of an input sample image by adopting a 3×3 convolution kernel stacking mode so as to improve the receptive field and nonlinear expression capability of a convolution layer;
2) The number of the shared convolution kernels is set to be 1, so that the communication quantity between the shared convolution blocks and the adjacent convolution blocks of the trunk is reduced, and the original image characteristics are reserved to the maximum extent;
3) The convolution feature bag is placed behind the last convolution block of the branch to replace a full connection layer, so that the feature vector finally output by the convolution block is quantized, and the model parameter quantity is reduced;
4) The trunk adopts a residual network block and a global average pooling structure, so that a full connection layer is omitted, the fluidity of the feature vector is increased, and the number of model parameters is reduced.
5. The rolling bearing fault diagnosis method based on the distributed deep neural network according to claim 1, wherein: the joint training process of the model in step S3 is as follows:
S31: the branch model and the trunk model are each provided with a classifier, and each sample diagnosis result output point takes the cross-entropy loss function as its optimization target. The cross-entropy loss function is expressed as:

L(\hat{y}, y; \theta) = -\frac{1}{|C|} \sum_{c \in C} y_c \log \hat{y}_c

\hat{y} = \mathrm{softmax}(z)

z = f_{exit_n}(x; \theta)

wherein: x denotes the input sample, y denotes the true fault label of the sample, \hat{y} denotes the predicted fault label of the sample, C denotes the label set, f_{exit_n} denotes the operation of the sample from the input of the neural network to the n-th exit, and \theta denotes the parameters (weights and biases) of the network along that path;
S32: the weighted sum of the losses of the sample diagnosis result output points is used for training, and the parameters of the distributed neural network are updated with the SGD method. The loss function of the distributed neural network is expressed as:

L_{total} = \sum_{n=1}^{N} w_n L(\hat{y}_{exit_n}, y; \theta)

wherein: N denotes the number of classification exits, w_n denotes the weight of each exit, and \hat{y}_{exit_n} denotes the estimate of the n-th exit.
6. The rolling bearing fault diagnosis method based on the distributed deep neural network according to claim 1, wherein: in step S4, the confidence measure of a sample is used to judge whether the branch exit point is confident in its prediction result: if the confidence measure of the sample is smaller than a given threshold, the prediction is considered confident; otherwise it is not. The confidence measure of a sample is computed as the entropy of its predicted probability vector:

\eta(x) = -\sum_{c \in C} x_c \log x_c

wherein: C is the set of all true labels and x is the probability vector output by the classifier; a smaller entropy indicates a more confident prediction.
7. The rolling bearing fault diagnosis method based on the distributed deep neural network according to claim 1, wherein: in step S5, the communication quantity consumed by fault identification in this mode is calculated as:

T = (1 - l) \times f \times o \times 4

wherein: l is the proportion of branch-exit samples to the total number of input samples, f is the size (number of elements per channel) of the feature map output by the shared convolution block to the trunk model, o is the number of channels of the feature map output by the shared convolution block to the trunk model, and the constant 4 indicates that one 32-bit floating-point number occupies 4 bytes (as on a common 64-bit Windows system).
CN202111529933.6A 2021-12-15 2021-12-15 Rolling bearing fault diagnosis method based on distributed deep neural network Active CN114383844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111529933.6A CN114383844B (en) 2021-12-15 2021-12-15 Rolling bearing fault diagnosis method based on distributed deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111529933.6A CN114383844B (en) 2021-12-15 2021-12-15 Rolling bearing fault diagnosis method based on distributed deep neural network

Publications (2)

Publication Number Publication Date
CN114383844A CN114383844A (en) 2022-04-22
CN114383844B true CN114383844B (en) 2023-06-23

Family

ID=81196798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111529933.6A Active CN114383844B (en) 2021-12-15 2021-12-15 Rolling bearing fault diagnosis method based on distributed deep neural network

Country Status (1)

Country Link
CN (1) CN114383844B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109765053A (en) * 2019-01-22 2019-05-17 中国人民解放军海军工程大学 Utilize the Fault Diagnosis of Roller Bearings of convolutional neural networks and kurtosis index
CN110647867A (en) * 2019-10-09 2020-01-03 中国科学技术大学 Bearing fault diagnosis method and system based on self-adaptive anti-noise neural network
CN112254964A (en) * 2020-09-03 2021-01-22 太原理工大学 Rolling bearing fault diagnosis method based on rapid multi-scale convolution neural network
WO2021243838A1 (en) * 2020-06-03 2021-12-09 苏州大学 Fault diagnosis method for intra-class self-adaptive bearing under variable working conditions

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180307979A1 (en) * 2017-04-19 2018-10-25 David Lee Selinger Distributed deep learning using a distributed deep neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109765053A (en) * 2019-01-22 2019-05-17 中国人民解放军海军工程大学 Utilize the Fault Diagnosis of Roller Bearings of convolutional neural networks and kurtosis index
CN110647867A (en) * 2019-10-09 2020-01-03 中国科学技术大学 Bearing fault diagnosis method and system based on self-adaptive anti-noise neural network
WO2021243838A1 (en) * 2020-06-03 2021-12-09 苏州大学 Fault diagnosis method for intra-class self-adaptive bearing under variable working conditions
CN112254964A (en) * 2020-09-03 2021-01-22 太原理工大学 Rolling bearing fault diagnosis method based on rapid multi-scale convolution neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Rolling bearing fault diagnosis model based on multi-input-layer convolutional neural network; 昝涛; 王辉; 刘智豪; 王民; 高相胜; Journal of Vibration and Shock (12); full text *

Also Published As

Publication number Publication date
CN114383844A (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN110361176B (en) Intelligent fault diagnosis method based on multitask feature sharing neural network
CN112435221A (en) Image anomaly detection method based on generative confrontation network model
CN109272500B (en) Fabric classification method based on adaptive convolutional neural network
CN110991295B (en) Self-adaptive fault diagnosis method based on one-dimensional convolutional neural network
CN110647830B (en) Bearing fault diagnosis method based on convolutional neural network and Gaussian mixture model
CN111562108A (en) Rolling bearing intelligent fault diagnosis method based on CNN and FCMC
CN108764298B (en) Electric power image environment influence identification method based on single classifier
CN113505655A (en) Bearing fault intelligent diagnosis method for digital twin system
CN113567159B (en) Scraper conveyor state monitoring and fault diagnosis method based on edge cloud cooperation
CN113780242A (en) Cross-scene underwater sound target classification method based on model transfer learning
CN113109782B (en) Classification method directly applied to radar radiation source amplitude sequence
CN114997211A (en) Cross-working-condition fault diagnosis method based on improved countermeasure network and attention mechanism
CN113158722A (en) Rotary machine fault diagnosis method based on multi-scale deep neural network
CN114169377A (en) G-MSCNN-based fault diagnosis method for rolling bearing in noisy environment
CN116152678A (en) Marine disaster-bearing body identification method based on twin neural network under small sample condition
CN114383844B (en) Rolling bearing fault diagnosis method based on distributed deep neural network
CN113657244A (en) Fan gearbox fault diagnosis method and system based on improved EEMD and speech spectrum analysis
CN105512383A (en) Dredging process regulation and control parameter screening method based on BP neural network
CN112132207A (en) Target detection neural network construction method based on multi-branch feature mapping
CN116956739A (en) Ball mill bearing life prediction method based on ST-BiLSTM
CN116663744A (en) Energy consumption prediction method and system for near-zero energy consumption building
CN116385950A (en) Electric power line hidden danger target detection method under small sample condition
CN112616160B (en) Intelligent short-wave frequency cross-frequency-band real-time prediction method and system
CN109521176B (en) Virtual water quality monitoring method based on improved deep extreme learning machine
CN114676721A (en) Radar active suppression interference identification method and system based on radial basis function neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant