CN110348468B - Bearing sub-health identification method for optimized strong reconstruction edge noise reduction automatic encoder of correlation vector machine


Info

Publication number
CN110348468B
CN110348468B (application number CN201910388451.XA)
Authority
CN
China
Prior art keywords
input
data
noise reduction
output
vector machine
Prior art date
Legal status
Active
Application number
CN201910388451.XA
Other languages
Chinese (zh)
Other versions
CN110348468A (en)
Inventor
张利
周佳宁
孙军
岳承军
张皓博
邱存月
Current Assignee
Liaoning University
Original Assignee
Liaoning University
Priority date
Filing date
Publication date
Application filed by Liaoning University filed Critical Liaoning University
Priority to CN201910388451.XA priority Critical patent/CN110348468B/en
Publication of CN110348468A publication Critical patent/CN110348468A/en
Application granted granted Critical
Publication of CN110348468B publication Critical patent/CN110348468B/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M13/00 Testing of machine parts
    • G01M13/04 Bearings
    • G01M13/045 Acoustic or vibration analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent


Abstract

A bearing sub-health identification method in which a correlation vector machine optimizes a strong reconstruction edge noise reduction automatic encoder comprises the following steps: (1) signal sampling; (2) noise reduction; (3) feature extraction; (4) fault diagnosis division; and (5) diagnosis of the state of the mechanical equipment. The invention replaces the single action of the original single kernel function with a weighted sum of two kernel functions, improving the correlation vector machine. The variance of the input samples mapped into the feature space is solved under the combined action of the Fisher criterion and the maximum entropy criterion, and the weight coefficient of the kernel function is determined. The parameters are initialized according to the obtained weight coefficient, the hyper-parameters are estimated, and the rolling bearing sub-health recognition classification is finally obtained. The invention solves the edge noise reduction automatic encoder's problems of limited constraint conditions, weak data compression capability, large reconstruction error, and weak feature-expression capability, and provides a bearing sub-health identification method with high recognition accuracy.

Description

Bearing sub-health identification method for optimization strong reconstruction edge noise reduction automatic encoder of correlation vector machine
Technical Field
The invention relates to a bearing sub-health recognition method of an optimized strong reconstruction edge noise reduction automatic encoder of a correlation vector machine, belonging to the field of fault diagnosis of rolling bearings.
Background
Modern industry has developed greatly with the improvement of the technological level, and intelligent production equipment has replaced the original reliance on manpower. In this process the requirements placed on equipment continually increase: damage to any part can seriously affect the production flow. The rolling bearing, as the core component of rotating machinery, is one of the most commonly used elements in mechanical equipment. If the current state of a bearing can be accurately identified, the resulting losses can largely be avoided. Because a bearing fault does not occur instantaneously but accumulates over time, research on bearing sub-health identification has received extensive attention from experts and scholars.
Disclosure of Invention
In order to solve the existing problems, a bearing sub-health identification method of a correlation vector machine optimized strong reconstruction edge noise reduction automatic encoder is provided.
The invention adopts the technical scheme that: the method comprises the following steps:
1) Signal sampling: acceleration sensors are placed at the twelve o'clock position of the motor housing and of the fan end; MATLAB processes the vibration signals acquired by a 16-channel data recorder. Signals are collected at a sampling frequency of 12 kHz, and the drive-end signal is sampled at 48,000 samples per second;
2) Noise reduction: the signal is preprocessed with the Fourier transform and then normalized with the minimum and maximum values, so that the numerical fluctuation range is confined to [0,1] while the original relative relationships of the data are preserved;
3) Feature extraction: the number of hidden layers of the edge noise reduction automatic encoder is set to three, and the preprocessed data are input into the strong reconstruction edge noise reduction automatic encoder. Features are extracted after mapping under the improved constraint conditions; the input of each hidden layer is the output of the previous hidden layer plus the preprocessed data, and the actual output of the hidden layer is calculated. Partial derivatives of the loss function with respect to the weights and biases are computed from the input and output, and the weights and biases are updated. The set loss function is compared against the maximum set number of iterations: training stops when the expected value is reached, otherwise the hidden-layer output is recalculated;
4) Fault diagnosis and division: dividing fault diagnosis by using an improved correlation vector machine;
5) And (3) diagnosing the state of the mechanical equipment: the state of the bearing data is identified using the measured bearing data.
The beneficial effects of the invention are as follows. Sparsity constraints are added to the original edge noise reduction constraint conditions, and the input of each hidden layer is changed to the output of the previous hidden layer plus the preprocessed data; this addresses the edge noise reduction automatic encoder's limited constraint conditions, weak data compression capability, large reconstruction error, and weak feature-expression capability. When the structural information in the feature sample data is heterogeneous and unevenly distributed, the original single kernel function is replaced by a weighted sum of two kernel functions; the variance of the input samples mapped into the feature space is solved under the combined action of the Fisher criterion and the maximum entropy criterion, determining the kernel-function weight coefficient. The weight coefficient is computed by the proposed method, the parameters are initialized, the hyper-parameters are estimated, and the sub-health recognition classification is finally obtained. Using measured bearing data, the sub-health recognition accuracy of three methods is compared: the correlation vector machine edge noise reduction automatic encoder (MDAE-RVM), the correlation vector machine strong reconstruction edge noise reduction automatic encoder (IMDAE-RVM), and the correlation vector machine optimized strong reconstruction edge noise reduction automatic encoder (IMDAE-IRVM). IMDAE-IRVM achieves higher sub-health recognition accuracy than the other two methods, and the improved method correctly recognizes the state of the bearing data, verifying the effectiveness of the invention.
Description of the drawings:
fig. 1 is a diagram of an improved constraint MDAE topology.
FIG. 2 is a diagram of an IMDAE topology with improved hidden layer input.
FIG. 3 is a flow chart of a method IMDAE-IRVM.
Fig. 4a is a normal state driving end time domain vibration signal.
Fig. 4b is a time domain vibration signal of the inner ring fault driving end.
FIG. 4c is a ball failure drive end time domain vibration signal.
FIG. 4d is a time domain vibration signal of a faulty driving end of the outer ring.
Fig. 5a is the normal-state drive-end time domain vibration signal after Fourier transform.
Fig. 5b is the inner ring fault drive-end time domain vibration signal after Fourier transform.
Fig. 5c is the ball fault drive-end time domain vibration signal after Fourier transform.
Fig. 5d is the outer ring fault drive-end time domain vibration signal after Fourier transform.
Fig. 6a is a graph of the ball drive-end vibration signal of a portion of the sample before Fourier transform.
Fig. 6b is a graph of the ball drive-end vibration signal of a portion of the sample after Fourier transform.
Fig. 7a is a graph of the inner ring drive-end vibration signal of a portion of the sample before Fourier transform.
Fig. 7b is a graph of the inner ring drive-end vibration signal of a portion of the sample after Fourier transform.
Fig. 8a is a graph of the outer ring drive-end vibration signal of a portion of the sample before Fourier transform.
Fig. 8b is a graph of the outer ring drive-end vibration signal of a portion of the sample after Fourier transform.
Fig. 9a is a graph of the normal drive-end vibration signal of a portion of the sample before Fourier transform.
Fig. 9b is a graph of the normal drive-end vibration signal of a portion of the sample after Fourier transform.
FIG. 10 is a graph of the error variation of the hidden layer number with increasing neuron number.
Fig. 11 is a mean square error trend graph.
Detailed Description
A bearing sub-health identification method for an optimized strong reconstruction edge noise reduction automatic encoder of a correlation vector machine comprises the following steps:
1) Signal sampling: the acceleration sensor is respectively placed in the twelve o' clock direction of the motor shell and the fan, the MATLAB processes vibration signals collected by the 16-channel data recorder, signals are collected at the sampling frequency of 12KHz, and the sampling frequency of signals at the driving end is 48000S/S.
2) And (3) noise reduction treatment: and preprocessing the signals by adopting Fourier transform, and normalizing the signals by using a minimum value and a maximum value after the Fourier transform, so that the numerical fluctuation range is stabilized at [0,1] on the basis of keeping the original relative relationship of the data.
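As an illustrative sketch of the preprocessing in step 2) (Python with NumPy; the function name is an assumption, and since the original only states that the signal is Fourier-transformed and then min-max normalized, the use of the magnitude spectrum here is also an assumption):

```python
import numpy as np

def preprocess(signal):
    """Fourier transform followed by min-max normalization into [0, 1].

    Sketch of step 2): the magnitude spectrum is an assumption, since the
    original only states that the signal is Fourier-transformed and then
    normalized with the minimum and maximum values.
    """
    spectrum = np.abs(np.fft.rfft(signal))      # Fourier transform of the vibration signal
    lo, hi = spectrum.min(), spectrum.max()
    # min-max normalization keeps the relative relationships of the data
    return (spectrum - lo) / (hi - lo + 1e-12)
```

The small constant in the denominator only guards against a constant signal; for real vibration data it has no practical effect.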
3) Feature extraction: determining the number of hidden layers of an edge noise reduction automatic encoder to be three, wherein the number of neurons of the hidden layers is 400, inputting preprocessed data into a strong reconstruction edge noise reduction automatic encoder (IMDAE), extracting characteristics after mapping under the action of improved constraint conditions, inputting the hidden layers by adding the preprocessed data to the output of the previous hidden layer, calculating the actual output of the hidden layers, calculating partial derivatives of a loss function to weight and bias on the basis of input and output, updating the weight and bias, comparing the set loss function with the maximum set number of times, stopping when the expected value is reached, or recalculating the output of the hidden layers. Wherein, the number of the neurons of the hidden layer is obtained when the best effect can be obtained by testing and adjusting.
The specific method comprises the following steps:
3.1 Pre-processing the input data, and setting the processed input value and output expected value as x.
3.2 Random initialization weights w and offsets b, using cross entropy to adjust the parameters w, b, defining a loss function:
[Equation (1): the cross-entropy loss function; rendered as an image in the original]
wherein: d is the input dimension;
a represents the size after batch division;
x_ik represents the kth element of the ith input in each batch of data;
summing the input vectors first and then summing each sample of the batch data, the autoencoder constraints are as follows (2):
[Equation (2): the autoencoder constraint condition with the added sparsity term; rendered as an image in the original]
wherein: n represents the input number;
m represents the number of hidden layer neurons;
z represents a hidden layer feature expression;
l represents a cross entropy function;
σ_d^2 (shown as an image in the original) represents the variance of the noise interference term of the d-th dimension of the input data x;
β represents a weight coefficient;
the sparsity term (shown as an image in the original) is calculated as in formula (3):
[Equation (3): rendered as an image in the original]
wherein: ρ̂ (shown as an image in the original) represents the average activation degree, and ρ represents the degree of activation of each layer of neurons.
3.3 Define the output of the previous hidden layer as x ', then the input of the next hidden layer is (x' + x).
3.4 Compute the actual output z of the output layer processed through the hidden layer.
3.5) On the basis of the input and output, take the partial derivatives of the loss function with respect to w and b; a scalar form is used in the reconstruction process, calculated as in formulas (4) and (5):
[Equations (4) and (5): rendered as images in the original]
wherein, w_qp represents the weight between nodes q and p;
w_kp represents the weight between nodes k and p;
b_yp represents the bias of node p;
b_zk represents the bias of node k;
for the ith sample of each batch of data, the symbols (shown as images in the original) represent the input of the pth hidden layer and the input of the kth output layer, respectively;
f'(x) is the first derivative of the sigmoid: f'(x) = f(x)[1 - f(x)]
Calculating a partial derivative of the loss function formula (1) to obtain a formula (6):
[Equation (6): rendered as images in the original]
wherein w_rs represents the weight between nodes r and s, and b_yr represents the bias of node r.
3.6) Update the parameter values according to the rule of equation (7), where η is the learning rate, w is the weight, and b_y is the bias:
[Equation (7): the gradient-descent updates of w and b_y; rendered as images in the original]
3.7) Compare the set loss function with the maximum set number of iterations; stop when the expected value is reached, otherwise return to 3.6) and continue training.
3.8 The output of the improved edge noise reduction auto-encoder is input to an improved correlation vector machine for classification.
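Steps 3.5) to 3.7) amount to backpropagation with gradient descent on a cross-entropy loss. The sketch below, in Python with NumPy, updates a single sigmoid layer that reconstructs its own input; it is a simplification (the full method stacks three hidden layers, adds the sparsity constraint of equation (2), and feeds the previous hidden layer's output plus the preprocessed data forward), and all names are illustrative:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_step(x, w, b, eta=0.1):
    """One update following the rule of equation (7): p <- p - eta * dL/dp.

    x : preprocessed input in [0, 1], also used as the expected output
    w, b : weights and biases of a single sigmoid reconstruction layer
    """
    z = sigmoid(w @ x + b)          # actual output (step 3.4)
    delta = z - x                   # dL/da for sigmoid + cross-entropy
    w -= eta * np.outer(delta, x)   # partial derivative w.r.t. the weights
    b -= eta * delta                # partial derivative w.r.t. the biases
    loss = -np.sum(x * np.log(z + 1e-12) + (1.0 - x) * np.log(1.0 - z + 1e-12))
    return w, b, loss
```

Training repeats train_step until the loss reaches the expected value or the maximum number of iterations is hit, mirroring step 3.7).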
4) Fault diagnosis and division: Fault diagnosis is divided using the improved correlation vector machine. On the basis of the strong reconstruction edge noise reduction automatic encoder (IMDAE) sub-health identification method, the correlation vector machine is improved: the original single kernel function is replaced by a weighted sum of two kernel functions, and the weight coefficient of the improved kernel function is solved from the sample variance in the feature space under the constraints of the Fisher criterion and the maximum entropy criterion. This improves the kernel function's ability to process the data and raises the recognition accuracy.
The specific method comprises the following steps:
4.1) Given two classes of input samples X_1 = (x_11, x_12, ..., x_1l1) and X_2 = (x_21, x_22, ..., x_2l2), with i = 1, 2, ..., l_1 and j = 1, 2, ..., l_2; μ_1, μ_2 are the mean vectors of X_1, X_2 after mapping, calculated by equations (8) and (9):
[Equations (8) and (9): rendered as images in the original]
the intra-class scatter matrix is solved by equations (10), (11):
[Equations (10) and (11): rendered as images in the original]
wherein K(·,·) is a kernel function, and φ (shown as an image in the original) is the non-linear mapping corresponding to K(·,·);
K_1, K_2, K_11, K_22, K_12 are given by formulas (12) to (16):
[Equations (12) to (16): rendered as images in the original]
The variance of the input samples mapped into the feature space (denoted by a symbol shown as an image in the original) is calculated as in equations (17) and (18):
[Equations (17) and (18): rendered as images in the original]
Under the constraint of the maximum entropy method, the variance of the input-sample mapping in the feature space (shown as an image in the original) is solved in the same way. The weight coefficient λ proposed herein is calculated as in formula (19):
[Equation (19): rendered as an image in the original]
4.2 According to the improved kernel function mode, the structure of the correlation vector machine is defined as follows:
[Equation (20): rendered as an image in the original]
with weights w = (w_0, ..., w_N); K(x, x_i) is expressed as in formula (21):
[Equation (21): rendered as an image in the original]
The variance of the input-sample mapping into the feature space is solved under the Fisher criterion constraint, and likewise under the maximum entropy criterion constraint (both variances are denoted by symbols shown as images in the original). The kernel function weight coefficient λ is then calculated according to the proposed method, as in equation (22):
[Equation (22): rendered as an image in the original]
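The improved kernel of equations (20) to (22) acts as a weighted sum of two kernels; the training-process description later identifies them as a Gaussian kernel and a linear kernel, each multiplied by a weight coefficient. A minimal Python sketch, with λ passed in directly (the method itself derives it from the Fisher-criterion and maximum-entropy variances) and the kernel width γ assumed:

```python
import numpy as np

def combined_kernel(x, y, lam, gamma=1.0):
    """Weighted sum of a Gaussian (RBF) kernel and a linear kernel.

    lam is the weight coefficient lambda of equation (22); here it is a
    given input rather than being solved from the feature-space variances.
    """
    gauss = np.exp(-gamma * np.sum((x - y) ** 2))   # Gaussian kernel
    linear = float(np.dot(x, y))                    # linear kernel
    return lam * gauss + (1.0 - lam) * linear
```

Setting lam to 1.0 or 0.0 recovers the pure Gaussian or pure linear kernel, which makes the role of the weight coefficient easy to check.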
4.3) The hyper-parameter α is initialized, and the weight covariance Σ is given by equation (23):
Σ = -H^(-1) = (φ^T Bφ + A)^(-1)  (23)
wherein H is the Hessian matrix, y = [y_1, y_2, ..., y_N]^T, B = diag(β_1, β_2, ..., β_N), β_n = σ[y(x_n)]{1 - σ[y(x_n)]}, and A = diag(α_0, α_1, ..., α_N).
4.4 Estimate the hyperparameters, by the formula (24):
[Equation (24): rendered as an image in the original]
μ_i is the mean vector of X_i after mapping;
4.5 Correlation vector machine classification generally adopts Sigmoid probability function, and its expression form is as follows (25):
[Equation (25): the sigmoid probability function; rendered as images in the original]
wherein: y(x, w_MP) = φ(x) w_MP, and w_MP is calculated by formula (26):
[Equation (26): rendered as an image in the original]
The solution yields the predicted probabilities p(t=1|x) and p(t=0|x) of the sample for each category; the category corresponding to the larger of the two values is the recognition result for x.
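The decision of step 4.5) passes the model output through the sigmoid of equation (25) and selects the class with the larger predicted probability. A small sketch (the names are assumptions; y stands for y(x, w_MP)):

```python
import math

def classify(y):
    """Sigmoid link and class decision of step 4.5).

    Returns the recognized class (1 or 0) and its predicted probability;
    p1 plays the role of p(t=1|x) and p0 of p(t=0|x).
    """
    p1 = 1.0 / (1.0 + math.exp(-y))   # p(t=1|x)
    p0 = 1.0 - p1                     # p(t=0|x)
    return (1, p1) if p1 > p0 else (0, p0)
```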
5) And (3) diagnosing the state of the mechanical equipment: the state of the bearing data is identified using the measured bearing data.
Example 1:
According to statistics, more than 70% of rolling bearing faults are expressed in the form of vibration. A rolling bearing can fail for many reasons, but losses can be limited as long as the problems are found and measures taken before the fault fully develops. Common faults mainly include inner ring faults, outer ring faults, and ball faults, so monitoring the health condition of the rolling bearing is of great significance. The experimental data are therefore measured rolling bearing data reflecting the real working state, and comparing the data before and after preprocessing shows the data distribution under the various fault states more clearly.
The proposed method selects the vibration signal for fault feature extraction. Rolling bearing signals contain a large number of complex signal features; traditional feature extraction usually depends on manual processing and requires a special preprocessing stage for the vibration signal. A sub-health recognition method is therefore proposed in which a strong reconstruction edge noise reduction automatic encoder is improved by a correlation vector machine: the constraint conditions and the hidden-layer input of the prototype automatic encoder are improved, and the data reconstructed by the strong reconstruction edge noise reduction automatic encoder (IMDAE) are input to the improved correlation vector machine for sub-health recognition. This addresses the problems that the structural information in the feature sample data is heterogeneous and unevenly distributed, and that a single kernel function cannot effectively process sample data with a complex structure.
1. The theoretical basis of the scheme of the invention is as follows:
1. automatic encoder
As an unsupervised neural network model, the automatic encoder has a three-layer structure comprising an input layer, a hidden layer, and an output layer, and can be divided into an encoder and a decoder. Its form is the same as other three-layer neural networks, but partial restrictions are placed on the input and output: the encoder converts high-dimensional input data into low-dimensional hidden-layer nodes, and the decoder reconstructs the low-dimensional hidden layer into the original input data. Because the hidden layer reconstructs the input signal, it provides another expression of the data.
The automatic encoder operates as follows:
(1) X is input, and the output desired value is set to x.
(2) And (3) constructing an encoder and a decoder, and randomly initializing to generate a weight w and an offset b, b'.
(3) The respective hidden layers are calculated from the given input and output, and the layer output is output.
(4) Using the inputs and outputs, an error function is calculated to calculate the partial derivatives of each neural unit.
(5) The weight w and the offsets b, b' are updated.
(6) The training is stopped when the desired conditions are reached by comparing the set loss function with the maximum set number. Otherwise, selecting the next learning sample, returning to the step (3), and entering the next round of learning.
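Steps (1) to (3) of the operation above can be sketched as a single encoder/decoder pass (Python with NumPy; the tied decoder weights w.T and the sigmoid activation are assumptions of this sketch, not specified by the original):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def autoencode(x, w, b, b_prime):
    """Encoder/decoder pass: compress high-dimensional x to a hidden code,
    then reconstruct it back to the input dimension."""
    hidden = sigmoid(w @ x + b)               # encoder: input layer -> hidden layer
    recon = sigmoid(w.T @ hidden + b_prime)   # decoder: hidden layer -> output layer
    return hidden, recon
```

Steps (4) to (6) would then compute the reconstruction error between recon and x and update w, b, b_prime until the stopping condition is met.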
As an improvement of the automatic encoder, the noise reduction automatic encoder can effectively remove noise from the original features and extracts features better. The edge noise reduction automatic encoder overcomes the noise reduction automatic encoder's drawbacks in computation and processing time by marginalizing the noise interference. Its key part is the expansion of the loss function: a Taylor expansion approximates the model's expected loss function, fully accounting for the sensitivity of the hidden-layer neurons to the input vector.
The strong reconstruction edge noise reduction automatic encoder solves the edge noise reduction automatic encoder's weak data compression capability, limited constraint conditions, and large error before and after reconstruction. On one hand, sparsity constraints are added to the edge noise reduction automatic encoder's constraint conditions, overcoming its shortcomings in constraint conditions and data compression; the MDAE topology with improved constraint conditions is shown in FIG. 1. On the other hand, the input of each hidden layer is improved to the output of the previous hidden layer plus the preprocessed data, which strengthens the expression of the feature samples and reduces the reconstruction error; the IMDAE topology with improved hidden-layer input is shown in FIG. 2. Finally, the output of the strong reconstruction edge noise reduction automatic encoder (IMDAE) is identified using a correlation vector machine (RVM).
2. Improved IMDAE-RVM training process
The operation flow of IMDAE-RVM is shown in the attached figure 3, and the specific operation is as follows:
(1) Input data is preprocessed using fourier transform and normalization methods.
(2) Define the expected value of the strong reconstruction edge noise reduction automatic encoder (IMDAE), randomly initialize the weights and biases, and set the IMDAE constraint conditions. Take the preprocessed data as the IMDAE input, map it under the constraint conditions, and extract features; the actual output of each hidden layer is computed from the output of the previous hidden layer plus the preprocessed data. On the basis of the input and output, compute the partial derivatives of the loss function with respect to the weights and biases and update them. Compare the loss function value with the preset threshold and the maximum number of iterations; stop when the expected value is reached, otherwise recompute the hidden-layer output.
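A minimal NumPy sketch of the improved hidden-layer input described in step (2): each hidden layer receives the previous hidden layer's output plus the preprocessed data. The layer count, layer sizes, sigmoid activation, and all variable names are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def imdae_forward(x, weights, biases):
    """Forward pass where every hidden layer after the first receives
    (previous hidden output + preprocessed input x), the improved
    hidden-layer input of the IMDAE."""
    h = sigmoid(weights[0] @ x + biases[0])      # first hidden layer: input is x
    for W, b in zip(weights[1:], biases[1:]):
        h = sigmoid(W @ (h + x) + b)             # later layers: (x' + x)
    return h

rng = np.random.default_rng(0)
d = 8                                            # input dimension (illustrative)
ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]  # three hidden layers
bs = [np.zeros(d) for _ in range(3)]
x = rng.standard_normal(d)
z = imdae_forward(x, ws, bs)
```

Keeping the hidden width equal to the input width here is what lets `h + x` be a plain element-wise sum; a real implementation would need a projection if the dimensions differ.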
(3) Input the data reconstructed by the strong reconstruction edge noise reduction automatic encoder (IMDAE) into the correlation vector machine with the improved kernel function for sub-health identification. First define the structure of the correlation vector machine, whose kernel function is a Gaussian kernel multiplied by a weight coefficient plus a linear kernel multiplied by a weight coefficient. To determine the kernel weight coefficients, compute the variance of the samples mapped into the sample space under the constraints of the Fisher criterion and the maximum entropy criterion, and solve the weight coefficients by the proposed calculation; then initialize the parameters, estimate the hyper-parameters, and finally obtain the sub-health identification classification.
The technical scheme of the invention comprises the following implementation processes:
1. Signal acquisition: the experimental data are collected with acceleration sensors placed at the twelve-o'clock position of the motor housing and of the fan respectively. MATLAB processes the vibration signals acquired by the 16-channel data recorder. Signals are collected at a sampling frequency of 12 KHz, with the drive-end signal sampled at 48000 S/s. The data are divided into 0 HP and 3 HP loads, with the bearing in the normal state, outer-ring fault, inner-ring fault and rolling-element fault at the corresponding speeds of 1797 r/min and 1730 r/min; the fault diameter is 0.007 mm. The vibration signals of the four states are shown in figures 4a-d. The experimental data are selected as shown in table 1:
TABLE 1 eight groups of bearing Experimental data parameters
[Table 1: image of the eight groups of bearing experimental-data parameters]
2. Noise reduction: since a neural network cannot normally process a time series, and the vibration signal of a rolling bearing is mostly a regular periodic signal, the vibration signal is preprocessed by Fourier transform; the transformed signals are shown in figures 5a-d. Min-max normalization is then applied, so that the values fluctuate within [0,1] while the original relative relationships of the data are preserved, as in equation (27):
v' = (v − min_A) / (max_A − min_A)  (27)
here, min_A and max_A are the minimum and maximum values of attribute A, v is the original value, and v' is the normalized value. As the equation shows, the maximum and minimum feature values map to 1 and 0 respectively. To show the difference before and after the Fourier processing more clearly, a portion (1000 points) of the experimental data is intercepted for comparison; the results are shown in figures 6a-b, 7a-b, 8a-b and 9a-b.
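The preprocessing pipeline of this step — Fourier transform followed by the min-max normalization of equation (27) — can be sketched as below; the synthetic test signal and its frequencies are illustrative assumptions.

```python
import numpy as np

def preprocess(signal):
    """FFT magnitude of the vibration signal, then min-max normalization
    into [0, 1] per equation (27): v' = (v - min_A) / (max_A - min_A)."""
    spectrum = np.abs(np.fft.rfft(signal))
    lo, hi = spectrum.min(), spectrum.max()
    return (spectrum - lo) / (hi - lo)

# Illustrative two-tone "vibration" signal, 1024 samples.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 120 * t)
v = preprocess(sig)
```

After normalization the largest spectral value is exactly 1 and the smallest exactly 0, matching the statement above that the extremes of the feature values map to 1 and 0.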
3. Feature extraction: to select the topology of the experimental model, the sample-set error percentage is compared for one, two and three hidden layers as the number of neurons increases, as shown in figure 10. Within a certain range, with the number of hidden layers fixed, the sample-set error decreases as the number of neurons grows. With two hidden layers the error is lowest, about 0.07, when the number of neurons reaches 500; with three hidden layers the lowest point, about 0.03, is reached at 400 neurons. To ensure the accuracy of the final experiment, the edge noise reduction automatic encoder in this experiment therefore uses three hidden layers with 400 neurons per hidden layer.
4. Fault diagnosis and division: the improved RVM divides different faults more effectively. For the improved kernel function, the kernel weight coefficients must be calculated; a method that computes them from the variance of the input samples in the feature space is proposed. Table 2 lists the variance of the input samples in the feature space under the Fisher-criterion constraint and the variance under the maximum-entropy constraint, from which λ = 0.52 and 1 − λ = 0.48 are calculated.
TABLE 2 feature space sample variance and weight coefficient under different constraints
[Table 2: image of the feature-space sample variances and weight coefficients under the different constraints]
5. Mechanical equipment state diagnosis: the mean of four experimental results is calculated and the identification accuracies of the different methods are compared; table 3 compares the accuracy before and after the improvement. The MDAE-RVM sub-health identification method reconstructs the input with an edge noise reduction automatic encoder (MDAE) and then classifies with a relevance vector machine (RVM); its accuracies over the four experiments are 85.6%, 80.3%, 82.7% and 83.1%, a mean of 82.9%. The IMDAE-RVM method reconstructs the input with the IMDAE and then classifies with the RVM; its accuracies are 90.3%, 90.7%, 89.9% and 91.4%, a mean of 90.55%, an improvement of 7.65 percentage points over MDAE-RVM. The IMDAE-IRVM method improves the single kernel function on the basis of the strong reconstruction edge noise reduction automatic encoder and solves the weight coefficients of the improved kernel from the feature-space sample variance; its accuracies are 94.2%, 95.7%, 94.6% and 94.3%, a mean of 94.7%, improvements of 11.8 and 4.15 percentage points over MDAE-RVM and IMDAE-RVM respectively. The trend of the mean square error with the number of iterations is shown in figure 11.
TABLE 3 method before and after improvement classification accuracy comparison
[Table 3: image of the classification-accuracy comparison before and after the improvement]
Table 4 shows the comparison of the accuracy of the bearing sub-health recognition by several different improved methods, and it can be seen from the table that the sub-health recognition method (IMDAE-IRVM) for the RVM optimized strong reconstruction MDAE improves the accuracy of the particle swarm optimized BP neural network (IPSO-BP), the ant colony optimized support vector machine (IACO-SVM) and the genetic optimized BP neural network (IGA-BP) by 7.2%, 2.2% and 3.46%, respectively.
TABLE 4 comparison of accuracy rates for different improved methods
[Table 4: image of the accuracy comparison of the different improved methods]
The data used in the experiment are measured bearing data. Drive-end data at a 12 KHz sampling frequency, 0 HP bearing load and 1797 r/min experimental speed are taken as the normal state; data at 12 KHz, 3 HP and 1730 r/min as the sub-health state; and drive-end fault data at 12 KHz, 0 HP, 1797 r/min with a 0.007 mm fault diameter as the fault state. The output data are binary coded: the normal state is [1 0 0], the sub-health state is [0 1 0], and the fault state is [0 0 1]. The experimental results are shown in tables 5 and 6:
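A small sketch of the output coding and the decision rule implied by tables 5 and 6 (the class with the largest output value wins), assuming three-element one-hot codes for the three states, consistent with the three-element outputs reported in those tables; the state names are illustrative.

```python
import numpy as np

STATES = ["normal", "sub-health", "fault"]
CODES = {
    "normal":     [1, 0, 0],
    "sub-health": [0, 1, 0],
    "fault":      [0, 0, 1],
}

def decide(output):
    # The class whose output component is largest is the recognition result.
    return STATES[int(np.argmax(output))]

label = decide([0.7903, 0.0134, 0.0988])   # an IMDAE-IRVM output from table 5
```

Applied to the reported vectors, an output like [0.2062 0.6535 0.0001] lands on the second component and is read as sub-health, which is exactly how the tables are interpreted in the text.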
TABLE 5 comparison of the results
[Table 5: image of the comparison of results]
As table 5 shows, the expected output of the second group of test-sample data is the normal state [1 0 0]. The output of the IMDAE-RVM sub-health identification method is [0.2062 0.6535 0.0001], which classifies the sample as sub-health and is not the expected output; the output of the IMDAE-IRVM method is [0.7903 0.0134 0.0988], which classifies it as the normal state, the expected output, indicating that the improved method identifies it correctly.
TABLE 6 comparison of the results
[Table 6: image of the comparison of results]
From table 6, the expected output of the seventh group of test-sample data is the fault state [0 0 1]. The IMDAE-RVM output is [0.0245 0.5783 0.1591], which classifies the sample as sub-health and is not the expected output; the IMDAE-IRVM output is [0.0351 0.2769 0.8984], which classifies it as the fault state, the expected output, indicating that the improved method identifies it correctly. For the fifth group of test-sample data, the expected output is the sub-health state [0 1 0]; the IMDAE-RVM output is [0.3826 0.4427 0.0032] and the IMDAE-IRVM output is [0.0001 0.6352 0.0093]. Both methods identify this sample successfully, but the IMDAE-RVM values 0.3826 and 0.4427 differ only slightly, which easily leads to inaccurate classification, whereas the IMDAE-IRVM output values differ widely, so misclassification is unlikely.
Comparing tables 5 and 6 shows that, relative to the IMDAE-RVM method, the IMDAE-IRVM method identifies the correct state of the test samples accurately, with clearly separated output values and no misclassification, proving the effectiveness of the sub-health identification method (IMDAE-IRVM) in which the RVM optimizes the strongly reconstructed MDAE.

Claims (1)

1. A bearing sub-health identification method for a correlation vector machine optimized strong reconstruction edge noise reduction automatic encoder is characterized by comprising the following steps:
1) Signal sampling: place acceleration sensors at the twelve-o'clock position of the motor housing and of the fan respectively, process the vibration signals acquired by the 16-channel data recorder with MATLAB, and collect the signals at a sampling frequency of 12 KHz, the drive-end signal being sampled at 48000 S/s;
2) Noise reduction: preprocess the signal with a Fourier transform, then apply min-max normalization, so that the values fluctuate within [0,1] while the original relative relationships of the data are preserved;
3) Feature extraction: determining the number of hidden layers of the edge noise reduction automatic encoder to be three layers, and inputting the preprocessed data into the strong reconstruction edge noise reduction automatic encoder; extracting features after mapping under the action of improved constraint conditions, wherein the input of a hidden layer is the output of the previous hidden layer plus preprocessed data, and calculating the actual output of the hidden layer; calculating partial derivatives of the loss function to the weight and the bias on the basis of input and output, and updating the weight and the bias; comparing the set loss function with the maximum set times, stopping when the expected value is reached, or recalculating the output of the hidden layer;
4) Fault diagnosis and division: dividing fault diagnosis by using an improved correlation vector machine;
5) And (3) diagnosing the state of the mechanical equipment: identifying the state of the bearing data by using the actually measured bearing data;
in the step 3), the specific method is as follows:
3.1) Preprocess the input data and set the processed input value and expected output value to x;
3.2) Randomly initialize the weights w and biases b, use cross entropy to adjust the parameters w, b, and define the loss function:
L(x, z) = −(1/a) Σ_{i=1}^{a} Σ_{k=1}^{d} [ x_ik ln z_ik + (1 − x_ik) ln(1 − z_ik) ]  (1)
wherein: d is the input dimension;
a represents the batch size after batch division;
the subscript ik denotes the k-th element of the i-th input in each batch of data;
summing first over the input vector and then over each sample of the batch data; the constraint condition of the automatic encoder is as in formula (2):
[Equation (2): image of the IMDAE constraint condition]
wherein: n represents the number of inputs;
m represents the number of hidden-layer neurons;
z represents the hidden-layer feature expression;
l represents the cross-entropy function;
σ_d² represents the variance of the noise interference term of the d-th dimension of the input data x;
β represents a weight coefficient;
the sparsity term KL(ρ‖ρ̂) is calculated as in formula (3):
KL(ρ‖ρ̂) = ρ ln(ρ/ρ̂) + (1 − ρ) ln((1 − ρ)/(1 − ρ̂))  (3)
wherein: ρ̂ represents the average activation degree;
ρ represents the activation degree of each layer of neurons;
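Formula (3) is a KL-divergence sparsity penalty between a target activation and the measured average activation; a hedged sketch follows, in which the clipping exists only to avoid log(0) and all names are illustrative.

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    """KL divergence of formula (3), summed over the hidden neurons:
    rho is the target activation, rho_hat the measured average activation."""
    rho_hat = np.clip(rho_hat, 1e-10, 1 - 1e-10)   # guard against log(0)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# The penalty vanishes when every neuron's average activation equals rho.
penalty = kl_sparsity(0.05, np.array([0.05, 0.05]))
```

The penalty grows as the measured activations drift from the target, which is what drives the hidden representation toward sparsity during training.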
3.3 Define the output of the previous hidden layer as x ', then the input of the next hidden layer is (x' + x);
3.4 Computing the actual output z of the output layer processed by the hidden layer;
3.5) On the basis of the input and output, take the partial derivatives of the loss function with respect to w and b; the reconstruction process uses the scalar form, calculated as in formulas (4) and (5):
[Equations (4) and (5): images of the partial derivatives of the loss function with respect to the weights and biases]
wherein w_qp represents the weight between nodes q and p;
w_kp represents the weight between nodes k and p;
b_yp represents the bias of node p;
b_zk represents the bias of node k;
for the i-th sample of each batch of data, y_p represents the input of the p-th hidden layer and z_k represents the input of the k-th output layer;
f' (x) is the first derivative of sigmoid: f' (x) = f (x) [1-f (x) ]
Taking the partial derivative of the loss function formula (1) yields formula (6):
[Equation (6): image of the partial derivative of the loss function]
wherein w_rs represents the weight between nodes r and s;
b_yr represents the bias of node r;
3.6) Update the parameter values according to the rule of formula (7), where η is the learning rate, w is the weight and b_y is the bias:
w ← w − η·∂L/∂w,  b_y ← b_y − η·∂L/∂b_y  (7)
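The update rule of formula (7) is plain gradient descent on the weights and biases; a minimal sketch with illustrative values for the gradients and learning rate:

```python
import numpy as np

def update(w, b, grad_w, grad_b, eta=0.1):
    """One gradient-descent step: subtract eta times the partial
    derivatives of the loss from the weight and the bias."""
    return w - eta * grad_w, b - eta * grad_b

w = np.array([0.5, -0.3])
b = 0.2
w_new, b_new = update(w, b, grad_w=np.array([1.0, -1.0]), grad_b=2.0)
```

In the training loop this step repeats until the loss reaches the preset threshold or the maximum number of iterations, as 3.7) describes.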
3.7) Compare the loss function value with the preset threshold and the maximum number of iterations; stop when the expectation is reached, otherwise return to 3.6) and continue training.
3.8) Input the output of the improved edge noise reduction automatic encoder into the improved correlation vector machine for classification.
In the step 4), the specific method is as follows:
4.1) Let there be two classes of input samples X_1 and X_2: X_1 = (x_11, x_12, ..., x_{1i}), X_2 = (x_21, x_22, ..., x_{2j}) (i = 1, 2, ..., l_1; j = 1, 2, ..., l_2); μ_1 and μ_2 are the mean vectors of X_1 and X_2 after mapping, calculated by formulas (8) and (9):
μ_1 = (1/l_1) Σ_{i=1}^{l_1} φ(x_{1i})  (8)
μ_2 = (1/l_2) Σ_{j=1}^{l_2} φ(x_{2j})  (9)
the within-class scatter matrices are solved by formulas (10) and (11):
[Equations (10) and (11): images of the within-class scatter matrices]
wherein K(·,·) is the kernel function and φ(·) is the nonlinear mapping corresponding to K(·,·);
K_1, K_2, K_11, K_22 and K_12 are given by formulas (12)-(16):
[Equations (12)-(16): images of the kernel matrices K_1, K_2, K_11, K_22, K_12]
the variance of the input samples mapped into the feature space under the constraint of the Fisher criterion is calculated as in formulas (17) and (18):
[Equations (17) and (18): images of the feature-space variance under the Fisher criterion]
under the constraint of the maximum entropy method, the variance of the input samples mapped into the feature space is solved;
the weight coefficient λ proposed herein is calculated as in formula (19):
[Equation (19): image of the formula for the weight coefficient λ]
4.2) According to the improved kernel function, the structure of the correlation vector machine is defined as in formula (20):
y(x; w) = Σ_{i=1}^{N} w_i K(x, x_i) + w_0  (20)
with weights w = (w_0, ..., w_N); K(x, x_i) is the weighted sum of a Gaussian kernel and a linear kernel, expressed as in formula (21):
K(x, x_i) = λ·K_Gauss(x, x_i) + (1 − λ)·K_lin(x, x_i)  (21)
the variance of the input samples mapped into the feature space is solved under the constraint of the Fisher criterion and under the constraint of the maximum entropy criterion, and the kernel weight coefficient λ is calculated according to the proposed method as in formula (22):
[Equation (22): image of the formula for the kernel weight coefficient λ]
4.3) Initialize the hyper-parameter α; the weight covariance Σ is as in formula (23):
Σ = −H = (φ^T Bφ + A)^{−1}  (23)
wherein H is the Hessian matrix, y = [y_1, y_2, ..., y_N]^T, B = diag(β_1, β_2, ..., β_N), β_n = σ[y(x_n)]{1 − σ[y(x_n)]}, and A = diag(α_0, α_1, ..., α_N);
4.4) Re-estimate the hyper-parameters as in formula (24):
[Equation (24): image of the hyper-parameter re-estimation formula]
wherein μ_i is the mean vector of X_i after mapping;
4.5) Correlation vector machine classification generally adopts the sigmoid probability function, expressed as in formula (25):
σ(y) = 1/(1 + e^{−y})  (25)
wherein y(x, w_MP) = φ(x)w_MP, and w_MP is calculated from formula (26):
[Equation (26): image of the formula for w_MP]
Solving yields the probability predictions p(t=1|x) and p(t=0|x) for the sample's class; the class corresponding to the larger value is the recognition result for x.
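The decision step of 4.5) can be sketched as follows: the sigmoid of y(x, w_MP) gives p(t=1|x), its complement gives p(t=0|x), and the larger probability selects the class. The function names are illustrative.

```python
import math

def sigmoid(y):
    """Formula (25): sigma(y) = 1 / (1 + e^(-y))."""
    return 1.0 / (1.0 + math.exp(-y))

def classify(y):
    p1 = sigmoid(y)       # p(t=1|x)
    p0 = 1.0 - p1         # p(t=0|x)
    return 1 if p1 > p0 else 0

c = classify(2.0)
```

Because the two probabilities sum to one, comparing them reduces to checking the sign of y(x, w_MP); the probabilistic form is still useful when the margin between the two values matters, as in the table 6 discussion above.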
CN201910388451.XA 2019-05-10 2019-05-10 Bearing sub-health identification method for optimized strong reconstruction edge noise reduction automatic encoder of correlation vector machine Active CN110348468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910388451.XA CN110348468B (en) 2019-05-10 2019-05-10 Bearing sub-health identification method for optimized strong reconstruction edge noise reduction automatic encoder of correlation vector machine


Publications (2)

Publication Number Publication Date
CN110348468A CN110348468A (en) 2019-10-18
CN110348468B true CN110348468B (en) 2023-01-20

Family

ID=68174645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910388451.XA Active CN110348468B (en) 2019-05-10 2019-05-10 Bearing sub-health identification method for optimized strong reconstruction edge noise reduction automatic encoder of correlation vector machine

Country Status (1)

Country Link
CN (1) CN110348468B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110967188B (en) * 2019-11-08 2020-11-24 珠海格力电器股份有限公司 Rolling bearing residual life prediction method and system based on iterative correlation vector machine
CN111626361B (en) * 2020-05-28 2023-08-11 辽宁大学 Bearing sub-health identification method for improving capsule network optimization hierarchical convolution
CN111759316A (en) * 2020-07-10 2020-10-13 郑州轻工业大学 Lower limb artificial limb gait recognition method based on muscle sound signal
CN113328755B (en) * 2021-05-11 2022-09-16 内蒙古工业大学 Compressed data transmission method facing edge calculation

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106017879A (en) * 2016-05-18 2016-10-12 河北工业大学 Universal circuit breaker mechanical fault diagnosis method based on feature fusion of vibration and sound signals
CN107101828A (en) * 2017-05-27 2017-08-29 辽宁大学 The bearing residual life Forecasting Methodology that a kind of encoder health degree is evaluated
CN108106844A (en) * 2017-11-22 2018-06-01 辽宁大学 A kind of Method for Bearing Fault Diagnosis of the automatic coding machine of self-tuning parameter adjustment

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107832687A (en) * 2017-10-27 2018-03-23 武汉大学 Fault diagnostic method for transformer winding based on wireless identification sensing

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN106017879A (en) * 2016-05-18 2016-10-12 河北工业大学 Universal circuit breaker mechanical fault diagnosis method based on feature fusion of vibration and sound signals
CN107101828A (en) * 2017-05-27 2017-08-29 辽宁大学 The bearing residual life Forecasting Methodology that a kind of encoder health degree is evaluated
CN108106844A (en) * 2017-11-22 2018-06-01 辽宁大学 A kind of Method for Bearing Fault Diagnosis of the automatic coding machine of self-tuning parameter adjustment

Non-Patent Citations (3)

Title
Rolling bearing sub-health recognition algorithm based on fused deep learning; Zhang Li et al.; Journal of Computer Applications (计算机应用); 2018-04-16; full text *
Research on a deep-learning sub-health recognition algorithm; Xu Xiao; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15 (No. 02); full text *
Research on a fused deep-learning sub-health recognition algorithm; Li Dawei; China Master's Theses Full-text Database, Information Science and Technology; 2018-03-15 (No. 03); full text *

Also Published As

Publication number Publication date
CN110348468A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110348468B (en) Bearing sub-health identification method for optimized strong reconstruction edge noise reduction automatic encoder of correlation vector machine
CN110849627B (en) Width migration learning network and rolling bearing fault diagnosis method based on same
CN109726524B (en) CNN and LSTM-based rolling bearing residual service life prediction method
CN110132554B (en) Rotary machine fault diagnosis method based on deep Laplace self-coding
CN110057584B (en) Degradation monitoring method for locomotive traction motor bearing
CN109827777B (en) Rolling bearing fault prediction method based on partial least square method extreme learning machine
CN113255848B (en) Water turbine cavitation sound signal identification method based on big data learning
CN112257530B (en) Rolling bearing fault diagnosis method based on blind signal separation and support vector machine
CN108106844B (en) A kind of Method for Bearing Fault Diagnosis of the automatic coding machine of self-tuning parameter adjustment
CN111967486A (en) Complex equipment fault diagnosis method based on multi-sensor fusion
CN115017945A (en) Mechanical fault diagnosis method and system based on enhanced convolutional neural network
CN111597651A (en) Rolling bearing performance degradation evaluation method based on HWPSO-SVDD model
CN112729834B (en) Bearing fault diagnosis method, device and system
CN113822139A (en) Equipment fault diagnosis method based on improved 1DCNN-BilSTM
CN111753891A (en) Rolling bearing fault diagnosis method based on unsupervised feature learning
CN115901249B (en) Rolling bearing performance degradation evaluation method combining feature optimization and multi-strategy optimization SVDD
Li et al. Gear pitting fault diagnosis using raw acoustic emission signal based on deep learning
CN114091525A (en) Rolling bearing degradation trend prediction method
CN112182912B (en) Manufacturing equipment spindle bearing health assessment method based on probability description and spectrum analysis
CN115290326A (en) Rolling bearing fault intelligent diagnosis method
CN114741945B (en) On-line fault diagnosis method for aero-engine
CN115587290A (en) Aero-engine fault diagnosis method based on variational self-coding generation countermeasure network
CN115238736A (en) Method for identifying early fault of rotary machine
CN113126489B (en) Intelligent BIT design method of heavy-duty gas turbine control system based on CNN-GRU-BINN
CN117458955A (en) Operation control method and system of motor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant