CN105241665A - Rolling bearing fault diagnosis method based on IRBFNN-AdaBoost classifier

Rolling bearing fault diagnosis method based on IRBFNN-AdaBoost classifier

Info

Publication number
CN105241665A
Authority
CN
China
Prior art keywords
classifier
fault
irbfnn
adaboost
fault diagnosis
Prior art date
Legal status
Pending
Application number
CN201510559195.8A
Other languages
Chinese (zh)
Inventor
崔江
唐军祥
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201510559195.8A
Publication of CN105241665A
Legal status: Pending

Abstract

The invention discloses a rolling bearing fault diagnosis method based on an IRBFNN-AdaBoost classifier, belonging to the field of rotating machinery fault diagnosis. The method comprises the following steps: 1) the bearing fault types, their number, and the signals to be measured are determined; 2) data are acquired for the different fault modes, a Fourier transform is applied to the signals, and frequency-domain fault features are extracted to form a sample set, which is divided into training samples and test samples; 3) a fault classifier is trained with the training samples; and 4) the classifier is tested with the test samples to assess its performance. The classifier is formed by cascading multiple classifiers and is optimized with a PSO (Particle Swarm Optimization) algorithm. The classifier can significantly improve the accuracy of rolling bearing fault diagnosis.

Description

Rolling bearing fault diagnosis method based on IRBFNN-AdaBoost classifier
Technical Field
The invention relates to a rolling bearing fault diagnosis method based on an IRBFNN-AdaBoost classifier, in which t RBFNNs are cascaded with the AdaBoost algorithm and the weight of each RBFNN is globally optimized with PSO (particle swarm optimization). The method belongs to the field of rotating machinery fault diagnosis.
Background
Rolling bearings are widely used in rotating machinery in industry, aerospace and other fields, and the health of a rolling bearing's running state usually directly affects the performance of the whole machine. The service life of rolling bearings is highly dispersed: bearings produced in the same batch can differ greatly in life, so scheduled inspection and replacement is not well suited to them. On the other hand, rolling bearings fail frequently, and once a fault occurs it can cause immeasurable economic loss and even catastrophic damage to the machine or loss of life. Condition monitoring and fault diagnosis of rolling bearings have therefore always received wide attention and have very important research significance and practical application value.
In recent years, with the development of fault diagnosis technology, fault diagnosis methods for rolling bearings have multiplied, mainly including methods based on fault mechanisms, methods based on signal processing, and methods based on artificial intelligence. Because the fault mechanism of a rolling bearing is very complex and an accurate fault model is difficult to establish, mechanism-based methods are rarely applied in practice. Signal-processing-based methods are easily disturbed by noise, have poor applicability, and are usually combined with other intelligent diagnosis algorithms. Artificial-intelligence-based methods are knowledge-model-based information processing techniques that integrate signal processing, machine learning and pattern recognition; they overcome the shortcomings of any single method, have great advantages in fault diagnosis applications, and represent the trend of future development. The invention therefore performs fault diagnosis with a classifier of excellent classification performance, the radial basis function neural network (RBFNN). To overcome the shortcomings of a single neural network diagnosis algorithm and improve diagnosis accuracy, the invention adopts multi-classifier integration: the AdaBoost algorithm cascades t RBFNNs into an ensemble, and the PSO algorithm optimizes the weight of each RBFNN so that the performance of the classifier is optimal or near optimal, which markedly improves the accuracy of rolling bearing fault diagnosis.
Disclosure of Invention
The invention provides a rolling bearing fault diagnosis method based on an IRBFNN-AdaBoost classifier for the field of rolling bearing fault diagnosis: t RBFNNs are cascaded with the AdaBoost algorithm and the weight of each RBFNN is optimized with PSO (particle swarm optimization), so that the overall performance of the classifier is optimal or near optimal and the fault diagnosis accuracy for rolling bearings is markedly improved.
In order to achieve the purpose, the invention adopts the following technical scheme:
a rolling bearing fault diagnosis method based on an IRBFNN-AdaBoost classifier comprises the following steps:
Step 1: analyze the rolling bearing to be monitored, and determine its fault types, their number, and the signals to be measured;
Step 2: data acquisition and fault feature extraction. Sample data of the normal bearing, inner-ring fault, outer-ring fault and rolling-element fault are acquired with a vibration sensor and imported into a computer; a Fourier transform is applied to the vibration signals, frequency-domain fault features reflecting the various fault modes are extracted to form a fault feature sample set, and the sample set is divided into a training sample set and a testing sample set (an illustrative feature-extraction sketch follows this step);
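For illustration only, the following is a minimal sketch of the kind of frequency-domain feature extraction this step describes; the band-averaging scheme and the name `extract_frequency_features` are assumptions for the example, not the patent's specific feature set.

```python
import numpy as np

def extract_frequency_features(signal, n_bands=8):
    """Hypothetical frequency-domain features: one-sided FFT magnitude spectrum
    averaged over n_bands equal-width bands (not the patent's exact feature set)."""
    spectrum = np.abs(np.fft.rfft(signal))            # magnitude spectrum of one vibration segment
    bands = np.array_split(spectrum, n_bands)         # split the spectrum into frequency bands
    return np.array([band.mean() for band in bands])  # mean magnitude per band as the feature vector

# Building a (samples x features) matrix from acquired vibration segments (variable names assumed):
# X_train = np.vstack([extract_frequency_features(seg) for seg in training_segments])
```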
Step 3: input sample data and initialize. Select m groups of sample data from the fault feature sample space obtained in step 2 as training samples, and initialize the weight of the j-th sample in the t-th cycle as D_t(j) = 1/m, where j = 1, 2, ..., m and t = 1, 2, ..., T; t is the iteration index (i.e., the index of the classifier) and T is the maximum number of iterations.
Step 4: train an RBFNN weak classifier. An RBFNN is typically a two-layer forward network whose hidden layer uses radial basis functions that produce a localized response to the input excitation, i.e., a hidden unit produces a significant non-zero response only when the input falls within a small designated region of the input space. The output of the output layer is the weighted sum of the outputs of all hidden units. The weights from the input to the hidden units are fixed at 1, and the weights between the hidden units and the output unit are adjustable.
The RBF radial basis function selected by the invention is a Gaussian function, and the specific formula is as follows:
$$G(X, T_i) = \exp\!\left(-\frac{\|X - T_i\|^2}{2\sigma_i^2}\right), \quad i = 1, 2, \ldots, M$$  (1-1)
where G(X, T_i) is the output of the i-th hidden unit; X is a P-dimensional input vector; T_i is the center of the i-th radial basis function; σ_i is the normalization (width) parameter of the i-th hidden node; exp(·) is the exponential function; ||X − T_i|| is the Euclidean distance from the sample X to the center of the radial basis function; and M is the number of hidden-layer nodes.
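As a reading aid, the following is a minimal sketch of the Gaussian hidden layer of equation (1-1) and the weighted-sum output it feeds; the function names and array shapes are assumptions.

```python
import numpy as np

def rbf_hidden_layer(X, centers, sigmas):
    """Gaussian radial basis outputs G(X, T_i) = exp(-||X - T_i||^2 / (2 sigma_i^2)), cf. eq. (1-1).
    X: (P,) input vector; centers: (M, P) data centers T_i; sigmas: (M,) widths."""
    dists_sq = np.sum((centers - X) ** 2, axis=1)      # ||X - T_i||^2 for each hidden node
    return np.exp(-dists_sq / (2.0 * sigmas ** 2))     # (M,) hidden-layer outputs

def rbfnn_output(X, centers, sigmas, weights):
    """Network output f(X) = sum_i w_i * G(X, T_i): weighted sum of the hidden outputs."""
    return np.dot(weights, rbf_hidden_layer(X, centers, sigmas))
```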
The RBFNN has three sets of learning parameters: the data centers T_i of the radial basis functions, the widths σ_i, and the weights w_i between the hidden layer and the output layer. In the invention, T_i, σ_i and w_i are trained simultaneously with a supervised learning method. The training of the weak classifier proceeds as follows (a gradient-descent code sketch follows step 4.10):
Step 4.1: initialize. Normalize the raw data X; assign arbitrary initial values to w_i, T_i and σ_i; preset the allowable error and the learning step sizes η_1, η_2, η_3.
Step 4.2: calculate e_j. The specific formula is:
$$e_j = d_j - f(X_j) = d_j - \sum_{i=1}^{M} w_i \cdot G(X_j, T_i), \quad j = 1, 2, \ldots, m$$  (1-2)
where d_j is the ideal output, f(X_j) is the actual output, and e_j is the difference between the ideal and actual outputs.
Step 4.3: the amount of change in the output unit weight is calculated. The specific calculation formula is as follows:
$$\frac{\partial E(n)}{\partial w_i(n)} = -\frac{1}{m} \sum_{j=1}^{m} e_j \cdot \exp\!\left(-\frac{\|X_j - T_i\|^2}{2\sigma_i^2}\right)$$  (1-3)
where ∂E(n)/∂w_i(n) denotes the partial derivative of E(n) with respect to w_i(n), and n is the iteration (cycle) number.
Step 4.4: the weight is changed. The specific calculation formula is as follows:
$$w_i(n+1) = w_i(n) - \eta_1 \frac{\partial E(n)}{\partial w_i(n)}$$  (1-4)
Step 4.5: calculate the change amount of the hidden-unit centers. The specific formula is:
$$\frac{\partial E(n)}{\partial T_i(n)} = -\frac{w_i}{m\,\sigma_i^2} \sum_{j=1}^{m} e_j \cdot \exp\!\left(-\frac{\|X_j - T_i\|^2}{2\sigma_i^2}\right) \cdot (X_j - T_i)$$  (1-5)
where ∂E(n)/∂T_i(n) denotes the partial derivative of E(n) with respect to T_i(n).
Step 4.6: the center is changed. The specific calculation formula is as follows:
$$T_i(n+1) = T_i(n) - \eta_2 \frac{\partial E(n)}{\partial T_i(n)}$$  (1-6)
step 4.7: the amount of change in the width of the function is calculated. The specific calculation formula is as follows:
$$\frac{\partial E(n)}{\partial \sigma_i(n)} = -\frac{w_i}{m\,\sigma_i^3} \sum_{j=1}^{m} e_j \cdot \exp\!\left(-\frac{\|X_j - T_i\|^2}{2\sigma_i^2}\right) \cdot \|X_j - T_i\|^2$$  (1-7)
where ∂E(n)/∂σ_i(n) denotes the partial derivative of E(n) with respect to σ_i(n).
Step 4.8: the width is changed. The specific calculation formula is as follows:
$$\sigma_i(n+1) = \sigma_i(n) - \eta_3 \frac{\partial E(n)}{\partial \sigma_i(n)}$$  (1-8)
Step 4.9: calculate the error. The specific formula is:
$$E = \frac{1}{2m} \sum_{j=1}^{m} e_j^2$$  (1-9)
Step 4.10: judge whether the allowable error or the maximum cycle number has been reached; if so, go to step 5; otherwise, return to step 4.2.
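A compact sketch of the gradient-descent loop of steps 4.1 to 4.10 (equations (1-2) through (1-9)) is given below. The vectorized layout, default step sizes and stopping test are assumptions, and the center gradient is written in the vector form used in equation (1-5).

```python
import numpy as np

def train_rbfnn(X, d, centers, sigmas, weights,
                eta1=0.01, eta2=0.01, eta3=0.01, tol=1e-3, max_epochs=500):
    """Supervised gradient-descent sketch for one RBFNN weak classifier.
    X: (m, P) training inputs; d: (m,) desired outputs; centers: (M, P); sigmas, weights: (M,)."""
    m = len(X)
    for _ in range(max_epochs):
        # hidden outputs G(X_j, T_i) and per-sample errors e_j = d_j - f(X_j)   (eq. 1-2)
        G = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
                   / (2.0 * sigmas ** 2))                                  # (m, M)
        e = d - G @ weights                                                # (m,)
        # gradients of E = (1/2m) * sum_j e_j^2   (eqs. 1-3, 1-5, 1-7)
        grad_w = -(1.0 / m) * (G.T @ e)
        diff = X[:, None, :] - centers[None, :, :]                         # (m, M, P)
        grad_T = -(weights / (m * sigmas ** 2))[:, None] \
                 * np.einsum('jm,jmp->mp', G * e[:, None], diff)
        grad_sigma = -(weights / (m * sigmas ** 3)) \
                     * np.sum(G * e[:, None] * np.sum(diff ** 2, axis=2), axis=0)
        # parameter updates   (eqs. 1-4, 1-6, 1-8)
        weights = weights - eta1 * grad_w
        centers = centers - eta2 * grad_T
        sigmas = sigmas - eta3 * grad_sigma
        if 0.5 / m * np.sum(e ** 2) < tol:          # overall error E   (eq. 1-9)
            break
    return centers, sigmas, weights
```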
Step 5: calculate the classification error e_t with the formula:
$$e_t = \sum_{j=1}^{m} D_t(j), \quad (g_t \ne y_t)$$  (1-10)
where g_t is the actual classification result of the t-th cycle; y_t is the expected classification result of the t-th cycle; and D_t(j) is the weight of the j-th sample in the t-th cycle.
Step 6: calculate the weight a_t with the formula:
$$a_t = \frac{1}{2} \ln\!\left(\frac{1 - e_t}{e_t}\right)$$  (1-11)
where a_t is the weight of the t-th RBFNN and ln(·) is the natural logarithm.
Step 7: adjust the sample weights. According to the weight a_t obtained in step 6, adjust the weights of the training samples for the next round with the formula:
$$D_{t+1}(j) = \frac{D_t(j)}{B_t} \cdot \exp\!\left[-a_t \cdot y_t(j) \cdot g_t(j)\right], \quad j = 1, 2, \ldots, m$$  (1-12)
where B_t = ||D_t(j)||, with ||·|| denoting the 2-norm; y_t(j) is the expected classification result of the j-th sample in the t-th cycle; and g_t(j) is the actual classification result of the j-th sample in the t-th cycle.
Step 8: loop judgment. Set t = t + 1 and then judge: if the error e_t > e_s (e_s is the allowable classification error) and the iteration count t is less than T, return to step 4 and continue; otherwise, proceed to step 9 (a bookkeeping sketch of steps 5 to 8 follows).
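A minimal sketch of the AdaBoost bookkeeping of steps 5 to 8 (equations (1-10) through (1-12)), assuming labels in {-1, +1}; the normalization follows the 2-norm B_t stated above rather than the more common sum-to-one normalization.

```python
import numpy as np

def adaboost_round(D, y, g):
    """One round of AdaBoost bookkeeping.
    D: (m,) current sample weights; y, g: (m,) expected / actual labels in {-1, +1}."""
    e_t = np.sum(D[g != y])                    # classification error e_t     (eq. 1-10)
    a_t = 0.5 * np.log((1.0 - e_t) / e_t)      # classifier weight a_t        (eq. 1-11), assumes 0 < e_t < 1
    B_t = np.linalg.norm(D)                    # B_t = ||D_t(j)||, 2-norm as stated above
    D_next = (D / B_t) * np.exp(-a_t * y * g)  # re-weight training samples   (eq. 1-12)
    return e_t, a_t, D_next
```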
Step 9: use the PSO algorithm to optimize the weights of the weak classifiers. The specific steps are as follows (a PSO code sketch follows step 9.6):
Step 9.1: initialize the particle swarm. Regard the t RBFNN weak classifiers as particles without volume or mass in a D-dimensional space, with a swarm size of S. The initial position of each particle is represented by its weighting coefficients λ_s = (λ_{s1}, λ_{s2}, ..., λ_{sD}) and its velocity is v_s = (v_{s1}, v_{s2}, ..., v_{sD}), where s = 1, 2, ..., t. The maximum number of iterations is G_max; w is the inertia weight coefficient; c_1 and c_2 are velocity (acceleration) constants;
Step 9.2: calculate the individual optimum of each particle according to the fitness function, and compare them to obtain the global optimum within the current generation of particles. To obtain a higher recognition rate and higher recognition accuracy with fewer weak classifiers, two judgment functions are introduced:
$$F_1(i) = 1 - I/S, \quad I = \sum_{s=1}^{N} \lambda_s$$  (1-13)
$$F_2(i) = \begin{cases} 1 - \dfrac{n_+}{p} & (p_+/p \ge h) \\ 0 & (p_+/p < h) \end{cases}$$  (1-14)
The fitness function is taken as F(i) = w_1·F_1(i) + w_2·F_2(i).
where w_1, w_2 are weight coefficients; N is the total number of negative samples; n_+ is the number of samples wrongly judged as positive under particle s; p is the total number of positive samples; p_+ is the total number of samples judged as positive under particle s; and h is the expected hit rate, which can theoretically reach 99.99%.
Step 9.3: comparing the current adaptive value with the best position pbest, if the adaptive value is better, taking the adaptive value as the best position pbest, otherwise, keeping the pbest unchanged;
Step 9.4: compare each particle's pbest with the global best position gbest; if pbest is better than the global optimum, assign pbest to gbest; otherwise keep gbest unchanged;
Step 9.5: update the velocity and position of each particle according to the results of steps 9.3 and 9.4, using the formulas:
$$v_s^{k+1} = w\,v_s^k + c_1 \cdot \mathrm{rand} \cdot (\mathrm{pbest}_s^k - \lambda_s^k) + c_2 \cdot \mathrm{rand} \cdot (\mathrm{gbest}_s^k - \lambda_s^k)$$  (1-15)
$$\lambda_s^{k+1} = \lambda_s^k + v_s^{k+1}$$  (1-16)
where rand is a random number in [-1, 1], and k ≤ G_max.
Step 9.6: if the end condition is not reached (there is not enough adaptation value or the maximum number of iterations G is not exceeded according to the fitness function)max) Go to step 9.2, otherwise, go to step 10.
Step 10: synthesize the IRBFNN-AdaBoost classifier. After t rounds of training, t weak classification functions are obtained; combining them yields the strong classifier h:
$$h = \mathrm{sign}\!\left[\sum_{t} a_t \cdot f(g_t, a_t)\right]$$  (1-17)
where sign(·) is the sign function.
Step 11: evaluate the performance of the classifier with the test sample set and calculate indices such as the fault diagnosis accuracy, where diagnosis accuracy = (number of correctly diagnosed samples / total number of samples) × 100%.
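For illustration, a sketch of the final combination of equation (1-17) and the accuracy index of step 11, assuming each weak classifier returns labels in {-1, +1} and that the classifier weights are the PSO-adjusted a_t values.

```python
import numpy as np

def strong_classify(X, weak_classifiers, alphas):
    """Strong-classifier decision h = sign(sum_t a_t * g_t(x)), cf. equation (1-17).
    weak_classifiers: list of callables mapping X (n, P) to labels in {-1, +1}."""
    votes = sum(a * clf(X) for a, clf in zip(alphas, weak_classifiers))
    return np.sign(votes)

def diagnosis_accuracy(y_pred, y_true):
    """Diagnosis accuracy = correctly diagnosed samples / total samples * 100%."""
    return 100.0 * np.mean(y_pred == y_true)
```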
The invention has the following beneficial effects:
t RBFNNs are cascaded with the AdaBoost algorithm, the weight of each RBFNN is globally optimized with PSO, and the resulting IRBFNN-AdaBoost classifier is used for fault diagnosis of rolling bearings in rotating machinery, markedly improving the fault diagnosis accuracy.
Drawings
FIG. 1 fault diagnosis flow chart
FIG. 2 RBFNN structure diagram
FIG. 3 RBFNN training flow chart
FIG. 4 PSO implementation flow chart
Detailed Description
The technical scheme of the invention is explained in detail in the following with reference to the attached figures 1, 2, 3 and 4.
The invention designs a rolling bearing fault diagnosis method based on an IRBFNN-AdaBoost classifier, which mainly comprises a fault mode and test signal analysis part, a data acquisition part, a fault feature extraction part, a single RBFNN training part, a weak classifier cascade construction part, a PSO (particle swarm optimization) part, a weak classifier weight coefficient generation part and a test result analysis part. The adopted step flow chart is shown in fig. 1, and the specific operation comprises the following steps:
fault pattern analysis, data acquisition and fault feature extraction
Before the method is applied, the rolling bearing is first analyzed to determine the fault types (wear, fatigue, corrosion, fracture, indentation, scuffing and the like) and their number (outer-ring fault, inner-ring fault, rolling-element fault and the like), and the fault signals to be collected are analyzed. For a rolling bearing, the most common and simplest approach is to collect the vibration signal during operation with a vibration sensor and judge the running state by processing and analyzing this signal. Data acquisition and fault feature extraction are then carried out: sample data of the normal bearing, inner-ring fault, outer-ring fault and rolling-element fault are collected with the vibration sensor and imported into a computer, a Fourier transform is applied to the vibration signals, frequency-domain fault features reflecting the various fault modes are extracted to form a sample set, and the sample set is divided into a training sample set and a testing sample set.
(II) Training the classifier, which specifically comprises the following steps:
Step 1: input sample data and initialize. Select m groups of sample data from the fault feature sample space obtained in part (I) as training samples, and initialize the weight of the j-th sample in the t-th cycle as D_t(j) = 1/m, where j = 1, 2, ..., m and t = 1, 2, ..., T; t is the iteration index and T is the maximum number of iterations;
Step 2: train an RBFNN weak classifier. An RBFNN is typically a two-layer forward network whose hidden layer uses radial basis functions that produce a localized response to the input excitation, i.e., a hidden unit produces a significant non-zero response only when the input falls within a small designated region of the input space. The output of the output layer is the weighted sum of the outputs of all hidden units. The weights from the input to the hidden units are fixed at 1, and the weights between the hidden units and the output unit are adjustable.
The RBF radial basis function selected by the invention is a Gaussian function, and the specific formula is as follows:
$$G(X, T_i) = \exp\!\left(-\frac{\|X - T_i\|^2}{2\sigma_i^2}\right), \quad i = 1, 2, \ldots, M$$  (2-1)
where G(X, T_i) is the output of the i-th hidden unit; X is a P-dimensional input vector; T_i is the center of the i-th radial basis function; σ_i is the normalization (width) parameter of the i-th hidden node; exp(·) is the exponential function; ||X − T_i|| is the Euclidean distance from the sample X to the center of the radial basis function; and M is the number of hidden-layer nodes.
The RBFNN has three sets of learning parameters: the data centers T_i of the radial basis functions, the widths σ_i, and the weights w_i between the hidden layer and the output layer. In the invention, T_i, σ_i and w_i are trained simultaneously with a supervised learning method. The training of the weak classifier proceeds as follows:
Step 2.1: initialize. Normalize the raw data X; assign arbitrary initial values to w_i, T_i and σ_i; preset the allowable error and the learning step sizes η_1, η_2, η_3.
Step 2.2: calculate e_j. The specific formula is:
$$e_j = d_j - f(X_j) = d_j - \sum_{i=1}^{M} w_i \cdot G(X_j, T_i), \quad j = 1, 2, \ldots, m$$  (2-2)
where d_j is the ideal output, f(X_j) is the actual output, and e_j is the difference between the ideal and actual outputs.
Step 2.3: the amount of change in the output unit weight is calculated. The specific calculation formula is as follows:
$$\frac{\partial E(n)}{\partial w_i(n)} = -\frac{1}{m} \sum_{j=1}^{m} e_j \cdot \exp\!\left(-\frac{\|X_j - T_i\|^2}{2\sigma_i^2}\right)$$  (2-3)
where ∂E(n)/∂w_i(n) denotes the partial derivative of E(n) with respect to w_i(n), and n is the iteration (cycle) number.
Step 2.4: the weight is changed. The specific calculation formula is as follows:
$$w_i(n+1) = w_i(n) - \eta_1 \frac{\partial E(n)}{\partial w_i(n)}$$  (2-4)
Step 2.5: calculate the change amount of the hidden-unit centers. The specific formula is:
$$\frac{\partial E(n)}{\partial T_i(n)} = -\frac{w_i}{m\,\sigma_i^2} \sum_{j=1}^{m} e_j \cdot \exp\!\left(-\frac{\|X_j - T_i\|^2}{2\sigma_i^2}\right) \cdot (X_j - T_i)$$  (2-5)
where ∂E(n)/∂T_i(n) denotes the partial derivative of E(n) with respect to T_i(n).
Step 2.6: the center is changed. The specific calculation formula is as follows:
$$T_i(n+1) = T_i(n) - \eta_2 \frac{\partial E(n)}{\partial T_i(n)}$$  (2-6)
step 2.7: the amount of change in the width of the function is calculated. The specific calculation formula is as follows:
$$\frac{\partial E(n)}{\partial \sigma_i(n)} = -\frac{w_i}{m\,\sigma_i^3} \sum_{j=1}^{m} e_j \cdot \exp\!\left(-\frac{\|X_j - T_i\|^2}{2\sigma_i^2}\right) \cdot \|X_j - T_i\|^2$$  (2-7)
where ∂E(n)/∂σ_i(n) denotes the partial derivative of E(n) with respect to σ_i(n).
Step 2.8: the width is changed. The specific calculation formula is as follows:
$$\sigma_i(n+1) = \sigma_i(n) - \eta_3 \frac{\partial E(n)}{\partial \sigma_i(n)}$$  (2-8)
Step 2.9: calculate the error. The specific formula is:
$$E = \frac{1}{2m} \sum_{j=1}^{m} e_j^2$$  (2-9)
Step 2.10: judge whether the allowable error or the maximum cycle number has been reached; if so, go to step 3; otherwise, return to step 2.2.
Step 3: calculate the classification error e_t with the formula:
$$e_t = \sum_{j=1}^{m} D_t(j), \quad (g_t \ne y_t)$$  (2-10)
where g_t is the actual classification result of the t-th cycle; y_t is the expected classification result of the t-th cycle; and D_t(j) is the weight of the j-th sample in the t-th cycle.
Step 4: calculate the weight a_t with the formula:
$$a_t = \frac{1}{2} \ln\!\left(\frac{1 - e_t}{e_t}\right)$$  (2-11)
where a_t is the weight of the t-th RBFNN and ln(·) is the natural logarithm.
Step 5: adjust the sample weights. According to the weight a_t obtained in step 4, adjust the weights of the training samples for the next round with the formula:
$$D_{t+1}(j) = \frac{D_t(j)}{B_t} \cdot \exp\!\left[-a_t \cdot y_t(j) \cdot g_t(j)\right], \quad j = 1, 2, \ldots, m$$  (2-12)
where B_t = ||D_t(j)||, with ||·|| denoting the 2-norm; y_t(j) is the expected classification result of the j-th sample in the t-th cycle; and g_t(j) is the actual classification result of the j-th sample in the t-th cycle.
Step 6: loop judgment. Set t = t + 1 and then judge: if the error e_t > e_s (e_s is the allowable classification error) and the iteration count t is less than T, return to step 2 and continue; otherwise, proceed to step 7.
Step 7: use the PSO algorithm to optimize the weights of the weak classifiers. The specific steps are as follows:
Step 7.1: initialize the particle swarm. Regard the t RBFNN weak classifiers as particles without volume or mass in a D-dimensional space, with a swarm size of S. The initial position of each particle is represented by its weighting coefficients λ_s = (λ_{s1}, λ_{s2}, ..., λ_{sD}) and its velocity is v_s = (v_{s1}, v_{s2}, ..., v_{sD}), where s = 1, 2, ..., t. The maximum number of iterations is G_max; w is the inertia weight coefficient; c_1 and c_2 are velocity (acceleration) constants;
Step 7.2: calculate the individual optimum of each particle according to the fitness function, and compare them to obtain the global optimum within the current generation of particles. To obtain a higher recognition rate and higher recognition accuracy with fewer weak classifiers, two judgment functions are introduced:
$$F_1(i) = 1 - I/S, \quad I = \sum_{s=1}^{N} \lambda_s$$  (2-13)
$$F_2(i) = \begin{cases} 1 - \dfrac{n_+}{p} & (p_+/p \ge h) \\ 0 & (p_+/p < h) \end{cases}$$  (2-14)
The fitness function is taken as F(i) = w_1·F_1(i) + w_2·F_2(i).
where w_1, w_2 are weight coefficients; N is the total number of negative samples; n_+ is the number of samples wrongly judged as positive under particle s; p is the total number of positive samples; p_+ is the total number of samples judged as positive under particle s; and h is the expected hit rate, which can theoretically reach 99.99%.
Step 7.3: comparing the current adaptive value with the best position pbest, if the adaptive value is better, taking the adaptive value as the best position pbest, otherwise, keeping the pbest unchanged;
Step 7.4: compare each particle's pbest with the global best position gbest; if pbest is better than the global optimum, assign pbest to gbest; otherwise keep gbest unchanged;
Step 7.5: update the velocity and position of each particle according to the results of steps 7.3 and 7.4, using the formulas:
$$v_s^{k+1} = w\,v_s^k + c_1 \cdot \mathrm{rand} \cdot (\mathrm{pbest}_s^k - \lambda_s^k) + c_2 \cdot \mathrm{rand} \cdot (\mathrm{gbest}_s^k - \lambda_s^k)$$  (2-15)
$$\lambda_s^{k+1} = \lambda_s^k + v_s^{k+1}$$  (2-16)
where rand is a random number in [-1, 1], and k ≤ G_max.
Step 7.6: if the termination condition is not met (a sufficiently good fitness value has not been reached and the number of iterations is still below G_max), return to step 7.2; otherwise, go to step 8.
Step 8: synthesize the IRBFNN-AdaBoost classifier. After t rounds of training, t weak classification functions are obtained; combining them yields the strong classifier h:
$$h = \mathrm{sign}\!\left[\sum_{t} a_t \cdot f(g_t, a_t)\right]$$  (2-17)
where sign(·) is the sign function.
(III) Evaluate the performance of the classifier with the test sample set and calculate indices such as the fault diagnosis accuracy, where diagnosis accuracy = (number of correctly diagnosed samples / total number of samples) × 100%.

Claims (2)

1. A rolling bearing fault diagnosis method based on an IRBFNN-AdaBoost classifier is characterized by mainly comprising the following basic steps:
1) carrying out experimental analysis on the bearing to be tested, and determining the fault type, the fault number and the required measurement signal of the bearing;
2) data acquisition and fault feature extraction: collecting vibration signals of the faulty bearing during operation, importing them into a computer, performing a Fourier transform on the signals, extracting fault features capable of reflecting the various fault modes, and forming a feature sample set, which is divided into a training sample set and a testing sample set;
3) constructing and training the IRBFNN-AdaBoost classifier with the training sample set; during the design and training of the classifier, the PSO optimization algorithm is used to optimize and adjust its structure, yielding an IRBFNN-AdaBoost fault classifier with an optimized structure;
4) evaluating the performance of the IRBFNN-AdaBoost fault classifier with the test sample set, and calculating indices such as the fault diagnosis accuracy.
2. The method according to claim 1, wherein the IRBFNN-AdaBoost classifier of step 3) is a PSO-optimized integrated classifier, characterized in that, unlike common methods that adopt a single classifier, the invention integrates a plurality of neural network sub-classifiers: t sub-classifiers are cascaded with the AdaBoost method, and the weight of each sub-classifier is then optimized with the PSO algorithm to reach the global optimum or near-optimum, so that the overall performance of the classifier is optimal or near optimal.
Application CN201510559195.8A, filed 2015-09-06 (priority date 2015-09-06) — Rolling bearing fault diagnosis method based on IRBFNN-AdaBoost classifier — CN105241665A, status: Pending

Publications (1)

Publication Number Publication Date
CN105241665A true CN105241665A (en) 2016-01-13

Family

ID=55039388


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20160113)