CN110288046B - Fault prediction method based on wavelet neural network and hidden Markov model - Google Patents
- Publication number: CN110288046B (application CN201910587643.3A)
- Authority: CN (China)
- Prior art keywords: state, hidden, probability, time, layer
- Legal status: Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/295—Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models
- G06F30/20—Design optimisation, verification or simulation
Abstract
The invention discloses a fault prediction method based on a wavelet neural network and a hidden Markov model, comprising the following steps: 1. input a sample; 2. reduce the dimension of the sample data with a wavelet neural network, updating the weights and biases from the input layer to the hidden layer and from the hidden layer to the output layer; if the difference between the output-layer and input-layer values exceeds a threshold, return to step 1, otherwise go to step 3; 3. output the wavelet neural network model; 4. initialize a hidden Markov model; 5. take a different sample and replace the sample data with the hidden-layer neuron values of the wavelet neural network; 6. establish the hidden Markov model; 7. update the parameters of the hidden Markov model with the forward-backward algorithm and compute the conditional probability; 8. if the computed conditional probability has converged, go to step 9, otherwise return to step 5; 9. output the final hidden Markov model; 10. input the historical operation data of the equipment under test and compute its most probable degradation with the hidden Markov model.
Description
Technical Field
The invention discloses a fault prediction method, and particularly relates to a fault prediction method based on a wavelet neural network and a hidden Markov model.
Background
The maintenance of urban rail transit on-board operating equipment has gone through three stages: corrective (after-failure) repair, preventive maintenance, and condition-based maintenance. In condition-based maintenance, the possibility of equipment failure is monitored and inferred from the equipment's operating data while it is still working normally, so that maintenance measures can be taken in advance. For urban rail transit operated at high frequency, this greatly improves operating efficiency and reduces the safety hazards caused by equipment failure. Compared with traditional regression prediction, neural-network-based fault prediction can extract the intrinsic rules and essential characteristics of the data from sample data alone, without building an explicit mathematical model, and offers learning, associative memory, simple reasoning, and self-adaptation. Neural networks have limitations, however: they are sensitive to the initial weights, and because the weights are adjusted step by step in the direction of local improvement, the algorithm can fall into a local extremum and the weights converge to a local minimum. The network structure is usually chosen by experience; a structure that is too large tends to overfit, while one that is too small tends not to converge.
Current means of predicting failure with a neural network mainly include: selecting the initial values of the neural network with an optimization algorithm, to keep the system out of local optima and improve the convergence of the network; and selecting a suitable network structure with algorithms such as PCA (principal component analysis), to reduce the parameter uncertainty of the prediction model, in contrast to the traditional empirical selection. References:
[1] Liu Haoran, Zhao Cuixiang, Li Xuan, et al. Research on a neural network optimization algorithm based on an improved genetic algorithm [J]. Chinese Journal of Scientific Instrument, 2016, 37(7): 1573-1580.
[2] PCA-based probabilistic neural network structure optimization [J]. Journal of Tsinghua University (Science and Technology), 2008, 48(1): 141-144.
[3] Bar-Itzhack I Y, Oshman Y. Attitude determination from vector observations: quaternion estimation [J]. IEEE Transactions on Aerospace and Electronic Systems, 1985(1): 128-136.
[4] Design of a closed-loop autonomous navigation control system for a hexapod robot based on a fuzzy neural network [J]. Robot, 2018, 40(1): 16-23.
[5] Li Xinjie. A method for realizing fault prediction technology in the condition-based maintenance mode [J]. Equipment Management and Maintenance, 2017(17): 68-70.
[6] Chen X., Xiong X., et al. A big-data reduction method for relay protection state evaluation [J]. Proceedings of the CSEE, 2015, 35(3): 538-548.
[7] Design of a fault diagnosis algorithm for aero-mechanical systems based on a hidden Markov model [J]. Modern Industrial Economy and Informationization, 2016, 6(5): 44-45.
[8] Wang Xing, Xia Guojing, Han Jie, et al. Research on a turbine fault diagnosis method based on a hidden Markov model [J]. Chinese Journal of Engineering Mechanics, 2016, 14(6).
Disclosure of Invention
The invention provides a fault prediction method based on a wavelet neural network and a hidden Markov model. The neural network extracts the fault causes from the collected data, achieving data dimension reduction, and the hidden Markov model then predicts the fault probability, ensuring the reliability of the network structure and weight selection. The method comprises the following main steps. Step 1: collect sample data. Step 2: construct the hidden layer of the neural network with wavelet analysis, build an auto-encoder, and reduce the dimension of the sample data with the neural network. Step 3: predict equipment failure with the hidden Markov model.
In order to solve the problems, the invention adopts the following technical scheme: a failure prediction method based on a wavelet neural network and a hidden Markov model is characterized by comprising the following steps:
step 1: acquiring sample data, including historical operation data, maintenance data and environment data, wherein the historical operation data refers to the time of fault-free operation of equipment, the maintenance data refers to the frequency of equipment maintenance and the time of safe operation after maintenance, and the environment data refers to the current, voltage, operation temperature, humidity of a PCB (printed circuit board) and the vibration degree of vehicle-mounted equipment;
Step 2: establishing a neural network and reducing the dimension of the data;
The acquired data are not mutually independent: for example, the operating time of the equipment is influenced by environmental factors, and the maintenance frequency also influences how long the equipment runs normally after maintenance. Such factors cannot be separated with a standard data model, and the data collected by the sensors are correlated. To prevent overfitting, the data must therefore be reduced in dimension and expressed in a mutually independent form.
A wavelet function replaces the excitation function of the hidden layer of a traditional neural network to construct a three-layer neuron network. The scale and translation parameters of the wavelet function serve as the weights from the input layer to the hidden layer, replacing the traditional selection of empirical values and avoiding local convergence; the wavelet function approximates the data and improves the iteration speed over a traditional neural network.
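As a concrete illustration, the hidden-layer excitation can be sketched in Python. The patent does not name the mother wavelet; the Morlet wavelet used here is a common choice in wavelet neural networks and is an assumption of this sketch:

```python
import numpy as np

def morlet(x):
    """Morlet mother wavelet: cos(1.75 x) * exp(-x^2 / 2).
    (Illustrative assumption; the patent does not fix a mother wavelet.)"""
    return np.cos(1.75 * x) * np.exp(-x ** 2 / 2.0)

def wavelet_excitation(x, a_j, b_j):
    """Hidden-layer excitation psi((x - b_j) / a_j), with scale a_j and
    translation b_j taking the place of a sigmoid activation."""
    return morlet((x - b_j) / a_j)
```

Choosing the wavelet's scale and translation as trainable parameters is what distinguishes this hidden layer from a sigmoid one.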
Step 2.1 initial conditions:
determining an initial connection weight and bias from a neural network input sample to a hidden layer neuron, an initial connection weight and bias from the hidden layer neuron to an output layer, and the number of neurons in the input layer, the hidden layer and the output layer;
In the initial conditions, the non-fault operation time, temperature, humidity, voltage, number of maintenances, and non-fault operation time after maintenance of the equipment are selected as the neurons of the input and output layers of the neural network; the hidden layer has 3 neurons, the initial weights are 1/7, and the biases are random values in [-0.25, 0.25].
Step 2.2 hidden layer excitation function:
The excitation of the j-th hidden neuron is the dilated and translated wavelet

$$\psi_{a_j,b_j}(x)=\psi\!\left(\frac{x-b_j}{a_j}\right)$$

where $\psi$ is the wavelet function, $a_j$ and $b_j$ are the scale and translation parameters of the j-th hidden neuron, and $x$ is the signal entering the hidden layer after the input signal has been weighted and biased. The k-th output of the network can then be expressed as

$$f_k(x)=\sum_{j=1}^{n} w_{kj}\,\psi\!\left(\frac{\sum_{i=1}^{m} w_{ji}x_i+\lambda_i-b_j}{a_j}\right)+\lambda_j$$

where $f_k(x)$ is the k-th output value of the network, $x_i$ is the i-th dimension of the input sample, $n$ is the number of hidden-layer neurons, $m$ is the number of sample inputs with $n<m$, $w_{kj}$ is the connection weight from the j-th hidden neuron to the k-th output, $w_{ji}$ is the connection weight from the i-th input to the j-th hidden neuron, $\lambda_i$ is the input-layer bias, and $\lambda_j$ is the hidden-layer bias. $f_k(x)$ thus comprises three parts: the weighted, biased input, the wavelet excitation, and the weighted, biased hidden-layer output.
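One forward pass of this three-layer network can be sketched as follows; the per-neuron layout of the biases $\lambda_i$ and $\lambda_j$ is an illustrative reading of the patent's notation:

```python
import numpy as np

def morlet(x):
    # Illustrative mother wavelet (the patent does not name one).
    return np.cos(1.75 * x) * np.exp(-x ** 2 / 2.0)

def wnn_forward(x, W_in, lam_in, a, b, W_out, lam_out):
    """Forward pass of the three-layer wavelet neuron network.
    x: (m,) sample; W_in: (n, m) weights w_ji; lam_in: (n,) input-side bias;
    a, b: (n,) scale/translation per hidden neuron; W_out: (m, n) weights w_kj;
    lam_out: (m,) output bias. Returns (outputs f_k(x), hidden code h)."""
    z = W_in @ x + lam_in          # weighted, biased signal into the hidden layer
    h = morlet((z - b) / a)        # wavelet excitation in place of a sigmoid
    return W_out @ h + lam_out, h
```

The returned hidden code `h` is what later replaces the raw sample data when the hidden Markov model is trained.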
step 2.3, automatic coding:
Approximate the input with the neuron outputs of step 2.2 and define the system error function (the auto-encoder reconstructs its own input):

$$E=\frac{1}{2}\sum_{k=1}^{m}\bigl(f_k(x)-x_k\bigr)^2$$

Take the partial derivatives of the error function $E$ with respect to $w_{kj}$, $w_{ji}$, $\lambda_j$, $\lambda_i$, the scale parameter $a_j$, and the translation parameter $b_j$.

Step 2.4: update $w_{kj}$, $w_{ji}$, $\lambda_j$, $\lambda_i$, $a_j$ and $b_j$ along these partial derivatives with a gradient-descent algorithm. With learning step $\beta$, the p-th sample updates each parameter factor $\theta$ to its (p+1)-th value as

$$\theta^{(p+1)}=\theta^{(p)}-\beta\,\frac{\partial E}{\partial\theta},\qquad \theta\in\{w_{kj},\,w_{ji},\,\lambda_j,\,\lambda_i,\,a_j,\,b_j\}$$
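The analytic partial-derivative expressions appear only as figures in the source and are not reproduced here, so this sketch of the update rule falls back on numerical differentiation; the learning step and the test loss are illustrative:

```python
import numpy as np

def numerical_grad(loss, theta, eps=1e-6):
    """Central-difference approximation of dE/dtheta, a stand-in for the
    analytic partial derivatives given only as figures in the source."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (loss(theta + d) - loss(theta - d)) / (2.0 * eps)
    return g

def gradient_step(theta, loss, beta=0.1):
    """theta^(p+1) = theta^(p) - beta * dE/dtheta, with learning step beta."""
    return theta - beta * numerical_grad(loss, theta)
```

In the actual method each of $w_{kj}$, $w_{ji}$, $\lambda_j$, $\lambda_i$, $a_j$, $b_j$ would be stepped this way after every sample.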
Step 2.5: train on the next sample. Return to step 2.2, compute the network output with the updated parameter factors, compare it with the original data, and compute the system error. If the error is smaller than the set error threshold, the output is judged to approximate the original data and training stops; the hidden layer is then a first-order feature representation of the system.
step 3, probability prediction based on a hidden Markov model:
Following step 2, extract the equipment fault sequence $H_{N\times r}=[h_1,h_2,\ldots,h_r]$ from the raw data $X_{N\times m}$, where $r$ is the feature dimension of the data with $r<m$, $N$ is the number of samples, $m$ is the dimension of the data in each sample, and $h$ is the required data extracted from each sample.
Step 3.1 initial conditions:
Let the hidden Markov model be $\lambda=(N,M,\pi,A,B)$, where:
(1) $N$ is the set of hidden states of the model; the change process of the device parameters is taken as the random process of the hidden states, $N=(N_1,N_2,N_3,\ldots,N_n)$, and the hidden state of the system at time $t$ is $q_t$, $q_t\in N$;
(2) $M$ is the set of observed states of the system, i.e. the fault sequence extracted by the neural network, $M=[M_1,M_2,\ldots,M_r]$; the observed state of the system at time $t$ is $O_t$, $O_t\in M$;
(3) $\pi$ is the initial hidden-state probability vector, $\pi=(\pi_1,\pi_2,\ldots,\pi_n)$ with $\pi_i=P(q_1=N_i)$, $1\le i\le n$, where $q_1$ is the initial state of the system, $N_i$ is the i-th hidden state, and $P(\cdot)$ is the probability that the initial state of the system is the i-th hidden state;
(4) $A$ is the state transition matrix, the probability matrix of the device moving from its current hidden state to another, $A=(a_{ij})_{n\times n}$, where $n\times n$ is the matrix dimension and $a_{ij}=P(q_{t+1}=N_j\,|\,q_t=N_i)$, $1\le i,j\le n$, is the probability of transitioning from state $N_i$ at time $t$ to state $N_j$ at time $t+1$; $n$ is the number of hidden states of the system;
(5) $B$ is the observation probability matrix, the probability of each observed state given a hidden state of the device, $B=(b_{jk})_{r\times n}$, where $b_{jk}$, also written $b_j(k)$, is $b_j(k)=P(O_t=M_k\,|\,q_t=N_j)$, $1\le j\le n$, $1\le k\le r$: the probability of observing $M_k$ at time $t$ given that the system is in hidden state $N_j$; $j$ is the hidden-state index, $n$ the total number of hidden states, $k$ the observed-state index, and $r$ the total number of observed states.

Step 3.2 Establishing the fault model
Select collected data from different equipment states: the normal operating state and, short of an outright failure, four wear degradation states of different degrees, plus the fault state. Establish a finite hidden Markov model, train it on the collected equipment state data with the forward-backward algorithm, and determine the state transition matrix of the hidden states. The calculation steps are as follows:
(1) Initialize the hidden Markov model matrices: $\pi=(\pi_1,\pi_2,\ldots,\pi_n)$, $A=(a_{ij})_{n\times n}$, $B=(b_{jk})_{r\times n}$;
(2) Take $T$ groups of measurement data from the sample data as the observed-state sequence of the model;
(3) Map the data to the hidden layer of the neural network according to the wavelet-neural-network result, reducing its dimension, and output the observation sequence $O=[O_1,O_2,\ldots,O_T]$;
(4) Define the forward probability $\alpha_t(i)$ as the probability that at time $t$ ($t<T$) the hidden state is $N_i$ and the observation sequence so far is $[O_1,O_2,\ldots,O_t]$:

$$\alpha_1(i)=\pi_i\,b_i(O_1)\qquad(16)$$

$$\alpha_{t+1}(i)=\Bigl[\sum_{j=1}^{n}\alpha_t(j)\,a_{ji}\Bigr]b_i(O_{t+1})\qquad(17)$$

Here $\alpha_1(i)$ is the forward probability of the i-th hidden state at the initial time; $\pi_i$ is the initial probability of hidden state $i$; $b_i(O_1)$ is the probability of observing $O_1$ in hidden state $N_i$ at the initial time; $N_j$ is the j-th hidden state; $\lambda$ is the hidden Markov model; $\alpha_t(j)\,a_{ji}$ is the probability that the hidden state is $N_j$ at time $t$ with observation sequence $[O_1,\ldots,O_t]$ and $N_i$ at time $t+1$; and $b_i(O_{t+1})$ is the probability of observing $O_{t+1}$ in hidden state $N_i$. The result $\alpha_{t+1}(i)$ is the probability that at time $t+1$ the observed sequence is $[O_1,\ldots,O_t,O_{t+1}]$ and the hidden state is $N_i$.
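The initialization (16) and recursion (17) can be sketched directly; for readability the emission matrix is indexed state-by-observation here, the transpose of the patent's $(b_{jk})_{r\times n}$ layout:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward probabilities alpha_t(i) per eqs. (16)-(17).
    pi: (n,) initial state probabilities; A: (n, n) with A[i, j] = a_ij;
    B: (n, r) with B[i, k] = b_i(M_k); obs: observation indices O_1..O_T."""
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]                          # eq. (16)
    for t in range(T - 1):
        alpha[t + 1] = (alpha[t] @ A) * B[:, obs[t + 1]]  # eq. (17)
    return alpha
```

Summing the last row of `alpha` gives $P(O\,|\,\lambda)$ for the whole sequence.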
(5) Define the backward probability $\beta_t(i)$ as the probability that the hidden state is $N_i$ at time $t$ ($t<T$) and the sequence observed from time $t+1$ to $T$ is $[O_{t+1},O_{t+2},\ldots,O_T]$:

$$\beta_T(i)=1\qquad(18)$$

$$\beta_t(i)=\sum_{j=1}^{n}a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)\qquad(19)$$

Here $q_t=N_i$ means the hidden state at time $t$ is $N_i$; $\lambda$ is the hidden Markov model; $\beta_{t+1}(j)$ is the backward probability that the hidden state at time $t+1$ is $N_j$; $a_{ij}$ is the transition probability from state $i$ to state $j$, so $a_{ij}\beta_{t+1}(j)$ is the probability that the hidden state is $N_j$ at time $t+1$ and $N_i$ at time $t$; and $a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)$ is the probability that the observed sequence is $[O_{t+1},\ldots,O_T]$, the hidden state at time $t+1$ is $N_j$, and the hidden state at time $t$ is $N_i$.
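The termination (18) and recursion (19) can be sketched the same way, with the emission matrix again indexed state-by-observation:

```python
import numpy as np

def backward(A, B, obs):
    """Backward probabilities beta_t(i) per eqs. (18)-(19).
    A: (n, n) transition matrix; B: (n, r) emission matrix indexed
    state-by-observation; obs: observation indices O_1..O_T."""
    T, n = len(obs), A.shape[0]
    beta = np.zeros((T, n))
    beta[-1] = 1.0                                      # eq. (18)
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])  # eq. (19)
    return beta
```

As a consistency check, $\sum_i \pi_i\,b_i(O_1)\,\beta_1(i)$ equals the $P(O\,|\,\lambda)$ produced by the forward pass.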
(6) Compute the probability of the current observation sequence from the forward and backward probabilities:

$$P(O\,|\,\lambda)=\sum_{i=1}^{n}\alpha_t(i)\,\beta_t(i)\qquad(20)$$

where $\alpha_t(i)$ is the forward probability that the hidden state is $N_i$ at time $t$, $\beta_t(i)$ is the corresponding backward probability, and $n$ is the number of hidden states.
Given the observation sequence, the probability that the device is in state $N_i$ at time $t$ is

$$\gamma_t(i)=\frac{\alpha_t(i)\,\beta_t(i)}{\sum_{j=1}^{n}\alpha_t(j)\,\beta_t(j)}\qquad(21)$$

where $\alpha_t(i)$ and $\beta_t(i)$ are the forward and backward probabilities that the hidden state is $N_i$ at time $t$, and $n$ is the number of hidden states.
(7) Given the observation sequence, the probability that the device is in state $N_i$ at time $t$ and in state $N_j$ at time $t+1$ is

$$\xi_t(i,j)=\frac{\alpha_t(i)\,a_{ij}\,b_j(O_{t+1})\,\beta_{t+1}(j)}{P(O\,|\,\lambda)}\qquad(22)$$

where $\alpha_t(i)$ is the forward probability that the hidden state is $N_i$ at time $t$, $\beta_{t+1}(j)$ is the backward probability that the hidden state is $N_j$ at time $t+1$, $n$ is the number of hidden states, $a_{ij}$ is the transition probability from state $i$ to state $j$, and $b_j(O_{t+1})$ is the probability of observing $O_{t+1}$ in state $N_j$ at time $t+1$.
(8) If $P(O\,|\,\lambda)$ has not converged, return to step (2); otherwise compute the hidden Markov model parameters. Let $D$ be the number of samples used when $P(O\,|\,\lambda)$ converges, and let $\gamma_t^{(d)}$ and $\xi_t^{(d)}$ denote the quantities computed from the d-th sample. Then

$$\pi_i=\frac{1}{D}\sum_{d=1}^{D}\gamma_1^{(d)}(i)\qquad(23)$$

where $\pi_i$, the probability of state $i$, is the average over the samples, and $\gamma_1^{(d)}(i)$ is the probability that the d-th sample is in state $N_i$ at the initial time.

$$a_{ij}=\frac{\sum_{d=1}^{D}\sum_{t=1}^{T-1}\xi_t^{(d)}(i,j)}{\sum_{d=1}^{D}\sum_{t=1}^{T-1}\gamma_t^{(d)}(i)}\qquad(24)$$

where $\xi_t^{(d)}(i,j)$ is the probability that the d-th sample transitions from state $N_i$ to state $N_j$ at time $t$, $T$ is the number of collected time instants, $D$ is the number of samples, $a_{ij}$ is the final probability of transitioning from $N_i$ to $N_j$, and $\gamma_t^{(d)}(i)$ is the probability that the d-th sample is in state $N_i$ at time $t$.

$$b_j(k)=\frac{\sum_{d=1}^{D}\sum_{t:\,O_t=M_k}\gamma_t^{(d)}(j)}{\sum_{d=1}^{D}\sum_{t=1}^{T}\gamma_t^{(d)}(j)}\qquad(25)$$

where $b_j(k)$ is the probability of observing state $k$ from hidden state $j$.
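One re-estimation pass over $D$ sequences, combining the quantities of eqs. (21)-(25), can be sketched as follows (the forward and backward helpers are repeated so the block is self-contained; matrices are indexed state-by-observation as in the earlier sketches):

```python
import numpy as np

def forward(pi, A, B, obs):
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(T - 1):
        alpha[t + 1] = (alpha[t] @ A) * B[:, obs[t + 1]]
    return alpha

def backward(A, B, obs):
    T, n = len(obs), A.shape[0]
    beta = np.zeros((T, n))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def reestimate(pi, A, B, sequences):
    """One forward-backward re-estimation pass (sketch of eqs. 21-25)."""
    n, r = B.shape
    pi_new = np.zeros(n)
    A_num, A_den = np.zeros((n, n)), np.zeros(n)
    B_num, B_den = np.zeros((n, r)), np.zeros(n)
    for obs in sequences:
        alpha, beta = forward(pi, A, B, obs), backward(A, B, obs)
        p_obs = alpha[-1].sum()
        gamma = alpha * beta / p_obs                 # eq. (21)
        pi_new += gamma[0]                           # numerator of eq. (23)
        for t in range(len(obs) - 1):
            # eq. (22): xi[i, j] = alpha_t(i) a_ij b_j(O_{t+1}) beta_{t+1}(j) / P
            xi = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / p_obs
            A_num += xi
            A_den += gamma[t]
        B_den += gamma.sum(axis=0)
        for t, o in enumerate(obs):
            B_num[:, o] += gamma[t]                  # numerator of eq. (25)
    D = len(sequences)
    return pi_new / D, A_num / A_den[:, None], B_num / B_den[:, None]
```

Each pass leaves $\pi$ summing to 1 and every row of $A$ and $B$ summing to 1, as eqs. (23)-(25) require.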
(9) After training, output the final hidden Markov model $\lambda=(N,M,\pi,A,B)$;
step 3.3 Fault prediction
For fault prediction, the equipment outputs a historical observation sequence $O=[O_1,O_2,\ldots,O_T]$, and the most likely degradation state is computed from the trained hidden Markov model as follows:
(1) State initialization:

$$\delta_1(i)=\pi_i\,b_i(O_1),\qquad i=1,2,\ldots,n\qquad(26)$$

where $n$ is the number of hidden states, $\pi_i$ is the probability of state $N_i$ at the initial time, $b_i(O_1)$ is the probability of observing $O_1$ in state $N_i$, and $\delta_1(i)$ is the probability that $O_1$ is observed and the system is in state $N_i$ at the initial time.

(2) Recursion of the state at time $t$:

$$\delta_t(k)=\max_{1\le i\le n}\bigl[\delta_{t-1}(i)\,a_{ik}\bigr]\,b_k(O_t)\qquad(28)$$

where $\max_i[\delta_{t-1}(i)\,a_{ik}]$ selects, among the $n$ states at time $t-1$, the most likely predecessor of state $N_k$; $a_{ik}$ is the probability that the system is in state $N_i$ at time $t-1$ and in state $N_k$ at time $t$; $b_k(O_t)$ is the probability of observing $O_t$ in state $N_k$; and $\delta_t(k)$ is the maximum probability that the observed sequence is $O=[O_1,\ldots,O_t]$ and the system is in state $N_k$ at time $t$.

(3) At the final time $T$, the state with the largest $\delta_T(i)$ is the most likely state of the device, i.e. its degradation state.
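The decoding of eqs. (26)-(28) is the Viterbi algorithm, which can be sketched as follows with backpointers so the whole most-likely state path is recovered, not only the final state:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state sequence per eqs. (26)-(28); the last
    state of the path is the device's most probable degradation state."""
    T, n = len(obs), len(pi)
    delta = np.zeros((T, n))
    psi = np.zeros((T, n), dtype=int)
    delta[0] = pi * B[:, obs[0]]                      # eq. (26)
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A            # delta_{t-1}(i) * a_ik
        psi[t] = scores.argmax(axis=0)                # best predecessor of k
        delta[t] = scores.max(axis=0) * B[:, obs[t]]  # eq. (28)
    path = [int(delta[-1].argmax())]                  # most likely final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

With six hidden states, the last entry of the returned path indexes the predicted degradation level of the equipment.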
In step 2, the non-fault operation time, temperature, humidity, voltage, maintenance frequency and the non-fault operation time after maintenance of the equipment are selected as the neurons of the input layer and the output layer of the neural network, the number of the neurons of the hidden layer is 3, the initial weight is 1/7, and the bias is a random value of [ -0.25,0.25 ].
In step 3.1 (4) and (5), the initial values of B are uniformly distributed, the parameters of B summing to 1, and $\pi=(1,0,\ldots,0)$;
In step 3, the normal state, the 20%, 40%, 60% and 80% degraded states, and the fault state of the equipment are adopted as its hidden states.
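The six hidden states named above can be kept as an explicit label table so a decoded state index maps back to a degradation level (the label strings are illustrative, not from the patent):

```python
# Hidden states of the equipment as listed above: normal operation,
# 20/40/60/80 % degradation, and outright fault (label strings illustrative).
HIDDEN_STATES = ("normal", "degraded_20", "degraded_40",
                 "degraded_60", "degraded_80", "fault")

def degradation_label(state_index):
    """Map a decoded hidden-state index to its degradation label."""
    return HIDDEN_STATES[state_index]
```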
Compared with the closest prior art, the invention has the following beneficial effects:
The invention uses a wavelet function in place of the excitation function of the hidden layer of a traditional neural network to construct a three-layer neuron network, with the scale and translation parameters of the wavelet as the input-to-hidden weights in place of empirically selected values, avoiding local convergence; the wavelet function approximates the data and improves the iteration speed over a traditional neural network;
The method reduces the dimension of the data with the neural network, avoiding feeding the collected data directly to the hidden Markov model, which lowers the computational complexity of the hidden Markov model and improves the fault prediction rate;
the invention adopts the neuron of the hidden layer of the wavelet neural network to replace the original data, thereby reducing the correlation between the original data.
The method adopts the hidden Markov model to carry out fault prediction on the equipment by adopting the running data based on the time sequence, adopts the degradation state of the equipment as the hidden state of the equipment, and improves the reliability of the equipment prediction.
Drawings
FIG. 1 is a flow chart of a fault prediction method based on a wavelet neural network and a hidden Markov model.
FIG. 2 is the neural-network dimension-reduction model of the fault prediction method based on a wavelet neural network and a hidden Markov model provided by the invention.
FIG. 3 is the hidden Markov model of the fault prediction method based on a wavelet neural network and a hidden Markov model provided by the invention.
Detailed Description
As shown in fig. 1 to 3, a failure prediction method based on a wavelet neural network and a hidden markov model includes the following steps:
step 1: acquiring sample data, including historical operation data, maintenance data and environment data, wherein the historical operation data refers to the time of fault-free operation of equipment, the maintenance data refers to the frequency of equipment maintenance and the time of safe operation after maintenance, and the environment data refers to the current, voltage, operation temperature, humidity of a PCB (printed Circuit Board) and the vibration degree of vehicle-mounted equipment;
step 2: establishing a neuron network and reducing the dimension of data;
because the acquired data cannot be mutually independent, for example, the operation time of the equipment is influenced by environmental factors, the maintenance frequency also influences the normal operation time of the equipment after maintenance, the factors cannot be distinguished by using a standard data model, the data acquired by the sensor has correlation, and in order to prevent overfitting of the data, the data needs to be subjected to dimension reduction treatment and is expressed in a mutually independent form.
The wavelet function is used for replacing an excitation function of a traditional neural network hidden layer to construct a three-layer neuron network, the scale and the translation function of the wavelet function are used as weights from an input layer to the hidden layer to replace a traditional empirical value selection mode, local convergence is avoided, the wavelet function is used for approximating data, and the iteration speed of the traditional neural network is improved.
Step 2.1 initial conditions:
determining an initial connection weight and bias from a neural network input sample to a hidden layer neuron, an initial connection weight and bias from the hidden layer neuron to an output layer, and the number of neurons in the input layer, the hidden layer and the output layer;
step 2.2 hidden layer excitation function:
in the formula:representing a wavelet function; a is j ,b j Representing a scale function and a translation function of a jth neuron of the hidden layer, wherein x represents a signal input to the hidden layer after an input signal is subjected to weight and bias calculation; the kth neural network output can be expressed as:
in the formula (f) k (x) Representing the kth output value, x, of the neural network i Denotes the x (th) order i An ith dimension input of samples, n represents the number of hidden layer neurons, m represents the number of sample inputs, and n < m, w kj Represents the connection weight, w, from the jth neuron to the kth output of the hidden layer ji Representing the ith input sample x i Connection weight, λ, to the jth neuron of the hidden layer i Denotes input layer bias, λ j Indicating a hidden layer bias, will f k (x) The method comprises the following three parts:
step 2.3, automatic coding:
approximating the input function by using the neuron output of the step 2.2, and defining a system error function:
solving an error function E to w kj 、w ji 、λ j 、λ i Scale factor a j And a translation coefficient b j Partial derivatives of (a):
step 2.4: for the partial derivative, a gradient descent algorithm is used for w kj 、w ji 、λ j 、λ i 、a j And b j Updating, defining the learning step length of the gradient descent algorithm as beta, and updating p +1 parameter factors by the p-th sample as follows:
step 2.5, training the next sample, returning to step 2.2, calculating the output of the neural network according to the updated parameter factors, comparing the output with the original data, calculating the system error, if the error value is smaller than the set error threshold value, judging that the output result approaches the original data at the moment, stopping training, and representing the hidden layer as the first-order characteristic of the system at the moment;
and 3, probability prediction based on a hidden Markov model:
according to step 2, from the raw data X N×m Extracting fault sequence H of equipment N×r =[h 1 ,h 2 ,...,h r ]R is the characteristic dimension of the data, and r is less than m, N is the number of samples,
m represents the dimension of data contained in each sample, and h represents the required data extracted from each sample;
step 3.1 initial conditions:
let the hidden markov model be λ = (N, M, pi, a, B),
(1) N represents the number of hidden states of the hidden markov model, and N = (N) represents the random process of the hidden states in which the change process of the device parameters is set 1 ,N 2 ,N 3 ,...,N n ) The hidden state of the system at time t is q t ,q t ∈N;
(2) M represents the observed state of the system, represents the fault sequence extracted by the neural network, and M = [ M = 1 ,M 2 ,...,M r ]The observed state of the system at time t is O t ,O t ∈M;
(3) π represents the probability matrix of the initial hidden state, π = (π) 1 ,π 2 ,...,π n ),π i =P(q 1 =N i ),1≤i≤n;
q 1 Indicating the initial state of the system, N i Represents the ith hidden state of the hidden Markov model, and p () represents the initial state of the systemProbability of the ith hidden state;
(4) A is a state transition matrix, which represents a probability matrix of the device transitioning from a current hidden state to another hidden state, A = (a) ij ) n×n N × n denotes the matrix dimension, where a ij Representing the probability of transition from state i to state j, a ij =P(q t+1 =N j |q t =N i )1≤i,j≤n;
q t Representing the hidden state of the system at time t, q t+1 Representing the hidden state of the system at time t +1, q t =N i Representing that the system belongs to the ith hidden state at time t, p () representing the system from N at time t i State to N j The probability of state transition, n is the number of hidden states of the system.
(5) B is an observation value probability matrix which represents the transition probability from the hidden state to the observed state of the device, and B = (B) jk ) r×n ,b jk Representing the transition probability from the hidden state k to the observed state j, note b jk =b j (k),b j (k)=P(O t =M k |q t =N j ),1≤j≤n,1≤k≤r,Q t Indicating the observed state of the system at time t, M k Indicating that the system belongs to the k-th observation state at time t, q t Representing the hidden state of the system at the moment t, j is the hidden layer state order, n is the total state number of the hidden layer, k is the observation state order, r is the total state number of the observation layer, and p () represents the probability of the system from the jth hidden state to the kth observation state at the moment t. Step 3.2 build up of Fault model
Selecting collected data of different states of equipment, including normal operation state of the equipment, 4 wear degradation states and fault states of different degrees of the equipment under the condition of no fault, establishing a hidden Markov finite model, performing model training on the collected equipment state data by using a forward-backward algorithm, and determining a state transition matrix of a hidden state of the equipment, wherein the calculation steps are as follows:
(1) Initializing a hidden Markov model matrix: pi = (pi) 1 ,π 2 ,...,π n ),A=(a ij ) n×n ,B=(b jk ) r×n ;
(2) Taking T groups of measurement data from the sample data as an observation state sequence of the model;
(3) Mapping data to a hidden layer of the neural network according to the calculation result of the wavelet neural network, reducing the dimension of the data, and outputting an observation sequence O = [ O ] 1 ,O 2 ,...O T ];
(4) Define the forward probability α_t(i) as the probability that at time t (t < T) the hidden state is N_i and the observation sequence is [O_1, O_2, ..., O_t]:

α_1(i) = π_i b_i(O_1)  (16)

α_{t+1}(i) = [Σ_{j=1}^{n} α_t(j) a_ji] b_i(O_{t+1})  (17)

where α_1(i) is the forward probability of the i-th hidden state at the initial time; π_i is the initial probability of the i-th hidden state; b_i(O_1) is the probability of observing O_1 from hidden state N_i at the initial time; N_j is the j-th hidden state of the system; λ is the hidden Markov model; α_t(j)a_ji is the probability that the hidden state is N_j at time t, the observation sequence is [O_1, O_2, ..., O_t], and the hidden state at time t+1 is N_i; b_i(O_{t+1}) is the probability of observing O_{t+1} from hidden state N_i; and P(·) is the probability that at time t+1 the observation sequence is [O_1, O_2, ..., O_t, O_{t+1}] and the hidden state is N_i.
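The forward recursion of step (4) can be sketched as follows. This is a minimal NumPy version; the function and variable names (and the toy parameters used below) are illustrative, not from the patent.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm, eqs. (16)-(17).

    pi: (n,) initial state probabilities pi_i
    A:  (n, n) transition matrix a_ij
    B:  (n, r) observation matrix b_j(k)
    obs: list of observation indices O_1..O_T
    Returns alpha with alpha[t, i] = alpha_{t+1}(i).
    """
    n, T = len(pi), len(obs)
    alpha = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]              # alpha_1(i) = pi_i * b_i(O_1)
    for t in range(1, T):
        # alpha_{t+1}(i) = [sum_j alpha_t(j) a_ji] * b_i(O_{t+1})
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha
```

P(O|λ) is then the sum of the final row of `alpha`.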
(5) Define the backward probability β_t(i) as the probability that at time t (t < T) the hidden state is N_i and the sequence observed from time t+1 to time T is [O_{t+1}, O_{t+2}, ..., O_T]:

β_T(i) = 1  (18)

β_t(i) = Σ_{j=1}^{n} a_ij b_j(O_{t+1}) β_{t+1}(j)  (19)

where q_t = N_i means the hidden state at time t is N_i; λ is the hidden Markov model; β_{t+1}(j) is the backward probability of hidden state N_j at time t+1; a_ij is the probability of transitioning from state i to state j; a_ij β_{t+1}(j) is the probability that the hidden state is N_j at time t+1 and N_i at time t; a_ij b_j(O_{t+1}) β_{t+1}(j) is the probability that the observed sequence is [O_{t+1}, O_{t+2}, ..., O_T], the hidden state at time t+1 is N_j and the hidden state at time t is N_i; and P(·) is the probability that the hidden-layer state at time t is N_i.
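The backward recursion of step (5) mirrors the forward one. Again a minimal NumPy sketch with illustrative names; the toy model in the check below matches the forward example above only by construction.

```python
import numpy as np

def backward(A, B, obs):
    """Backward algorithm, eqs. (18)-(19).

    A: (n, n) transition matrix a_ij; B: (n, r) observation matrix b_j(k).
    Returns beta with beta[t, i] = beta_{t+1}(i).
    """
    n, T = A.shape[0], len(obs)
    beta = np.zeros((T, n))
    beta[-1] = 1.0                            # beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j a_ij * b_j(O_{t+1}) * beta_{t+1}(j)
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta
```

A useful sanity check: Σ_i π_i b_i(O_1) β_1(i) must equal the forward-computed P(O|λ).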
(6) Compute P(O|λ), the probability of the current observation sequence, from the forward and backward probabilities of formulas (17) and (19):

P(O|λ) = Σ_{i=1}^{n} α_t(i) β_t(i)  (20)

where α_t(i) is the forward probability that the hidden layer is N_i at time t, β_t(i) the corresponding backward probability, and n the number of hidden-layer states.
(7) Given the observation sequence, the probability γ_t(i) that the device is in state N_i at time t:

γ_t(i) = α_t(i) β_t(i) / Σ_{j=1}^{n} α_t(j) β_t(j)  (21)

where α_t(i) is the forward probability that the hidden layer is N_i at time t, β_t(i) the corresponding backward probability, and n the number of hidden-layer states.
(8) Given the observation sequence, the probability ξ_t(i,j) that the device is in state N_i at time t and transitions to state N_j at time t+1:

ξ_t(i,j) = α_t(i) a_ij b_j(O_{t+1}) β_{t+1}(j) / P(O|λ)  (22)

where α_t(i) is the forward probability that the hidden layer is N_i at time t, β_{t+1}(j) the backward probability that the hidden layer is N_j at time t+1, n the number of hidden-layer states, a_ij the probability of transitioning from state i to state j, and b_j(O_{t+1}) the probability of observing O_{t+1} at time t+1 from hidden state N_j.
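The state-occupation probability of step (7) and the pairwise transition probability of step (8) (commonly written γ_t(i) and ξ_t(i,j)) follow directly from the forward and backward variables. A NumPy sketch with illustrative names:

```python
import numpy as np

def posteriors(A, B, obs, alpha, beta):
    """Compute gamma (eq. 21) and xi (eq. 22) from forward/backward variables."""
    # gamma_t(i) = alpha_t(i) beta_t(i) / sum_j alpha_t(j) beta_t(j)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    # xi_t(i, j) = alpha_t(i) a_ij b_j(O_{t+1}) beta_{t+1}(j) / P(O | lambda)
    T, n = len(obs), A.shape[0]
    xi = np.zeros((T - 1, n, n))
    for t in range(T - 1):
        num = alpha[t][:, None] * A * B[:, obs[t + 1]][None, :] * beta[t + 1][None, :]
        xi[t] = num / num.sum()
    return gamma, xi
```

By construction each row of `gamma` sums to 1, each `xi[t]` sums to 1, and marginalizing `xi[t]` over j recovers `gamma[t]`.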
(9) If P (O | lambda) is not converged, returning to the step (2), otherwise, calculating the parameters of the hidden Markov model:
assuming that the number of samples used for calculation when P (O | λ) converges is D, then:
π i representing the probability of the state i, and calculating the average value of the probability for each sample;indicating that the d-th sample state at the initial time is N i The probability of (c).
Represents the d-th sample at time t, from state N i Transition to State N j T is the number of the collected moments, and D is the number of samples; a is a ij Indicating the final state N i Transition to State N j The probability of (d);indicating that the d-th sample state at the time t is N i The probability of (c).
b j (k) Representing the transition probability of the hidden state k to the observed state j.
(10) After training is finished, output the final hidden Markov model λ = (N, M, π, A, B);
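Steps (2)-(10) together amount to Baum-Welch re-estimation. The sketch below shows the re-estimation of step (9) for a single observation sequence (the patent averages over D samples; names and the interface are illustrative):

```python
import numpy as np

def reestimate(gamma, xi, obs, n_obs):
    """One Baum-Welch re-estimation step (single-sequence version of eqs. 23-25).

    gamma: (T, n) state-occupation probabilities; xi: (T-1, n, n) transition
    posteriors; obs: observation indices; n_obs: number of observed states r.
    """
    # pi_i: occupation probability at the initial time (eq. 23, D = 1 here)
    pi_new = gamma[0]
    # a_ij: expected i -> j transitions over expected visits to i (eq. 24)
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    # b_j(k): expected visits to j while observing M_k over visits to j (eq. 25)
    n = gamma.shape[1]
    B_new = np.zeros((n, n_obs))
    for k in range(n_obs):
        mask = np.asarray(obs) == k
        B_new[:, k] = gamma[mask].sum(axis=0) / gamma.sum(axis=0)
    return pi_new, A_new, B_new
```

Each returned row of `A_new` and `B_new` is a probability distribution, so the updated model stays valid after every iteration.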
step 3.3 Fault prediction
When performing fault prediction for the equipment, output the historical observation sequence O = [O_1, O_2, ..., O_T] and compute the most likely degradation state from the trained hidden Markov model, as follows:
(1) State initialization:
δ_1(i) = π_i b_i(O_1), i = 1, 2, ..., n  (26)

where n is the number of hidden-layer states, π_i the probability of state N_i at the initial time, b_i(O_1) the probability of observing O_1 in state N_i, and δ_1(i) the probability that O_1 is observed at the initial time with the system in state N_i.
(2) Recursion for the state at time t:

δ_t(i) = max_{1≤j≤n} [δ_{t-1}(j) a_ji] · b_i(O_t)  (28)

where max_{1≤j≤n} [δ_{t-1}(j) a_ji] selects, over the n states of the system at time t-1, the most probable predecessor of state N_i; a_ji is the probability of the system being in state N_j at time t-1 and in state N_i at time t; b_i(O_t) is the probability of observing O_t from state N_i; and δ_t(i) is the maximum probability that the observation sequence at time t is O = [O_1, O_2, ..., O_t] and the system is in state N_i.

(3) At time T, the largest δ_T(i) identifies the most likely state of the device, i.e. its degradation state.
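The recursion above is the Viterbi algorithm. A sketch for discrete observations, with backpointers to recover the full state path (names are illustrative):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Viterbi decoding, eqs. (26) and (28); returns (state path, max probability)."""
    n, T = len(pi), len(obs)
    delta = np.zeros((T, n))
    psi = np.zeros((T, n), dtype=int)         # backpointers to best predecessors
    delta[0] = pi * B[:, obs[0]]              # delta_1(i) = pi_i b_i(O_1)
    for t in range(1, T):
        # delta_t(i) = max_j [delta_{t-1}(j) a_ji] * b_i(O_t)
        trans = delta[t - 1][:, None] * A     # trans[j, i] = delta_{t-1}(j) a_ji
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[:, obs[t]]
    # backtrack from the most likely final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1], delta[-1].max()
```

The last element of the returned path is the most likely current degradation state of the equipment.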
In steps (4) and (5) of step 3.1, the initial values of B are uniformly distributed, the sum of the parameters of B is 1, and π = (1, 0, ..., 0);
in step 3.2, the normal state, the 20%, 40%, 60% and 80% degraded states, and the fault state of the equipment are adopted as its hidden states.
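A minimal initialization consistent with these two choices: six hidden states (normal, 20/40/60/80% degraded, fault), π = (1, 0, ..., 0), and a uniformly distributed B. The number of observed states r and the uniform transition matrix A are assumptions for illustration, not values from the patent.

```python
import numpy as np

n, r = 6, 8                                   # r (observed states) is assumed

pi0 = np.zeros(n)
pi0[0] = 1.0                                  # equipment starts in the normal state

B0 = np.full((n, r), 1.0 / r)                 # uniform emission probabilities
A0 = np.full((n, n), 1.0 / n)                 # neutral starting transition matrix
```

Starting from a uniform B lets the forward-backward training of step 3.2 shape the emission probabilities entirely from the collected state data.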
The above description covers only preferred embodiments of the present invention and does not limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the scope of the claims of the present invention.
Claims (4)
1. A failure prediction method based on a wavelet neural network and a hidden Markov model is characterized by comprising the following steps:
step 1: acquiring sample data, including historical operation data, maintenance data and environment data, wherein the historical operation data is the fault-free operating time of the equipment, the maintenance data is the number of maintenance operations and the safe operating time after maintenance, and the environment data comprises the current, voltage, operating temperature and humidity of the PCB (printed circuit board) and the vibration level of the vehicle-mounted equipment;
step 2: establishing a neural network and reducing the dimensionality of the data;
a wavelet function replaces the excitation function of the hidden layer of a traditional neural network to construct a three-layer neural network; the scale and translation parameters of the wavelet function serve as the weights from the input layer to the hidden layer, replacing the traditional empirical selection of values and avoiding local convergence; the wavelet function approximates the data and improves on the iteration speed of a traditional neural network;
step 2.1 initial conditions:
determining an initial connection weight and bias from a neural network input sample to a hidden layer neuron, an initial connection weight and bias from the hidden layer neuron to an output layer, and the number of neurons in the input layer, the hidden layer and the output layer;
step 2.2 hidden layer excitation function:
in the formula, ψ(·) denotes the wavelet function; a_j and b_j denote the scale coefficient and translation coefficient of the j-th hidden-layer neuron; and x denotes the signal fed to the hidden layer after the input signal has passed through the weights and biases; the k-th output of the neural network can be expressed as:

in the formula, f_k(x) denotes the k-th output value of the neural network, x_i the i-th dimension of an input sample, n the number of hidden-layer neurons and m the number of sample inputs, with n < m; w_kj denotes the connection weight from the j-th hidden-layer neuron to the k-th output, w_ji the connection weight from the i-th input x_i to the j-th hidden-layer neuron, λ_i the input-layer bias and λ_j the hidden-layer bias; f_k(x) comprises the following three parts:
step 2.3, automatic coding:
approximating the input function using the neuron outputs of step 2.2 and defining the system error function:

solving the partial derivatives of the error function E with respect to w_kj, w_ji, λ_j, λ_i, the scale coefficient a_j and the translation coefficient b_j:
step 2.4: for the partial derivative, a gradient descent algorithm is used for w kj 、w ji 、λ j 、λ i 、a j And b j Updating, defining the learning step length of the gradient descent algorithm as beta, and updating p +1 parameter factors by the p-th sample as follows:
step 2.5: training the next sample by returning to step 2.2, computing the network output with the updated parameter factors, comparing it with the original data and computing the system error; if the error is smaller than the set error threshold, the output is judged to approximate the original data and training stops; the hidden layer is then a first-order feature representation of the system;
step 3: probability prediction based on a hidden Markov model:
according to step 2, extracting the fault sequence H_{N×r} = [h_1, h_2, ..., h_r] of the equipment from the raw data X_{N×m}, where r is the feature dimension of the data, r < m, N is the number of samples, m the dimension of the data in each sample, and h the required data extracted from each sample;
step 3.1 initial conditions:
let the hidden Markov model be λ = (N, M, π, A, B),
(1) N denotes the hidden states of the hidden Markov model; the change process of the device parameters is regarded as a random process over the hidden states, N = (N_1, N_2, N_3, ..., N_n), and the hidden state of the system at time t is q_t, q_t ∈ N;
(2) M denotes the observed states of the system, i.e. the fault sequence extracted by the neural network, M = [M_1, M_2, ..., M_r], and the observed state of the system at time t is O_t, O_t ∈ M;
(3) π denotes the initial hidden-state probability matrix, π = (π_1, π_2, ..., π_n), π_i = P(q_1 = N_i), 1 ≤ i ≤ n; q_1 is the initial state of the system, N_i the i-th hidden state of the hidden Markov model, and P(·) the probability that the initial state of the system is the i-th hidden state;
(4) A is the state transition matrix, the probability matrix of the device transitioning from the current hidden state to another hidden state, A = (a_ij)_{n×n}, where n×n is the matrix dimension and a_ij is the probability of transitioning from state i to state j,

a_ij = P(q_{t+1} = N_j | q_t = N_i), 1 ≤ i, j ≤ n;
q_t denotes the hidden state of the system at time t and q_{t+1} the hidden state at time t+1; q_t = N_i means the system is in the i-th hidden state at time t; P(·) denotes the probability of the system transitioning from state N_i to state N_j at time t; n is the number of hidden states of the system;
(5) B is the observation probability matrix, giving the probability of each observed state given a hidden state of the device: B = (b_jk)_{r×n}, where b_jk denotes the probability of observing state k from hidden state j, written b_jk = b_j(k), with b_j(k) = P(O_t = M_k | q_t = N_j), 1 ≤ j ≤ n, 1 ≤ k ≤ r; here O_t is the observed state of the system at time t, M_k means the system is in the k-th observed state at time t, q_t is the hidden state at time t, j is the hidden-layer state index, n the total number of hidden-layer states, k the observed-state index, r the total number of observed states, and P(·) the probability of observing the k-th observed state from the j-th hidden state at time t;
step 3.2 building the fault model

selecting collected data covering the different states of the equipment: the normal (fault-free) operating state, four wear-degradation states of differing severity, and the fault state; establishing a finite hidden Markov model, training it on the collected equipment state data with the forward-backward algorithm, and determining the state transition matrix of the equipment's hidden states; the calculation steps are as follows:
(1) Initialize the hidden Markov model matrices: π = (π_1, π_2, ..., π_n), A = (a_ij)_{n×n}, B = (b_jk)_{r×n};
(2) Taking T groups of measurement data from the sample data as an observation state sequence of the model;
(3) Map the data to the hidden layer of the neural network according to the wavelet neural network's computation result, reducing its dimensionality, and output the observation sequence O = [O_1, O_2, ..., O_T];
(4) Define the forward probability α_t(i) as the probability that at time t (t < T) the hidden state is N_i and the observation sequence is [O_1, O_2, ..., O_t]:

α_1(i) = π_i b_i(O_1)  (16)

α_{t+1}(i) = [Σ_{j=1}^{n} α_t(j) a_ji] b_i(O_{t+1})  (17)

where α_1(i) is the forward probability of the i-th hidden state at the initial time; π_i is the initial probability of the i-th hidden state; b_i(O_1) is the probability of observing O_1 from hidden state N_i at the initial time; N_j is the j-th hidden state of the system; λ is the hidden Markov model; α_t(j)a_ji is the probability that the hidden state is N_j at time t, the observation sequence is [O_1, O_2, ..., O_t], and the hidden state at time t+1 is N_i; b_i(O_{t+1}) is the probability of observing O_{t+1} from hidden state N_i; and P(·) is the probability that at time t+1 the observation sequence is [O_1, O_2, ..., O_t, O_{t+1}] and the hidden state is N_i;
(5) Define the backward probability β_t(i) as the probability that at time t (t < T) the hidden state is N_i and the sequence observed from time t+1 to time T is [O_{t+1}, O_{t+2}, ..., O_T]:

β_T(i) = 1  (18)

β_t(i) = Σ_{j=1}^{n} a_ij b_j(O_{t+1}) β_{t+1}(j)  (19)

where q_t = N_i means the hidden state at time t is N_i; λ is the hidden Markov model; β_{t+1}(j) is the backward probability of hidden state N_j at time t+1; a_ij is the probability of transitioning from state i to state j; a_ij β_{t+1}(j) is the probability that the hidden state is N_j at time t+1 and N_i at time t; a_ij b_j(O_{t+1}) β_{t+1}(j) is the probability that the observed sequence is [O_{t+1}, O_{t+2}, ..., O_T], the hidden state at time t+1 is N_j and the hidden state at time t is N_i; and P(·) is the probability that the hidden-layer state at time t is N_i;
(6) Compute P(O|λ), the probability of the current observation sequence, from formulas (17) and (19):

P(O|λ) = Σ_{i=1}^{n} α_t(i) β_t(i)  (20)

where α_t(i) is the forward probability that the hidden layer is N_i at time t, β_t(i) the corresponding backward probability, and n the number of hidden-layer states; given the observation sequence O = [O_1, O_2, ..., O_t], the probability γ_t(i) that the device is in state N_i at time t is:

γ_t(i) = α_t(i) β_t(i) / Σ_{j=1}^{n} α_t(j) β_t(j)  (21)

where α_t(i) is the forward probability that the hidden layer is N_i at time t, β_t(i) the corresponding backward probability, and n the number of hidden-layer states;
(7) Given the observation sequence O = [O_1, O_2, ..., O_t, O_{t+1}], the probability ξ_t(i,j) that the device is in state N_i at time t and transitions to state N_j at time t+1:

ξ_t(i,j) = α_t(i) a_ij b_j(O_{t+1}) β_{t+1}(j) / P(O|λ)  (22)

where α_t(i) is the forward probability that the hidden layer is N_i at time t, β_{t+1}(j) the backward probability that the hidden layer is N_j at time t+1, n the number of hidden-layer states, a_ij the probability of transitioning from state i to state j, and b_j(O_{t+1}) the probability of observing O_{t+1} at time t+1 from hidden state N_j;
(8) If P (O | lambda) does not converge, returning to the step (2), otherwise, calculating hidden Markov model parameters:
assuming that the number of samples for calculation when P (O | λ) converges is D, then:
π i representing the probability of the state i, and calculating the average value of the probability for each sample;indicating that the d-th sample state at the initial time is N i The probability of (d);
represents the d-th sample at time t, from state N i Transition to State N j T is the number of the collected moments, and D is the number of samples; a is a ij Representing the final state N i Transition to State N j The probability of (d);indicating that the d-th sample state at the time t is N i The probability of (d);
b j (k) Representing the transition probability of the hidden state k to the observed state j;
(9) After training is finished, output the final hidden Markov model λ = (N, M, π, A, B);
step 3.3 Fault prediction
when performing fault prediction for the equipment, output the historical observation sequence O = [O_1, O_2, ..., O_T] and compute the most likely degradation state from the trained hidden Markov model, the steps being as follows:
(1) State initialization:
δ_1(i) = π_i b_i(O_1), i = 1, 2, ..., n  (26)

where n is the number of hidden-layer states, π_i the probability of state N_i at the initial time, b_i(O_1) the probability of observing O_1 in state N_i, and δ_1(i) the probability that O_1 is observed at the initial time with the system in state N_i;
(2) Recursion for the state at time t:

δ_t(i) = max_{1≤j≤n} [δ_{t-1}(j) a_ji] · b_i(O_t)  (28)

where max_{1≤j≤n} [δ_{t-1}(j) a_ji] selects, over the n states of the system at time t-1, the most probable predecessor of state N_i; a_ji is the probability of the system being in state N_j at time t-1 and in state N_i at time t; b_i(O_t) is the probability of observing O_t from state N_i; and δ_t(i) is the maximum probability that the observation sequence at time t is O = [O_1, O_2, ..., O_t] and the system is in state N_i;
2. The fault prediction method based on a wavelet neural network and hidden Markov model according to claim 1, wherein in step 2 the fault-free operating time, temperature, humidity, voltage, number of maintenance operations and post-maintenance fault-free operating time of the device are selected as the neurons of the input layer and the output layer of the neural network; the number of hidden-layer neurons is 3, the initial weights are 1/7, and the biases are random values in [-0.25, 0.25].
4. The fault prediction method based on a wavelet neural network and hidden Markov model according to claim 1, wherein in step 3 the normal state, the 20%, 40%, 60% and 80% degraded states and the fault state of the equipment are adopted as its hidden states.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910587643.3A CN110288046B (en) | 2019-07-02 | 2019-07-02 | Fault prediction method based on wavelet neural network and hidden Markov model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110288046A CN110288046A (en) | 2019-09-27 |
CN110288046B true CN110288046B (en) | 2022-11-18 |
Family
ID=68021625
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910587643.3A Active CN110288046B (en) | 2019-07-02 | 2019-07-02 | Fault prediction method based on wavelet neural network and hidden Markov model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110288046B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259261B (en) * | 2020-01-02 | 2023-09-26 | 中国铁道科学研究院集团有限公司通信信号研究所 | High-speed rail driving network collaborative alarm optimization method based on state transition prediction |
CN111223574A (en) * | 2020-01-14 | 2020-06-02 | 宁波市海洋与渔业研究院 | Penaeus vannamei boone enterohepatic sporulosis early warning method based on big data mining |
CN111565118B (en) * | 2020-04-17 | 2022-08-05 | 烽火通信科技股份有限公司 | Virtualized network element fault analysis method and system based on multi-observation dimension HMM |
CN111882078B (en) * | 2020-06-28 | 2024-01-02 | 北京交通大学 | Method for optimizing state maintenance strategy of running part component of rail transit train |
CN112069045A (en) * | 2020-08-14 | 2020-12-11 | 西安理工大学 | Cloud platform software performance prediction method based on hidden Markov |
CN112257777B (en) * | 2020-10-21 | 2023-09-05 | 平安科技(深圳)有限公司 | Off-duty prediction method and related device based on hidden Markov model |
CN113053536B (en) * | 2021-01-15 | 2023-11-24 | 中国人民解放军军事科学院军事医学研究院 | Infectious disease prediction method, system and medium based on hidden Markov model |
CN113298240B (en) * | 2021-07-27 | 2021-11-05 | 北京科技大学 | Method and device for predicting life cycle of servo drive system |
CN116020879B (en) * | 2023-02-15 | 2023-06-16 | 北京科技大学 | Technological parameter-oriented strip steel hot continuous rolling space-time multi-scale process monitoring method and device |
CN117114352B (en) * | 2023-09-15 | 2024-04-09 | 北京阿帕科蓝科技有限公司 | Vehicle maintenance method, device, computer equipment and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10307186A (en) * | 1997-05-07 | 1998-11-17 | Mitsubishi Electric Corp | Signal processor |
CN102867132A (en) * | 2012-10-16 | 2013-01-09 | 南京航空航天大学 | Aviation direct-current converter online fault combined prediction method based on fractional order wavelet transformation |
CN104504296A (en) * | 2015-01-16 | 2015-04-08 | 湖南科技大学 | Gaussian mixture hidden Markov model and regression analysis remaining life prediction method |
CN105834835A (en) * | 2016-04-26 | 2016-08-10 | 天津大学 | Method for monitoring tool wear on line based on multiscale principal component analysis |
CN105841961A (en) * | 2016-03-29 | 2016-08-10 | 中国石油大学(华东) | Bearing fault diagnosis method based on Morlet wavelet transformation and convolutional neural network |
CN106405384A (en) * | 2016-08-26 | 2017-02-15 | 中国电子科技集团公司第十研究所 | Simulation circuit health state evaluation method |
CN106599920A (en) * | 2016-12-14 | 2017-04-26 | 中国航空工业集团公司上海航空测控技术研究所 | Aircraft bearing fault diagnosis method based on coupled hidden semi-Markov model |
CN107122802A (en) * | 2017-05-02 | 2017-09-01 | 哈尔滨理工大学 | A kind of fault detection method based on the rolling bearing for improving wavelet neural network |
CN108090427A (en) * | 2017-12-07 | 2018-05-29 | 上海电机学院 | Fault Diagnosis of Gear Case method based on flock of birds algorithm and Hidden Markov Model |
CN108490807A (en) * | 2018-05-09 | 2018-09-04 | 南京恩瑞特实业有限公司 | Train fault analogue system and test method |
CN108763654A (en) * | 2018-05-03 | 2018-11-06 | 国网江西省电力有限公司信息通信分公司 | A kind of electrical equipment fault prediction technique based on Weibull distribution and hidden Semi-Markov Process |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7260501B2 (en) * | 2004-04-21 | 2007-08-21 | University Of Connecticut | Intelligent model-based diagnostics for system monitoring, diagnosis and maintenance |
Non-Patent Citations (5)
Title |
---|
Application research of LSSVM and HMM in aero-engine condition prediction; Cui Jianguo et al.; Computer Engineering; 2017-10-15 (No. 10); full text *
Performance evaluation of HMM and neural network in motorbike fault detection system; R Nair Pravin et al.; 2011 International Conference on Recent Trends in Information Technology (ICRTIT); IEEE; 2011-06-05; full text *
Data acquisition and power cable fault identification based on TMS320F2812; Zou Huirong; China Master's Theses Full-text Database, Engineering Science and Technology II; 2009-01-15; full text *
Design of a fault diagnosis algorithm for aviation mechanical systems based on a hidden Markov model; Liu Nan; Modern Industrial Economy and Informatization; 2016-03-15 (No. 05); full text *
Research on dynamic models of multidimensional stochastic processes in solar power generation; Zhang Zixuan; China Master's Theses Full-text Database, Engineering Science and Technology II; 2019-04-15; full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||