CN104200269A - Real-time fault diagnosis method based on online learning minimum embedding dimension network - Google Patents

Real-time fault diagnosis method based on online learning minimum embedding dimension network Download PDF

Info

Publication number
CN104200269A
CN104200269A (application CN201410456900.7A)
Authority
CN
China
Prior art keywords
network
output
time series
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410456900.7A
Other languages
Chinese (zh)
Inventor
陶洪峰 (Tao Hongfeng)
黄红梅 (Huang Hongmei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201410456900.7A priority Critical patent/CN104200269A/en
Publication of CN104200269A publication Critical patent/CN104200269A/en
Pending legal-status Critical Current

Links

Abstract

The invention provides a real-time fault diagnosis method based on an online-learning minimum embedding dimension network. The method determines the minimum embedding dimension of a time series using a neural network; normalizes the network input samples and reconstructs the network input and output sample values from the time series; trains the learning network online while dynamically adjusting the network structure; and de-normalizes the network output, computes the difference between the output value and the actual observation, and compares that difference with a preset detection threshold to accomplish fault diagnosis. A node-growth rule and a pruning rule are added to the online learning process: during growth, neuron activity is judged by the maximum output of the hidden layer, and during pruning a sliding window prevents nodes from being removed by mistake. Online adjustment is applied only to the neurons with large responses, which reduces the computational load of the network and improves the real-time performance of fault diagnosis.

Description

A real-time fault diagnosis method based on an online-learning minimum embedding dimension network
Technical field
The present invention relates to the field of fault diagnosis, and in particular to a real-time fault diagnosis method based on an online-learning minimum embedding dimension network.
Background technology
As the reliability requirements on engineering systems keep rising, it is desirable that, when a system fails, fault detection and isolation information is provided automatically and the degradation trend of the system is assessed, so that the fault does not escalate or propagate. This leaves enough time to take reliable countermeasures against the fault and avoid unnecessary losses.
Fault diagnosis is a key technique for enhancing system reliability, and improving its real-time performance is a decisive factor. Existing fault diagnosis techniques fall into two classes. The first class is based on time series analysis: from the past and present states of the system, the future state is estimated, the difference between the estimate and the actual measurement is computed, and a fault is declared when this difference reaches a threshold. The second class is based on qualitative analysis: fault diagnosis is achieved by analysis and reasoning over qualitative knowledge of the system.
Time series methods treat the data as a sequence ordered in time and build a mathematical model that fits the series by exploiting the correlation between adjacent data points. Two kinds of time series methods are currently common. The first uses a parametric model: the data are assumed to satisfy certain model conditions, and the output is obtained by estimating the model parameters. If the assumed model differs from the real one, the diagnosis performance deteriorates; moreover, since the data are fitted with a linear model, this approach is inherently unsuitable for nonlinear systems. The second uses a non-parametric model, which needs no exact mathematical model of the system and is therefore more widely applicable.
Among non-parametric methods, neural networks need no prior mathematical model reflecting the physics of the system, have strong nonlinear mapping capability, and are widely used in fault diagnosis. Network structures such as the BP network, the RBF network, and recurrent neural networks have all been applied to time series analysis. However, modeling a time series with such networks requires offline training and cannot be done online, so the network cannot learn changes in the series, and measurement bias or uncertain disturbances in the series degrade its generalization. The MRAN (Minimal Resource Allocating Network) learning algorithm adds a pruning condition and removes nodes that contribute little to the network output, but when the condition is not met, all selected hidden-layer centers and weights must be adjusted until a certain error requirement is satisfied. With many hidden nodes, this adjustment is computationally expensive and time-consuming. In particular, the states of a failing system change continuously in the early stage of a fault, and an offline-trained diagnosis network can hardly meet real-time requirements.
Summary of the invention
The object of the invention is to reduce the computational load of fault diagnosis for time-varying systems and to improve the real-time performance and accuracy of the diagnosis.
According to the technical scheme of the invention, the real-time fault diagnosis method based on an online-learning minimum embedding dimension network comprises the following steps:
1) First step: determine the minimum embedding dimension of the time series with a neural network. For a nonlinear time series {x(t) | x(t) ∈ R, t = 0, 1, ..., L}, the predicted value at time T is
Y(T+1) = f(X(T))    (1)
where X(T) = [x(T), x(T-1), ..., x(T-p)], L ∈ Z+ is the total length of the time series, p ∈ Z+ is the embedding dimension, and Y(T+1) is the prediction at time T. Modeling the time series with a neural network gives
x̂(T+1) = f̂(x(T), x(T-1), ..., x(T-p))    (2)
The goal is to make x̂(T+1) approach x(T+1) as closely as possible.
According to the embedding theorem, two points that are close neighbours in the p-dimensional reconstructed phase space either remain close in the (p+1)-dimensional reconstruction or are false nearest neighbours. A time series embedded at the minimum dimension contains no false nearest neighbours. Whether a point is a false neighbour is judged by whether a(i, p) exceeds a given value:
a(i, p) = |x_{i+p} - x_{n(i,p)+p}| / ||y_i(p) - y_{n(i,p)}(p)||    (3)
To avoid the influence of the initial value of a(i, p) and of signal offsets, define
E(p) = (1/(N-p)) Σ_{i=1}^{N-p} a(i, p)    (4)
E(p) is the mean of all a(i, p) and depends only on the embedding dimension p. For the change of dimension from p to p+1, define
E_1(p) = E(p+1) / E(p)    (5)
If E_1(p) no longer changes once p > p_0, then p_0 + 1 is the required embedding dimension of the time series network.
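A minimal sketch of this step in Python. The nearest-neighbour metric and the small guard against zero denominators are implementation choices the patent leaves open, and the convergence tolerance is taken from the numerical example in the embodiment:

```python
import numpy as np

def E_of_p(x, p):
    """E(p) of formula (4): average of a(i, p) = |x_{i+p} - x_{n+p}| / ||y_i(p) - y_n(p)||,
    where n = n(i, p) indexes the nearest neighbour of the i-th delay vector."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    M = N - p
    Y = np.array([x[i:i + p] for i in range(M)])   # p-dimensional delay vectors
    total = 0.0
    for i in range(M):
        d = np.linalg.norm(Y - Y[i], axis=1)
        d[i] = np.inf                               # exclude the point itself
        n = int(np.argmin(d))
        denom = max(d[n], 1e-12)                    # guard against zero distance
        total += abs(x[i + p] - x[n + p]) / denom   # formula (3)
    return total / M

def min_embedding_dim(x, p_max=8, tol=0.005):
    """Smallest p + 1 at which E(p) has stopped changing, mirroring the
    |E(3) - E(4)| < 0.005 test used in the embodiment."""
    E_prev = E_of_p(x, 1)
    for p in range(1, p_max):
        E_next = E_of_p(x, p + 1)
        if abs(E_next - E_prev) < tol:
            return p + 1
        E_prev = E_next
    return p_max
```
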
2) Second step: normalize the network input samples and reconstruct the network input and output sample values from the time series. The nonlinear time series values x(1), x(2), ..., x(m) are normalized by
x'(i) = (x(i) - x_min) / (x_max - x_min),  i = 1, 2, ..., m    (6)
where x_max and x_min are the maximum and minimum values of the whole time series.
According to the minimum embedding dimension p determined for the network, the normalized series x'(1), x'(2), ..., x'(m) is reconstructed into the input-output sample pairs of the neural network: the series is divided into k groups of p+1 values each, the first p values of a group serving as the inputs of the learning network's input nodes and the last value as the desired output of the learning network's output node.
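The normalization of formula (6) and the window reconstruction can be sketched in Python; the helper name and the array layout are illustrative:

```python
import numpy as np

def build_samples(x, p):
    """Min-max normalise x (formula (6)) and slide a window of length p + 1
    over it: the first p values of each window form a network input, the
    last value is the corresponding desired output."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    xn = (x - x_min) / (x_max - x_min)                      # formula (6)
    X = np.array([xn[i:i + p] for i in range(len(xn) - p)])  # inputs
    Y = xn[p:]                                               # one target per window
    return X, Y, x_min, x_max
```

The returned x_min and x_max are kept so that the de-normalization of step 4 can later be applied to the network output.
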
3) Third step: train the learning network online and adjust the network structure dynamically. The network output is computed with an RBF network:
f(X_i) = w_0 + Σ_{k=1}^{K} w_k φ_k(X_i)    (7)
where X_i = [x_1, ..., x_n] ∈ R^n is the network input, f(X_i) ∈ R is the corresponding network output, w_k is the output connection weight of the k-th hidden node, w_0 is the output bias constant, and φ_k is the activation function of the hidden node. The radial basis function is chosen to be Gaussian:
φ_k(X_i) = exp(-||X_i - u_k||² / σ_k²)    (8)
where u_k ∈ R^n is the data centre and σ_k is the spread constant of this RBF unit.
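A minimal sketch of the RBF forward pass of formulas (7)-(8) in Python; the function name and argument layout are assumptions:

```python
import numpy as np

def rbf_output(X, centers, sigmas, weights, w0):
    """Formulas (7)-(8): Gaussian hidden activations phi_k, then a weighted
    sum plus bias. X: input vector; centers: (K, n); sigmas, weights: (K,)."""
    if len(centers) == 0:
        # online learning starts with no hidden units at all
        return w0, np.array([])
    d2 = np.sum((np.asarray(centers) - X) ** 2, axis=1)  # ||X - u_k||^2
    phi = np.exp(-d2 / np.asarray(sigmas) ** 2)          # formula (8)
    return w0 + float(np.dot(weights, phi)), phi         # formula (7)
```
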
When online learning starts, the network has no hidden units. During learning the network dynamically decides whether the input X_i should be added as a hidden unit; the growth criteria are:
|e_i| = |Y_i - f(X_i)| > e_min    (9)
β_max = max_k(φ_k) < e_tn    (10)
If the error between the network output f(X_i) and the desired value Y_i is large enough, a hidden neuron is added according to formula (9); e_min is the desired approximation accuracy. In formula (10), the neuron output φ_k under the input X_i represents the activity of the k-th neuron.
Initially e_tn = ξ_max, and e_tn decays exponentially. A hidden node newly added by the criterion is initialized as:
w_{K+1} = e_i
u_{K+1} = X_i    (11)
σ_{K+1} = ν ||X_i - u_nr||
where ν is an overlap factor and u_nr is the existing centre nearest to the input sample X_i.
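The growth test of formulas (9)-(11) might look as follows in Python. The list-based parameter layout and the default spread used for the very first node are assumptions, not taken from the patent:

```python
import numpy as np

def grow_if_needed(centers, sigmas, weights, w0, X, Y, e_min, e_tn, nu=0.9):
    """Add a Gaussian unit when the error exceeds e_min (formula (9)) AND
    the largest hidden activation is below e_tn (formula (10)); the new
    node is initialised by formula (11). Returns (centers, sigmas, weights)."""
    X = np.asarray(X, dtype=float)
    if len(centers) == 0:
        e = Y - w0
        if abs(e) > e_min:
            return [X], [1.0], [e]   # spread of the very first node: assumed
        return centers, sigmas, weights
    d2 = np.array([np.sum((X - u) ** 2) for u in centers])
    phi = np.exp(-d2 / np.array(sigmas) ** 2)
    e = Y - (w0 + float(np.dot(weights, phi)))
    if abs(e) > e_min and phi.max() < e_tn:      # criteria (9) and (10)
        nr = int(np.argmin(d2))                  # nearest existing centre u_nr
        sigma_new = nu * np.sqrt(d2[nr])         # formula (11)
        return centers + [X], sigmas + [sigma_new], weights + [e]
    return centers, sigmas, weights
```
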
If the sample (X_i, Y_i) does not satisfy the neuron growth criteria, the centres u_k, spread constants σ_k, and weights w_k are instead adjusted by gradient descent. To improve the real-time performance of the adjustment, only the N neurons whose centres are closest to the sample are updated:
Δu_k = -∂E/∂u_k = (Y_i - f(X_i)) w_k (2(X_i - u_k)/σ_k²) exp(-||X_i - u_k||²/σ_k²)    (12)
u_k = u_k + η Δu_k    (13)
Δσ_k = -∂E/∂σ_k = (Y_i - f(X_i)) w_k (2||X_i - u_k||²/σ_k³) exp(-||X_i - u_k||²/σ_k²)    (14)
σ_k = σ_k + η Δσ_k    (15)
Δw_k = -∂E/∂w_k = (Y_i - f(X_i)) φ_k    (16)
w_k = w_k + η Δw_k    (17)
where η is the learning rate and k = 1, ..., N.
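One gradient step of formulas (12)-(17) for a single hidden unit, sketched in Python. The patent applies it only to the N units nearest the sample; looping over those units is left to the caller, and the function name is an assumption:

```python
import numpy as np

def gradient_step(u, sigma, w, w0, X, Y, eta=0.05):
    """One gradient-descent update of formulas (12)-(17) for one hidden
    unit. Returns the updated (u, sigma, w)."""
    X, u = np.asarray(X, float), np.asarray(u, float)
    d2 = np.sum((X - u) ** 2)
    phi = np.exp(-d2 / sigma ** 2)
    e = Y - (w0 + w * phi)                                   # prediction error
    du = e * w * (2.0 * (X - u) * phi / sigma ** 2)          # formula (12)
    dsigma = e * w * (2.0 * d2 * phi / sigma ** 3)           # formula (14)
    dw = e * phi                                             # formula (16)
    return u + eta * du, sigma + eta * dsigma, w + eta * dw  # (13), (15), (17)
```

With a small learning rate η, one step moves the parameters downhill, so the prediction error on the same sample decreases.
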
To keep the network structure as small as possible and speed up online learning, hidden nodes that contribute relatively little to the network output are removed from the learning network, on the principle that the removal must not affect the performance of the overall network. For each sample pair (X_i, Y_i), the output of every hidden node is computed:
o_k^i = w_k exp(-||X_i - u_k||² / σ_k²)    (18)
The largest hidden-node output o_max is found and each output is normalized against it:
λ_k^i = |o_k^i / o_max|,  k = 1, ..., K    (19)
If λ_k^i falls below the preset pruning threshold, the k-th hidden node is removed.
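The contribution-based pruning of formulas (18)-(19), sketched in Python. The threshold name delta is an assumption, since the patent leaves the comparison value implicit:

```python
import numpy as np

def prune(centers, sigmas, weights, X, delta=0.01):
    """Normalise each hidden node's output o_k on the current sample by the
    largest one (formulas (18)-(19)) and drop nodes whose relative
    contribution lambda_k falls below delta."""
    X = np.asarray(X, dtype=float)
    d2 = np.array([np.sum((X - u) ** 2) for u in centers])
    o = np.array(weights) * np.exp(-d2 / np.array(sigmas) ** 2)  # formula (18)
    lam = np.abs(o / np.abs(o).max())                            # formula (19)
    keep = lam >= delta
    return ([c for c, k in zip(centers, keep) if k],
            [s for s, k in zip(sigmas, keep) if k],
            [w for w, k in zip(weights, keep) if k])
```
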
4) Fourth step: de-normalize the network output and perform fault diagnosis by threshold comparison. The output of the learning network is de-normalized:
f̂(X_i) = x_min + (x_max - x_min) f(X_i)    (20)
For every new input sample, the time series method of the online-learning network of the invention dynamically adjusts the number, positions, and weights of the hidden nodes, obtains the learning network output, computes the difference between the output value and the actual observation, and compares this difference with the preset detection threshold, thereby accomplishing fault diagnosis.
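The fourth step amounts to a residual test; a sketch under the patent's notation, with the function name assumed:

```python
def fault_flag(y_net, y_obs, x_min, x_max, threshold):
    """De-normalise the network output (formula (20)), form the residual
    against the actual observation, and compare with the threshold."""
    y_hat = x_min + (x_max - x_min) * y_net   # formula (20)
    residual = abs(y_hat - y_obs)
    return residual > threshold, residual
```
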
The beneficial effects of the invention are:
1) In the network design, a node-growth criterion is combined with a pruning criterion based on each node's relative contribution to the network output. During growth, neuron activity is judged by the maximum output of the hidden layer; during pruning, a sliding window prevents nodes from being removed by mistake.
2) The online adjustment of the network is applied only to the neurons with large output responses, which greatly reduces the computational load of the network and improves the real-time performance of fault diagnosis.
3) The network design requires few subjective parameters, which improves accuracy, and it imposes few restrictions on the number of samples required.
Brief description of the drawings
Fig. 1 is the flow chart of the real-time fault diagnosis algorithm with the online-learning neural network.
Embodiment
The specific embodiment of the invention is further described below with reference to the accompanying drawing.
As shown in Fig. 1, when the online fault diagnosis method of the invention is applied to a real system, its online learning algorithm clearly outperforms batch learning: no retraining from scratch is needed whenever new data arrive, which improves real-time performance and diagnostic accuracy, and online learning places no prior restriction on the number of training samples, meeting the requirements of practical applications.
A continuous stirred tank reactor (CSTR) is chosen as the implementation case to illustrate the fault diagnosis algorithm of the invention. The tank reactor model is:
dC_A/dt = (q/V)(C_Af - C_A) - k_0 exp(-E/(RT)) C_A    (21)
dT/dt = (q/V)(T_f - T) + ((-ΔH)/(ρ C_P)) k_0 exp(-E/(RT)) C_A + (UA/(V ρ C_P))(T_c - T)    (22)
In this model q is a time-varying parameter. Joint parameter and state estimation is adopted, with q treated as an extended state; the state vector is x = [x_1, x_2, x_3]^T = [C_A, T, q]^T, the control input is u = T_c, and the output is y = [y_1, y_2]^T = [C_A, T]^T. The control objective of the reactor is to track a concentration set point. Formulas (21) and (22) are discretized by the Euler method, and with initial state x_1(0) = 0.2 mol/L, x_2(0) = 400 K, x_3(0) = 100 L/min and sampling interval dt = 0.2 min, the tank reactor model is represented in the computer.
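A sketch of the Euler discretization in Python. The patent gives only the initial state and the sampling interval; the physical constants below (V, C_Af, T_f, k_0, E/R, ΔH, ρ, C_P, UA) are values commonly used for this benchmark reactor in the literature and are assumptions here:

```python
import math

# Assumed benchmark parameters for the CSTR of formulas (21)-(22).
PAR = dict(V=100.0, C_Af=1.0, T_f=350.0, k0=7.2e10, E_R=8750.0,
           dH=-5.0e4, rho=1000.0, Cp=0.239, UA=5.0e4)

def cstr_step(C_A, T, q, T_c, dt=0.2, p=PAR):
    """One Euler step of the CSTR model (21)-(22)."""
    r = p["k0"] * math.exp(-p["E_R"] / T) * C_A           # reaction rate term
    dC = q / p["V"] * (p["C_Af"] - C_A) - r               # formula (21)
    dT = (q / p["V"] * (p["T_f"] - T)                     # formula (22)
          + (-p["dH"]) / (p["rho"] * p["Cp"]) * r
          + p["UA"] / (p["V"] * p["rho"] * p["Cp"]) * (T_c - T))
    return C_A + dt * dC, T + dt * dT
```

Iterating cstr_step from the initial state produces the concentration and temperature time series used to train the diagnosis network.
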
From k = 100 onward, the feed flow rate q of the tank reactor system declines along an exponential curve:
q(k) = q(100) + 1 - exp((k - 100)/80)    (23)
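The fault signal of formula (23) is straightforward to reproduce; the function name is assumed:

```python
import math

def q_fault(k, q100=100.0):
    """Incipient fault of formula (23): from sample k = 100 onward the feed
    flow rate declines along an exponential curve."""
    if k < 100:
        return q100
    return q100 + 1.0 - math.exp((k - 100) / 80.0)
```
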
The intention is that the reaction concentration C_A and the reaction temperature T computed by the neural network online learning algorithm designed in the invention are compared with the concentration and temperature of the system under normal conditions; if the difference exceeds a certain threshold, the system is considered faulty.
First, according to the method for determining the minimum embedding dimension, the minimum embedding dimensions of the reaction temperature T and the reaction concentration C_A are determined; the time series of C_A and T are constructed in the computer from formulas (21) and (22).
Solving for the C_A time series gives E_con(1) = 2.0932, E_con(2) = 1.0705, E_con(3) = 1.0514, E_con(4) = 1.0469. Since |E_con(3) - E_con(4)| = 0.0045 < 0.005, the corresponding embedding dimension is p = 4; the embedding dimension obtained for the reaction temperature T in the same way is also 4.
Next, using the minimum embedding dimension found, the time series of the reaction temperature T and the reaction concentration C_A are reconstructed: when the input data of the online-learning network are x'(1), x'(2), ..., x'(p); x'(2), x'(3), ..., x'(p+1); ...; x'(k), x'(k+1), ..., x'(k+p-1), the corresponding desired network outputs are x'(p+1), x'(p+2), ..., x'(k+p), which yields the real-time training sample pairs of the neural network.
The training samples are then used with the proposed online learning algorithm to train networks for the reaction temperature T and the reaction concentration C_A, yielding the network centres, weights, and other parameters, and the network output f(X_i) is computed, where X_i = (x'(i), x'(i+1), ..., x'(i+p-1)). Initially e_tn = ξ_max, and e_tn decays exponentially as e_tn = max{ξ_max γ^n, ζ_min}, where 0 < γ < 1 is a constant that guarantees a smooth approximation and ξ_max and ζ_min are chosen initial parameters. To further constrain the pruning condition during online learning and avoid erroneous pruning that would harm the accuracy of the network output, a sliding window is added to the pruning process: the network is pruned only when the pruning condition holds for M consecutive samples, where M is a selectable data-window length.
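The sliding-window guard described here can be sketched as a check that the pruning flag holds for M consecutive samples; the function and variable names are assumed:

```python
from collections import deque

def sliding_window_prune_ok(flags, M=5):
    """Return True only if the per-sample 'prune me' flag of a node has
    held for M consecutive samples, which avoids deleting a node on a
    single noisy sample. M is the selectable window length."""
    window = deque(maxlen=M)
    for f in flags:
        window.append(f)
        if len(window) == M and all(window):
            return True
    return False
```
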
Finally, the network output is de-normalized, the difference between the learning network's computed temperature and concentration outputs and the actual observations is formed, and this difference is compared with the preset detection threshold, thereby accomplishing fault diagnosis.
The embodiment above is only an example that clearly illustrates the invention and does not limit its implementations; those of ordinary skill in the art can make further changes in different forms on the basis of the above description.

Claims (1)

1. A real-time fault diagnosis method based on an online-learning minimum embedding dimension network, characterized by comprising: determining the minimum embedding dimension of a time series with a neural network; normalizing the network input samples and reconstructing the network input and output sample values from the time series; training the learning network online while dynamically adjusting the network structure; and de-normalizing the output of the learning network, computing the difference between the output value and the actual observation, and comparing this difference with a preset detection threshold to accomplish fault diagnosis.
CN201410456900.7A 2014-09-09 2014-09-09 Real-time fault diagnosis method based on online learning minimum embedding dimension network Pending CN104200269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410456900.7A CN104200269A (en) 2014-09-09 2014-09-09 Real-time fault diagnosis method based on online learning minimum embedding dimension network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410456900.7A CN104200269A (en) 2014-09-09 2014-09-09 Real-time fault diagnosis method based on online learning minimum embedding dimension network

Publications (1)

Publication Number Publication Date
CN104200269A true CN104200269A (en) 2014-12-10

Family

ID=52085558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410456900.7A Pending CN104200269A (en) 2014-09-09 2014-09-09 Real-time fault diagnosis method based on online learning minimum embedding dimension network

Country Status (1)

Country Link
CN (1) CN104200269A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106301610A (en) * 2016-08-29 2017-01-04 北京航空航天大学 The adaptive failure detection of a kind of superhet and diagnostic method and device
WO2018120000A1 (en) * 2016-12-30 2018-07-05 Nokia Technologies Oy Artificial neural network
US11042722B2 (en) 2016-12-30 2021-06-22 Nokia Technologies Oy Artificial neural network
CN111738439A (en) * 2020-07-21 2020-10-02 电子科技大学 Artificial intelligence processing method and processor supporting online learning

Similar Documents

Publication Publication Date Title
CN109472110B (en) Method for predicting residual service life of aeroengine based on LSTM network and ARIMA model
CN112926273B (en) Method for predicting residual life of multivariate degradation equipment
CN108960303B (en) Unmanned aerial vehicle flight data anomaly detection method based on LSTM
CN109766583B (en) Aircraft engine life prediction method based on unlabeled, unbalanced and initial value uncertain data
EP1709499B1 (en) Trending system and method using window filtering
CN104166787B (en) A kind of aero-engine method for predicting residual useful life based on multistage information fusion
CN102789545B (en) Based on the Forecasting Methodology of the turbine engine residual life of degradation model coupling
CN103389472B (en) A kind of Forecasting Methodology of the cycle life of lithium ion battery based on ND-AR model
CN107977710A (en) Multiplexing electric abnormality data detection method and device
CN109376401B (en) Self-adaptive multi-source information optimization and fusion mechanical residual life prediction method
CN114297036B (en) Data processing method, device, electronic equipment and readable storage medium
CN110309537B (en) Intelligent health prediction method and system for aircraft
CN104156612B (en) Fault forecasting method based on particle filter forward and reverse direction prediction errors
CN112487694B (en) Complex equipment residual life prediction method based on multiple degradation indexes
CN115935834A (en) History fitting method based on deep autoregressive network and continuous learning strategy
CN104200269A (en) Real-time fault diagnosis method based on online learning minimum embedding dimension network
CN117313029A (en) Multi-sensor data fusion method based on Kalman filtering parameter extraction and state updating
Jahani et al. Stochastic prognostics under multiple time-varying environmental factors
CN110533109A (en) A kind of storage spraying production monitoring data and characteristic analysis method and its device
CN112132324A (en) Ultrasonic water meter data restoration method based on deep learning model
CN103793614A (en) Catastrophe filter algorithm
CN107220705A (en) Atmospheric and vacuum distillation unit Atmospheric Tower does Forecasting Methodology
CN109065176B (en) Blood glucose prediction method, device, terminal and storage medium
Sankararaman et al. Uncertainty in prognostics: Computational methods and practical challenges
Song et al. A sliding sequence importance resample filtering method for rolling bearings remaining useful life prediction based on two Wiener-process models

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141210