CN111859798A - Flow industrial fault diagnosis method based on bidirectional long-time and short-time neural network - Google Patents

Flow industrial fault diagnosis method based on bidirectional long-time and short-time neural network

Info

Publication number
CN111859798A
CN111859798A (application number CN202010675680.2A)
Authority
CN
China
Prior art keywords
fault
model
neural network
time
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010675680.2A
Other languages
Chinese (zh)
Inventor
罗林 (Luo Lin)
赵子雯 (Zhao Ziwen)
王乔 (Wang Qiao)
陈帅 (Chen Shuai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Shihua University
Original Assignee
Liaoning Shihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Shihua University filed Critical Liaoning Shihua University
Priority to CN202010675680.2A priority Critical patent/CN111859798A/en
Publication of CN111859798A publication Critical patent/CN111859798A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Testing Of Devices, Machine Parts, Or Other Structures Thereof (AREA)

Abstract

The invention discloses a process industry fault diagnosis method based on a bidirectional long short-term memory (BiLSTM) neural network, which comprises the following steps: S1: data set preparation: in the TE model, faults are introduced starting from the 160th group of data, and the data set is used to establish a monitoring model; S2: feature extraction: data features are extracted with a gradient boosting machine, which searches in the direction of gradient descent for an additive model composed of multiple classifiers; S3: establishment of the experiment platform; S4: carrying out the experiment: a bidirectional LSTM model is built with the Keras framework, and the research object is the Tennessee Eastman model; S5: experimental results. The method exploits the strong generalization ability of the bidirectional LSTM network, avoids the vanishing-gradient and exploding-gradient problems on long sequences, and addresses the low accuracy, frequent missed and false alarms, and poor generalization of existing process industry fault diagnosis.

Description

Flow industrial fault diagnosis method based on bidirectional long-time and short-time neural network
Technical Field
The invention relates to the technical field of fault diagnosis, and in particular to a process industry fault diagnosis method based on a bidirectional long short-term memory (BiLSTM) neural network.
Background
With the rapid development of computer technology and modern industry, industrial processes have become more intelligent and more complex, which places higher demands on the safety of the production process. The occurrence of a fault can cause heavy casualties and property loss, so it is important to detect and diagnose system faults in a timely manner. The development of fault diagnosis has gone through three stages. In the first stage, diagnosis relied mainly on the experience, senses and simple data of experts and maintenance personnel; because production equipment was simple, fault diagnosis and monitoring were simple as well. In the second stage, with the development of sensors and signal technology, fault diagnosis and detection came to rely on instrumentation and found widespread use in maintenance. In the third stage, with the development of computer technology and artificial intelligence, fault diagnosis and detection entered an intelligent stage.
According to the modeling method used, fault diagnosis techniques can be divided into quantitative-model, qualitative-model and data-driven methods. Quantitative-model methods include state estimation, parameter estimation and analytical redundancy; all of them require an accurate mechanistic model, which is difficult to establish for an industrial process because of characteristics such as nonlinearity, time variation, variable coupling, temporal correlation, multimodality and intermittence. Qualitative-model methods mainly use expert knowledge and causal relationships to realize fault diagnosis and localization by deductive reasoning; as the number of unknown faults grows with industrial development, such methods cannot acquire complete knowledge and are therefore limited. Data-driven methods establish a model from data of normal working conditions and refine it with large amounts of data so that it fits the process better; no accurate mechanistic model is needed, which makes them well suited to today's complex process industries. In recent years the development of sensor technology and real-time data storage has made it possible to store large amounts of data, laying the foundation for data-driven methods. Data-driven methods mainly include multivariate statistical analysis, machine learning, signal processing and information fusion.
Traditional process industry fault diagnosis methods suffer from low accuracy, frequent missed and false alarms, and poor generalization. With the improvement of technical means, large amounts of fault data can now be stored, and the process industry fault diagnosis method based on a bidirectional long short-term memory neural network is therefore proposed.
Disclosure of Invention
The process industry fault diagnosis method based on the bidirectional long short-term memory neural network provided by the invention solves the problems described in the background art.
In order to achieve the purpose, the invention adopts the following technical scheme:
the process industry fault diagnosis method based on the bidirectional long-time neural network comprises the following steps:
s1: data set preparation: the TE model introduces faults from 160 groups of data, the data set is used for establishing a monitoring model, the faults are 21 predefined faults and 1 data set of normal working conditions, the test set under the normal condition is stored in d00_ te.txt, the test set with the training set of d00.txt fault 1 is stored in d01_ te.txt training set of d01.txt, … …, the test set of fault 21 is d21_ te.txt, the training set of d21.txt, 520 pieces under the normal working conditions are selected, and the 1 st to 520 pieces of the training data set of d00_ txt are modeled, wherein the faults 8, 12, 13, 17 and 20 are selected to verify the accuracy of the model;
S2: feature extraction: the characteristic extraction adopts the data characteristic extraction based on a gradient hoisting machine, and an additive model formed by a plurality of classifiers is searched in the gradient descending direction;
s3, establishing an experimental platform: the method comprises the steps of establishing a test platform for 8192MBRAM (national memory management system) by taking a computer model as associative thinkpa, an operating system as Windows10 family Chinese edition (64 bits), a CPU as Intel (R) core (TM) i5-8250UCPU @1.60GHz (8CPU) and an internal memory as 8192MBRAM, and performing fault experiments in a python3.7 language environment;
s4: the experiment was carried out: a Keras framework is adopted to build a bidirectional long-time and short-time neural network model, and a research object is a Tennessee Iseman model;
verifying by adopting a fault 8, a fault 12, a fault 13, a fault 17 and a fault 20, and extracting characteristics of a fault data set and a normal working condition data set through a gradient elevator;
then, the extracted features are used as the input of a bidirectional long-short time neural network, the bidirectional long-short time neural network carries out secondary classification, the experimental judgment basis is the accuracy, and the experimental result is displayed through a box type graph;
s5: the experimental results are as follows: the upper and lower limit accuracy of the fault 8, the fault 12 and the fault 17 is almost 100%, the abnormal value is relatively less, the accuracy is relatively stable and small in floating, the precision is high, the fault 13 is known to be a fault related to slow drift in reaction kinetics according to the fault type, the fault is often large in fluctuation, the upper limit accuracy is 100% in the bidirectional LSTM, the median is 0.89, the lower limit is 0.55, and the experimental result shows that the effect is good.
Preferably, in step S4 faults 8, 12, 13, 17 and 20 of the Tennessee Eastman data set are selected as the fault set, and the fault samples are divided into a training set and a test set at a ratio of 7:3.
Preferably, the sample data are modeled online with a time-step length of 350; each run in PyCharm yields a corresponding accuracy, the accuracies obtained are fed into the box-plot code to obtain the box plot of the BiLSTM, and the box plot depicts the accuracy distribution stably without being distorted by outliers.
The invention has the following beneficial effects: the strong generalization ability of the bidirectional LSTM network is exploited, the vanishing-gradient and exploding-gradient problems on long sequences are avoided, and the low accuracy, frequent missed and false alarms, and poor generalization of process industry fault diagnosis are addressed; applying the bidirectional LSTM network to the field of process engineering meets the need for technical innovation; diagnosing faults in the process industry in advance with the bidirectional LSTM network and taking corresponding measures can greatly reduce casualties and property loss.
Drawings
FIG. 1 is a schematic diagram of the operation of the forget gate of the present invention;
FIG. 2 is a schematic diagram of the operation of the input gate of the present invention;
FIG. 3 is a schematic diagram of the operation of the output gate of the present invention;
FIG. 4 is a schematic diagram of the bidirectional LSTM operation model of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Referring to FIGS. 1 to 4, the process industry fault diagnosis method based on the bidirectional long short-term memory neural network comprises the following steps:
S1: data set preparation: in the TE model, faults are introduced starting from the 160th group of data, and the data set is used to establish a monitoring model. The data comprise 21 predefined faults and 1 normal-operating-condition data set: the normal-condition test set is stored in d00_te.txt and its training set in d00.txt, the test set of fault 1 is stored in d01_te.txt and its training set in d01.txt, ..., and the test set of fault 21 is d21_te.txt with training set d21.txt. 520 samples under normal operating conditions are selected, and samples 1 to 520 of the d00.txt training data set are used for modeling; faults 8, 12, 13, 17 and 20 are selected to verify the accuracy of the model (an illustrative data-loading sketch follows this list of steps);
S2: feature extraction: data features are extracted with a gradient boosting machine, which searches in the direction of gradient descent for an additive model composed of multiple classifiers;
S3: establishment of the experiment platform: the experiment platform is a Lenovo ThinkPad computer running Windows 10 Home Chinese Edition (64-bit), with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz (8 CPUs) and 8192 MB of RAM, and the fault experiments are carried out in a Python 3.7 language environment;
S4: carrying out the experiment: a bidirectional long short-term memory neural network model is built with the Keras framework, and the research object is the Tennessee Eastman model;
faults 8, 12, 13, 17 and 20 are used for verification, and features of the fault data sets and the normal-condition data set are extracted with the gradient boosting machine;
the extracted features are then used as the input of the bidirectional LSTM network, which performs binary classification; the evaluation criterion of the experiment is accuracy, and the experimental results are displayed as box plots;
S5: experimental results: for faults 8, 12 and 17 the upper- and lower-bound accuracies are close to 100%, with relatively few outliers, and the accuracy is stable, fluctuates little and is high; fault 13 is, according to its fault type, a slow drift in the reaction kinetics and therefore tends to fluctuate strongly, yet with the bidirectional LSTM its upper-bound accuracy is 100%, the median is 0.89 and the lower bound is 0.55, which shows that the method performs well.
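The following is a minimal illustrative sketch, not part of the original disclosure, of loading the TE data files named in step S1 and splitting the fault samples 7:3 as in the preferred embodiment; the whitespace-delimited file layout, the NumPy/scikit-learn usage and the variable names are assumptions of this sketch.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def load_te_file(path):
    """Read one whitespace-delimited TE data file, e.g. d00.txt or d01_te.txt (assumed layout)."""
    return np.loadtxt(path)

# Normal operating condition: samples 1-520 of the d00 training data are used for modelling.
normal = load_te_file("d00.txt")[:520]

# Faults used to verify the model, as listed in step S1.
fault_ids = [8, 12, 13, 17, 20]
faults = {k: load_te_file(f"d{k:02d}.txt") for k in fault_ids}

# 7:3 split of each fault set into training and test data (preferred embodiment).
splits = {k: train_test_split(v, test_size=0.3, shuffle=False) for k, v in faults.items()}
```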
Specifically, in the experiment of step S4, the bidirectional long short-term memory network model is built on the long short-term memory network (LSTM). The LSTM is a variant of the recurrent neural network that overcomes the vanishing-gradient and exploding-gradient problems of recurrent neural networks on longer sequences. It introduces a cell state on top of the hidden layer and selects information through gates, passing useful information on to the next neuron and discarding irrelevant information. The LSTM is mainly composed of a forget gate, an input gate and an output gate. After being processed by a sigmoid or tanh network layer, the three gates take values in (0, 1): an output value of 1 means that all information of the current sample can be used for the next sample, and an output value of 0 means that the next sample is unrelated to the current sample;
The forget gate controls, with a certain probability, whether the hidden cell state of the previous layer is forgotten. The previous hidden state h_{t-1} and the current sequence data x_t are passed through the sigmoid activation function to obtain f_t, which lies between 0 and 1 and acts as a memory attenuation coefficient: the closer it is to 0, the more information is discarded, and the closer it is to 1, the more information is retained. The forget gate is what enables the LSTM to keep long-term memory. The operation of the forget gate is shown in FIG. 1.
The output of the forget gate is:
f_t = σ(W_f h_{t-1} + U_f X_t + b_f)  (1)
where σ denotes the sigmoid activation function and W_f, U_f, b_f denote the weights of the neural network. The forget gate first reads the hidden-layer output h_{t-1} of the previous sample and the current sample X_t, and then outputs a value that determines which part of the cell state C_{t-1} is kept.
The input gate: the LSTM has three inputs, the network input x_t at the current time, the LSTM output at the previous time, and the previous cell state. The input gate processes the current sequence position and consists of two parts: 1) a sigmoid activation function, which outputs i_t and decides which values will be updated; 2) a tanh activation function, which outputs a_t. The two results are multiplied to update the cell state. The operation of the input gate is shown in FIG. 2.
The output of the input gate is:
i_t = σ(W_i h_{t-1} + U_i X_t + b_i)  (2)
a_t = tanh(W_m h_{t-1} + U_m X_t + b_m)  (3)
The output gate: the LSTM has two outputs, the cell state and the hidden state.
The update of the cell state is the joint result of the forget gate and the input gate and consists of two parts: the first part is the product of c_{t-1} and the forget gate f_t, and the second part is the product of the input gate i_t and a_t:
c_t = c_{t-1} f_t + i_t a_t  (4)
The output of the hidden state consists of two parts: the first part, o_t, is obtained from the previous hidden state h_{t-1} and the current sequence data X_t through the sigmoid activation function; the second part is obtained from the cell state c_t through the tanh activation function. The operation of the output gate is shown in FIG. 3.
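For illustration only, equations (1) to (4) together with the hidden-state output can be sketched in NumPy as follows; the dictionary-based weight layout and the function names are assumptions of this sketch, not part of the original disclosure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step following equations (1)-(4); W, U, b are dicts of per-gate weights."""
    f_t = sigmoid(W["f"] @ h_prev + U["f"] @ x_t + b["f"])   # forget gate, eq. (1)
    i_t = sigmoid(W["i"] @ h_prev + U["i"] @ x_t + b["i"])   # input gate, eq. (2)
    a_t = np.tanh(W["m"] @ h_prev + U["m"] @ x_t + b["m"])   # candidate state, eq. (3)
    c_t = c_prev * f_t + i_t * a_t                           # cell state update, eq. (4)
    o_t = sigmoid(W["o"] @ h_prev + U["o"] @ x_t + b["o"])   # output gate
    h_t = o_t * np.tanh(c_t)                                 # hidden state output
    return h_t, c_t
```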
The bidirectional LSTM (BiLSTM) is a variant of the LSTM that introduces the concepts of the forward and backward time directions, analogous to the way humans use both preceding and following context when understanding text. The BiLSTM network structure is shown in FIG. 4. Each hidden layer unit stores two pieces of information, A and A*: A participates in the forward pass and A* in the backward pass. The forward LSTM processes the sequence from X_1 to X_n, and the backward LSTM processes it from X_n to X_1; together they determine the output y, so that the output draws on both past and future features. In the forward pass the hidden unit S_t is related to S_{t-1}, and the corresponding hidden units of the backward pass (shown as image formulas in the original publication) are related in the same way.
For time sequences whose order is not causally constrained, the bidirectional LSTM uses the known sequence in both its original and its reversed order to extract features of the original sequence in the forward and backward directions, which significantly improves model accuracy. The bidirectional LSTM therefore performs better than the unidirectional LSTM on time-series problems and has greater potential on complex tasks. The bidirectional LSTM operation model is shown in FIG. 4.
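A minimal sketch of how such a BiLSTM binary classifier could be built with the Keras framework mentioned in step S4; the layer width, the assumed number of input features and the training settings are illustrative assumptions rather than values from the original disclosure.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense

time_steps = 350      # time-step length stated in the description
n_features = 10       # assumed number of features kept after gradient boosting selection

model = Sequential([
    Bidirectional(LSTM(64), input_shape=(time_steps, n_features)),
    Dense(1, activation="sigmoid"),   # binary classification: fault vs. normal condition
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=20, batch_size=32, validation_data=(X_test, y_test))
```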
Specifically, in step S2, the gradient boosting algorithm (Gradient Boosting) builds a combined model; the algorithm proceeds as follows:
F*(x) = arg min_F E_{y,x} L(y, F(x))  (5)
Training a model from existing data can be regarded as an optimization problem whose goal is the model F*(x) satisfying equation (5); the solution space of this problem lies in function space rather than in a numerical space. If F*(x) is rewritten as a parameterized model F(x, p), the function optimization problem becomes a parameter optimization problem:
p* = arg min_p Φ(p)  (6)
F*(x) = F(x, p*)  (7)
where p is the parameter of the model and Φ(p) denotes the loss function expressed in terms of p. To estimate the optimal parameter p* in this parameter optimization problem, a stagewise optimization method is usually adopted: an initial parameter value p_0 is guessed first, and the optimal parameter p* is then obtained through M iterative boosting steps, expressed in cumulative form as
p* = Σ_{m=0}^{M} p_m
where p_0 is the initial parameter value and {p_1, p_2, ..., p_m} are the increments obtained in the successive steps. The gradient descent method, also called the steepest descent method, is a typical numerical optimization method. Its idea is that if a real-valued function Φ(p) is differentiable and defined at a point a, then it decreases fastest in the direction opposite to its gradient at a, so the minimum of the function is found most easily by moving against the gradient. If gradient descent is used to solve for the optimal parameter p*, the steps are:
k_m = [∂Φ(p)/∂p]_{p = p_{m-1}},  p_m = -ρ_m k_m,  ρ_m = arg min_ρ Φ(p_{m-1} - ρ k_m)
For a model that can be represented by parameters, gradient descent over the parameters is a very good solution method, but it is not feasible for a non-parametric model. An alternative idea is to regard the whole F(x) as the parameter and search for the optimal function in function space. However, since the sample data are finite, the values of the function F(x) are only constrained on the sample set, so the function cannot be estimated accurately outside the samples. To overcome this limitation of the function-space view, Friedman proposed the gradient boosting algorithm (Gradient Boosting), whose flow is as follows. The boosting algorithm essentially builds an additive model composed of several basis functions, represented as
F(x) = Σ_{m=0}^{M} β_m b(x; γ_m)
where β_m is the expansion coefficient, b(x; γ_m) is a basis function and b_0(x; γ_0) is the initial basis function. The optimal estimate F*(x) is obtained through M boosting steps and is required to satisfy
{β_m, γ_m}_{m=1}^{M} = arg min Σ_{i=1}^{N} L(y_i, Σ_{m=0}^{M} β_m b(x_i; γ_m))  (8)
The problem of solving (8) can therefore be converted into the following stagewise problems, i.e. for m = 1, 2, ..., M,
(β_m, γ_m) = arg min_{β,γ} Σ_{i=1}^{N} L(y_i, F_{m-1}(x_i) + β b(x_i; γ))  (9)
F_m(x) = F_{m-1}(x) + β_m b(x; γ_m)  (10)
From (9) and (10), the boosting model F_{m-1}(x) and the added term β_m b(x; γ_m) satisfy the constraint of (8) that the loss function is minimized; in other words, the basis function added at each iteration is the one that most reduces (8). In this sense the newly added basis function plays the role of p_m = -ρ_m k_m in the gradient descent method, and the squared error is used to measure the closeness between the basis function and the negative gradient, i.e.
-k_m(x_i) = -[∂L(y_i, F(x_i)) / ∂F(x_i)]_{F = F_{m-1}}
γ_m = arg min_{γ,β} Σ_{i=1}^{N} [-k_m(x_i) - β b(x_i; γ)]^2  (11)
Expression (11) means that β b(x_i; γ) is used to fit the negative gradient -k_m(x_i), the approximation being measured by the squared error, and the basis function parameter γ_m is the one that minimizes the fitting error.
The gradient boosting algorithm uses a decision tree as the basis function. A tree with V terminal nodes can be represented as
T(x; θ) = Σ_{v=1}^{V} γ_v I(x ∈ R_v)  (12)
where θ = {R_v, γ_v} are the parameters of the tree, R_v are the regions into which the space of the input variable x is divided during tree building, and γ_v is the value of each terminal node; θ_m plays the role of γ_m in the additive expansion. The gradient boosting algorithm can then be expressed as an accumulation of M trees:
F_M(x) = Σ_{m=1}^{M} T(x; θ_m)  (13)
Each base tree T_m(x, θ_m) added to the model requires θ_m to satisfy
θ_m = arg min_θ Σ_{i=1}^{N} L(y_i, F_{m-1}(x_i) + T(x_i; θ))  (14)
Analogously to equation (11), an approximate solution is obtained by fitting the negative gradient:
θ̃_m = arg min_θ Σ_{i=1}^{N} [-k_m(x_i) - T(x_i; θ)]^2  (15)
The base tree is thus fitted to the negative gradient of the current loss function, with the gradient values and the input variable x taken as new samples, and the gradient boosting model is updated as
F_m(x) = F_{m-1}(x) + η T(x; θ_m)  (16)
where η is a coefficient that serves as a regularization measure to prevent overfitting.
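For illustration only, the gradient-boosting-based feature extraction of step S2 could be sketched with scikit-learn as follows; the choice of library, the hyperparameters and the importance-ranking criterion are assumptions of this sketch, not details from the original disclosure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def select_features_by_gbm(X_train, y_train, top_k=10):
    """Rank features with a gradient boosting machine and keep the top_k most important ones."""
    gbm = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)  # learning_rate plays the role of eta in eq. (16)
    gbm.fit(X_train, y_train)
    order = np.argsort(gbm.feature_importances_)[::-1]   # most important features first
    keep = order[:top_k]
    return keep, X_train[:, keep]
```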
Specifically, in step S4 the sample data are selected as follows: faults 8, 12, 13, 17 and 20 of the Tennessee Eastman data set are selected as the fault set, and the fault samples are divided into a training set and a test set at a ratio of 7:3. The fault numbers and fault types are listed in Table 1.
Table 1: Fault numbers and fault types (the table is provided as an image in the original publication)
Specifically, the sample data are modeled online with a time-step length of 350. Each run in PyCharm yields a corresponding accuracy; the accuracies obtained are fed into the box-plot code to obtain the box plot of the BiLSTM, and the box plot depicts the accuracy distribution stably without being distorted by outliers.
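A minimal sketch, assuming matplotlib and hypothetical per-run accuracy values that are not results from the original disclosure, of how the collected accuracies could be rendered as such a box plot:

```python
import matplotlib.pyplot as plt

# Hypothetical per-run BiLSTM accuracies collected over repeated runs, one list per fault.
accuracies = {
    "fault 8":  [0.99, 1.00, 0.98, 1.00, 0.99],
    "fault 13": [0.55, 0.89, 0.93, 1.00, 0.87],
}

plt.boxplot(list(accuracies.values()), labels=list(accuracies.keys()))
plt.ylabel("classification accuracy")
plt.title("BiLSTM accuracy distribution per fault")
plt.show()
```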
In conclusion, the strong generalization ability of the bidirectional LSTM network is exploited to avoid the vanishing-gradient and exploding-gradient problems on long sequences and to address the low accuracy, frequent missed and false alarms, and poor generalization of process industry fault diagnosis. The bidirectional LSTM network has achieved remarkable results in many fields but is so far little used in the domestic process industry, so applying it to this field meets the need for technical innovation. Diagnosing faults in the process industry in advance with the bidirectional LSTM network and taking corresponding measures can greatly reduce casualties and property loss.
The above description covers only preferred embodiments of the present invention, and the scope of protection of the present invention is not limited thereto. Any equivalent replacement or modification of the technical solutions and the inventive concept thereof that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the scope of protection of the present invention.

Claims (3)

1. A process industry fault diagnosis method based on a bidirectional long short-term memory neural network, characterized by comprising the following steps:
S1: data set preparation: in the TE model, faults are introduced starting from the 160th group of data, and the data set is used to establish a monitoring model. The data comprise 21 predefined faults and 1 normal-operating-condition data set: the normal-condition test set is stored in d00_te.txt and its training set in d00.txt, the test set of fault 1 is stored in d01_te.txt and its training set in d01.txt, ..., and the test set of fault 21 is d21_te.txt with training set d21.txt. 520 samples under normal operating conditions are selected, and samples 1 to 520 of the d00.txt training data set are used for modeling; faults 8, 12, 13, 17 and 20 are selected to verify the accuracy of the model;
S2: feature extraction: data features are extracted with a gradient boosting machine, which searches in the direction of gradient descent for an additive model composed of multiple classifiers;
S3: establishment of the experiment platform: the experiment platform is a Lenovo ThinkPad computer running Windows 10 Home Chinese Edition (64-bit), with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz (8 CPUs) and 8192 MB of RAM, and the fault experiments are carried out in a Python 3.7 language environment;
S4: carrying out the experiment: a bidirectional long short-term memory neural network model is built with the Keras framework, and the research object is the Tennessee Eastman model;
faults 8, 12, 13, 17 and 20 are used for verification, and features of the fault data sets and the normal-condition data set are extracted with the gradient boosting machine;
the extracted features are then used as the input of the bidirectional LSTM network, which performs binary classification; the evaluation criterion of the experiment is accuracy, and the experimental results are displayed as box plots;
S5: experimental results: for faults 8, 12 and 17 the upper- and lower-bound accuracies are close to 100%, with relatively few outliers, and the accuracy is stable, fluctuates little and is high; fault 13 is, according to its fault type, a slow drift in the reaction kinetics and therefore tends to fluctuate strongly, yet with the bidirectional LSTM its upper-bound accuracy is 100%, the median is 0.89 and the lower bound is 0.55, which shows that the method performs well.
2. The method according to claim 1, wherein in step S4 faults 8, 12, 13, 17 and 20 of the Tennessee Eastman data set are selected as the fault set, and the fault samples are divided into a training set and a test set at a ratio of 7:3.
3. The method according to claim 2, wherein the sample data are modeled online with a time-step length of 350, each run in PyCharm yields a corresponding accuracy, the accuracies obtained are fed into the box-plot code to obtain the box plot of the BiLSTM, and the box plot depicts the accuracy distribution stably without being distorted by outliers.
CN202010675680.2A 2020-07-14 2020-07-14 Flow industrial fault diagnosis method based on bidirectional long-time and short-time neural network Pending CN111859798A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010675680.2A CN111859798A (en) 2020-07-14 2020-07-14 Flow industrial fault diagnosis method based on bidirectional long-time and short-time neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010675680.2A CN111859798A (en) 2020-07-14 2020-07-14 Flow industrial fault diagnosis method based on bidirectional long-time and short-time neural network

Publications (1)

Publication Number Publication Date
CN111859798A true CN111859798A (en) 2020-10-30

Family

ID=72983909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010675680.2A Pending CN111859798A (en) 2020-07-14 2020-07-14 Flow industrial fault diagnosis method based on bidirectional long-time and short-time neural network

Country Status (1)

Country Link
CN (1) CN111859798A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113027684A (en) * 2021-03-24 2021-06-25 明阳智慧能源集团股份公司 Intelligent control system for improving clearance state of wind generating set

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992597A (en) * 2017-12-13 2018-05-04 国网山东省电力公司电力科学研究院 A kind of text structure method towards electric network fault case
CN109783997A (en) * 2019-03-12 2019-05-21 华北电力大学 A kind of transient stability evaluation in power system method based on deep neural network
CN109931678A (en) * 2019-03-13 2019-06-25 中国计量大学 Air-conditioning fault diagnosis method based on deep learning LSTM
CN110132598A (en) * 2019-05-13 2019-08-16 中国矿业大学 Slewing rolling bearing fault noise diagnostics algorithm
CN110232395A (en) * 2019-03-01 2019-09-13 国网河南省电力公司电力科学研究院 A kind of fault diagnosis method of electric power system based on failure Chinese text
CN110261109A (en) * 2019-04-28 2019-09-20 洛阳中科晶上智能装备科技有限公司 A kind of Fault Diagnosis of Roller Bearings based on bidirectional memory Recognition with Recurrent Neural Network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992597A (en) * 2017-12-13 2018-05-04 国网山东省电力公司电力科学研究院 A kind of text structure method towards electric network fault case
CN110232395A (en) * 2019-03-01 2019-09-13 国网河南省电力公司电力科学研究院 A kind of fault diagnosis method of electric power system based on failure Chinese text
CN109783997A (en) * 2019-03-12 2019-05-21 华北电力大学 A kind of transient stability evaluation in power system method based on deep neural network
CN109931678A (en) * 2019-03-13 2019-06-25 中国计量大学 Air-conditioning fault diagnosis method based on deep learning LSTM
CN110261109A (en) * 2019-04-28 2019-09-20 洛阳中科晶上智能装备科技有限公司 A kind of Fault Diagnosis of Roller Bearings based on bidirectional memory Recognition with Recurrent Neural Network
CN110132598A (en) * 2019-05-13 2019-08-16 中国矿业大学 Slewing rolling bearing fault noise diagnostics algorithm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jerome H. Friedman: "Greedy Function Approximation: A Gradient Boosting Machine", The Annals of Statistics *
王邵鹏 (Wang Shaopeng): "Research on advertisement click prediction based on deep learning", China Master's Theses Full-text Database *
金余丰 (Jin Yufeng): "Research on rolling bearing fault diagnosis methods based on deep learning", China Master's Theses Full-text Database *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113027684A (en) * 2021-03-24 2021-06-25 明阳智慧能源集团股份公司 Intelligent control system for improving clearance state of wind generating set
CN113027684B (en) * 2021-03-24 2022-05-03 明阳智慧能源集团股份公司 Intelligent control system for improving clearance state of wind generating set

Similar Documents

Publication Publication Date Title
Zhou et al. Remaining useful life prediction of bearings by a new reinforced memory GRU network
CN112926273B (en) Method for predicting residual life of multivariate degradation equipment
CN111813084B (en) Mechanical equipment fault diagnosis method based on deep learning
CN111274737A (en) Method and system for predicting remaining service life of mechanical equipment
CN109766583A (en) Based on no label, unbalanced, initial value uncertain data aero-engine service life prediction technique
CN114282443B (en) Residual service life prediction method based on MLP-LSTM supervised joint model
CN112101431A (en) Electronic equipment fault diagnosis system
CN110705812A (en) Industrial fault analysis system based on fuzzy neural network
CN111340110B (en) Fault early warning method based on industrial process running state trend analysis
CN112488235A (en) Elevator time sequence data abnormity diagnosis method based on deep learning
CN116050281A (en) Foundation pit deformation monitoring method and system
Fu et al. MCA-DTCN: A novel dual-task temporal convolutional network with multi-channel attention for first prediction time detection and remaining useful life prediction
CN112434390A (en) PCA-LSTM bearing residual life prediction method based on multi-layer grid search
CN113988210A (en) Method and device for restoring distorted data of structure monitoring sensor network and storage medium
Deng et al. A remaining useful life prediction method with automatic feature extraction for aircraft engines
Zhang et al. Recurrent neural network model with self-attention mechanism for fault detection and diagnosis
CN111859798A (en) Flow industrial fault diagnosis method based on bidirectional long-time and short-time neural network
Raza et al. Application of extreme learning machine algorithm for drought forecasting
Dang et al. seq2graph: Discovering dynamic non-linear dependencies from multivariate time series
CN117520664A (en) Public opinion detection method and system based on graphic neural network
CN117272202A (en) Dam deformation abnormal value identification method and system
CN117010683A (en) Operation safety risk prediction method based on hybrid neural network and multiple agents
Zhou et al. Tunnel settlement prediction by transfer learning
CN115062764B (en) Intelligent illuminance adjustment and environmental parameter Internet of things big data system
CN113962431B (en) Bus load prediction method for two-stage feature processing

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201030)