CN114021469A - Method for monitoring one-stage furnace process based on mixed sequence network - Google Patents

Method for monitoring one-stage furnace process based on mixed sequence network

Info

Publication number
CN114021469A
Authority
CN
China
Prior art keywords
network
decoder
output
hidden layer
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111348802.8A
Other languages
Chinese (zh)
Inventor
宋执环
钱金传
文成林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202111348802.8A priority Critical patent/CN114021469A/en
Publication of CN114021469A publication Critical patent/CN114021469A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00: Computer-aided design [CAD]
    • G06F30/20: Design optimisation, verification or simulation
    • G06F30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/047: Probabilistic or stochastic networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00: Details relating to CAD techniques
    • G06F2111/10: Numerical modelling

Abstract

The invention discloses a method for monitoring a one-stage furnace process based on a mixed sequence network. The monitoring model comprises an encoder, a modal identification network and a decoder. The encoder consists of a group of RNN units, and the hidden layer output of the encoder at the last moment is used by the decoder to reconstruct the sample at the last moment of the input sequence. The decoder is composed of several sub-decoders, and the final reconstruction value is obtained by weighting their outputs with the weights produced by the modal identification network. The network parameters are trained with a weighted reconstruction error, and information entropy terms on the weights are added to the loss function to obtain a more accurate modal identification effect and to prevent the network from collapsing to a single mode. Finally, based on the designed neural network model, a weighted squared prediction error is constructed to indicate whether a fault occurs in the process, and fault variables are identified through contribution degrees. The method can accurately monitor the one-stage furnace process, with high fault detection and identification accuracy.

Description

Method for monitoring one-stage furnace process based on mixed sequence network
Technical Field
The invention belongs to the field of fault detection and fault identification for the one-stage furnace process, and particularly relates to a method for monitoring the one-stage furnace process based on a mixed sequence network.
Background
In order to maintain stable operation of a production process, process monitoring technology has received great attention in recent years. Fault detection and fault identification are the main components of process monitoring: the purpose of fault detection is to detect whether a fault exists in the currently operating industrial process, and fault identification is to locate the fault variables after a fault is determined to have occurred, so as to help engineers troubleshoot and recover the process.
The one-stage furnace is a core production unit in the ammonia synthesis process; it mainly converts natural gas into the raw material hydrogen. The reaction steps of this process are complex, and strong nonlinear relations often exist between the process variables, so traditional data-driven fault detection methods based on a linearity assumption cannot deliver good detection performance when applied to this process. In addition, owing to changes in operating conditions and in process raw materials, the process data of a one-stage furnace often show multi-modal characteristics, which also makes it difficult for traditional data-driven models built under a single-condition or steady-state assumption to monitor the process effectively. In recent years, deep learning has become a research focus; it mainly comprises neural-network-based feature extraction models and has been widely applied in industry-related fields because of its superior performance in processing nonlinear data.
The autoencoder is a common deep learning model that can be used to mine complex features in data, but it cannot mine the dynamic features in the data; in addition, when data with multi-modal characteristics are used directly, the features of the different modes may be confused, which degrades the monitoring performance.
Disclosure of Invention
In order to address problems such as nonlinearity and multi-modality in the one-stage furnace process, the invention provides a method for monitoring the one-stage furnace process based on a mixed sequence network.
The specific technical scheme of the invention is as follows:
a method for monitoring a one-stage furnace process based on a mixed sequence network comprises the following steps:
s1: constructing a process monitoring model based on a mixed sequence network and performing feature mining on the one-stage furnace process;
the process monitoring model comprises an encoder, a modal identification network and a decoder; the encoder is an RNN and is used to mine the dynamic features of the process; the hidden layer output at the last moment of the encoder is fed to the modal identification network, which passes it through its hidden layer and then through a softmax layer to output a group of weights indicating the mode to which the current sequence sample belongs; the decoder is also connected to the hidden layer output at the last moment of the encoder and is used to reconstruct the sample at the last moment of the input sequence; the decoder consists of several single-layer neural networks (sub-decoders), and the final reconstruction value is obtained by a weighted summation of their outputs with the weights output by the modal identification network;
s2: collecting process data under normal working conditions of the one-stage furnace process, constructing a data set, setting a sequence length L, serializing the data set, and taking the obtained sequence data set as the training data set X for model training; the nth input sequence is X^n = [x^n_1 x^n_2 ... x^n_L];
S3: inputting a training data set X into a process monitoring model based on a mixed sequence network, carrying out forward propagation to obtain a reconstructed value, and minimizing a loss function by an iterative method until model parameters are converged or the maximum iteration times is reached to obtain a trained process monitoring model;
s4: calculating the detection index WSPE using the training data, and calculating the control limit con_wspe using a kernel density estimation method;
s5: an input sequence of the required length L is constructed from the online sample x of the one-stage furnace and the samples at the preceding L-1 moments, and is substituted into the process monitoring model trained in S3 to obtain the reconstructed outputs of the decoders for x and the output p = [p_1 p_2 ... p_K] of the modal identification network; the reconstructed output of the ith sub-decoder is denoted as x̂^(i);
s6: computing the detection index WSPE_o from the online sample and its reconstructed outputs and comparing it with the control limit con_wspe; when WSPE_o ≤ con_wspe, the online sample is a normal sample; when WSPE_o > con_wspe, the current sample is regarded as a fault sample and is further subjected to fault identification; let the online sample be x = [x_1 x_2 ... x_m] and the reconstructed value of the ith decoder be x̂^(i) = [x̂^(i)_1 x̂^(i)_2 ... x̂^(i)_m];
the contribution index of the jth variable is calculated as:
Cont_j = Σ_{i=1}^{K} p_i (x_j - x̂^(i)_j)^2
s7: regarding the variables with higher contribution degrees as fault variables, as required.
Further, S3 is realized by the following sub-steps:
(1) substituting X^n into the process monitoring model constructed in S1, the forward propagation of X^n gives the hidden layer output of the RNN:
h^e_t = f(U_e x^n_t + W_e h^e_{t-1})
wherein U_e represents the weights that map the input to the hidden layer features of the RNN, U_e ∈ R^{h_e×m}, m being the number of variables of an input sample; W_e represents the weights that map the hidden layer output at time t-1 of the RNN to the hidden layer output at time t, W_e ∈ R^{h_e×h_e}, h_e being the number of nodes of the RNN hidden layer; h^e_t and h^e_{t-1} represent the hidden layer outputs at time t and time t-1 respectively; x^n_t represents the input sample at time t, and f(·) represents the nonlinear activation function of the RNN;
(2) X^n is passed through L forward mappings to obtain the feature output h^e_L of the RNN; the modal identification weights are obtained from this feature output by the forward mapping of the modal identification network:
p^n = softmax(W_p g(W_m h^e_L + b_m) + b_p), denoted p^n = [p^n_1 p^n_2 ... p^n_K]
K being the number of decoders and g(·) the nonlinear activation of the hidden layer of the modal identification network; h_m is the number of nodes of the hidden layer of the modal identification network; W_m and b_m respectively represent the weights and biases that map the input to the hidden layer features of the modal identification network, W_m ∈ R^{h_m×h_e}, b_m ∈ R^{h_m}; W_p and b_p respectively represent the weights and biases that map the hidden layer features of the modal identification network to its output, W_p ∈ R^{K×h_m}, b_p ∈ R^{K};
(3) the feature output of the RNN is mapped by the ith decoder to obtain the reconstruction
x̂^n_i = σ(W^i_d σ(U^i_d h^e_L + c^i_d) + b^i_d)
wherein σ(·) is a nonlinear activation function; U^i_d and c^i_d respectively represent the weights and biases that map the input to the hidden layer features of the ith decoder, U^i_d ∈ R^{h_d×h_e}, c^i_d ∈ R^{h_d}; W^i_d and b^i_d respectively represent the weights and biases that map the hidden layer features of the ith decoder to the output of the ith decoder, W^i_d ∈ R^{m×h_d}, b^i_d ∈ R^{m}; h_d is the number of nodes of the hidden layer of the decoder network;
during training, the loss function is defined as:
L = (1/N) Σ_{n=1}^{N} Σ_{i=1}^{K} p^n_i ||x^n_L - x̂^n_i||^2 + α L_entr1 + β L_entr2
wherein N is the number of sequences used to train the model, α and β are adjustable hyper-parameters, x̂^n_i is the reconstruction of x^n_L output by the ith decoder, and L_entr1 and L_entr2 are information entropy terms computed from the output of the modal identification network; they are used to obtain a more accurate modal identification precision and, at the same time, to prevent the model from collapsing to a single mode and falling into a local optimum during training;
further, the detection index WSPE of S4 is calculated as:
WSPE_n = Σ_{i=1}^{K} p^n_i ||x^n_L - x̂^n_i||^2
and the detection index WSPE_o of S6 as:
WSPE_o = Σ_{i=1}^{K} p_i ||x - x̂^(i)||^2
Further, in S3, the loss function is minimized by a gradient descent method.
The invention has the following beneficial effects:
(1) Aiming at the strong dynamics and nonlinearity of the one-stage furnace process, the encoder part of the process monitoring model consists of a group of recurrent neural network units, and the hidden layer output of the encoder at the last moment is used by the decoder to reconstruct the sample at the last moment of the sequence. In addition, since multi-modal conditions exist in the one-stage furnace process, the decoder part is composed of several sub-decoders in order to prevent confusion among the features of the different modes, and the final reconstruction value is obtained by weighting their outputs with the weights produced by the modal identification network. The network parameters are trained with a weighted reconstruction error, and information entropy terms on the weights are added to the loss function to obtain a more accurate modal identification effect and to prevent the network from collapsing to a single mode. Finally, based on the designed neural network model, a weighted squared prediction error is constructed to indicate whether a fault occurs in the process, and fault variables are identified through contribution degrees.
(2) Because the method is based on deep learning, it has a stronger capability of mining complex process features than traditional process monitoring models. In addition, owing to the RNN structure, the model can effectively extract the dynamic information of the process, which makes the modal identification more accurate; the mined dynamic features also assist the various tasks in process monitoring and thus improve the monitoring accuracy. Compared with traditional process monitoring methods built on a single model, the use of several decoders in the model structure makes the feature extraction for each mode more accurate, which further improves the performance of the method in process monitoring tasks on multi-modal data.
Drawings
FIG. 1 is a flow diagram of a method for monitoring a one-stage furnace process based on a mixed sequence network;
FIG. 2 is a schematic diagram of the model structure of the mixed sequence network.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments, so that the objects and effects of the present invention become more apparent. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The process monitoring model based on the mixed sequence network replaces the encoder of the ordinary autoencoder with a recurrent neural network in order to mine the dynamic features of the data; in addition, considering the multi-modal characteristics of the one-stage furnace process, the decoder part consists of several sub-decoders, and a modal identification network is constructed to output modal identification weights for the subsequent weighted reconstruction.
As shown in FIG. 1, the method for monitoring a one-stage furnace process based on a mixed sequence network of the present invention mainly includes two parts, offline modeling and online monitoring, where the online monitoring part includes fault detection and fault identification of fault samples; the specific steps are as follows:
s1: constructing a process monitoring model based on a mixed sequence network and performing feature mining on the one-stage furnace process;
as shown in FIG. 2, the process monitoring model includes three parts: an encoder, a modal identification network and a decoder; the encoder is an RNN and is used to mine the dynamic features of the process; the hidden layer output at the last moment of the encoder is fed to the modal identification network, which passes it through its hidden layer and then through a softmax layer to output a group of weights indicating the mode to which the current sequence sample belongs; the decoder is also connected to the hidden layer output at the last moment of the encoder and is used to reconstruct the sample at the last moment of the input sequence; the decoder consists of several single-layer neural networks (sub-decoders), and the final reconstruction value is obtained by a weighted summation of their outputs with the weights output by the modal identification network;
s2: collecting process data under normal working conditions of the one-stage furnace process, constructing a data set, setting a sequence length L, serializing the data set, and taking the obtained sequence data set as the training data set X for model training; the nth input sequence is X^n = [x^n_1 x^n_2 ... x^n_L];
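One straightforward way to perform this serialization is a sliding window over the collected samples, as in the following NumPy sketch (the function name and the example window length are assumptions):

    import numpy as np

    def serialize(data, L):
        """Turn a (T, m) array of consecutive samples into (T-L+1, L, m) sequences,
        each sequence containing L consecutive samples ending at its last moment."""
        T = data.shape[0]
        return np.stack([data[t:t + L] for t in range(T - L + 1)])

    # usage: X_train = serialize(normal_data, L=10)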
S3: inputting a training data set X into a process monitoring model based on a mixed sequence network, performing forward propagation, performing weighted summation after obtaining modal identification network output and decoder output to obtain a reconstruction value, taking a reconstruction error as a loss function, and minimizing the loss function through an iterative method (performing iterative update on model parameters by using optimization methods such as gradient descent and the like) until the model parameters are converged or the maximum iteration times is reached to obtain the trained process monitoring model; the method specifically comprises the following substeps:
(1) substituting X^n into the process monitoring model constructed in S1, the forward propagation of X^n gives the hidden layer output of the RNN:
h^e_t = f(U_e x^n_t + W_e h^e_{t-1})
wherein U_e represents the weights that map the input to the hidden layer features of the RNN, U_e ∈ R^{h_e×m}, m being the number of variables of an input sample; W_e represents the weights that map the hidden layer output at time t-1 of the RNN to the hidden layer output at time t, W_e ∈ R^{h_e×h_e}, h_e being the number of nodes of the RNN hidden layer; h^e_t and h^e_{t-1} represent the hidden layer outputs at time t and time t-1 respectively; x^n_t represents the input sample at time t, and f(·) represents the nonlinear activation function of the RNN; the elements of X^n are substituted into this mapping one after another, and the final RNN feature output is obtained;
(2) X^n is passed through L forward mappings to obtain the feature output h^e_L of the RNN; the modal identification weights, which indicate the modal information, are obtained from this feature output by the forward mapping of the modal identification network:
p^n = softmax(W_p g(W_m h^e_L + b_m) + b_p), denoted p^n = [p^n_1 p^n_2 ... p^n_K]
K being the number of decoders and g(·) the nonlinear activation of the hidden layer of the modal identification network, where the size of each element of p^n indicates the probability that the current sequence sample belongs to the corresponding mode; h_m is the number of nodes of the hidden layer of the modal identification network; W_m and b_m respectively represent the weights and biases that map the input to the hidden layer features of the modal identification network, W_m ∈ R^{h_m×h_e}, b_m ∈ R^{h_m}; W_p and b_p respectively represent the weights and biases that map the hidden layer features of the modal identification network to its output, W_p ∈ R^{K×h_m}, b_p ∈ R^{K};
(3) the feature output of the RNN is mapped to the reconstruction value of the ith decoder by the forward mapping of the ith decoder:
x̂^n_i = σ(W^i_d σ(U^i_d h^e_L + c^i_d) + b^i_d)
wherein σ(·) is a nonlinear activation function; U^i_d and c^i_d respectively represent the weights and biases that map the input to the hidden layer features of the ith decoder, U^i_d ∈ R^{h_d×h_e}, c^i_d ∈ R^{h_d}; W^i_d and b^i_d respectively represent the weights and biases that map the hidden layer features of the ith decoder to the output of the ith decoder, W^i_d ∈ R^{m×h_d}, b^i_d ∈ R^{m}; h_d is the number of nodes of the hidden layer of the decoder network;
during training, the loss function is defined as:
L = (1/N) Σ_{n=1}^{N} Σ_{i=1}^{K} p^n_i ||x^n_L - x̂^n_i||^2 + α L_entr1 + β L_entr2
wherein N is the number of sequences used to train the model, α and β are adjustable hyper-parameters, x̂^n_i is the reconstruction of x^n_L output by the ith decoder, and L_entr1 and L_entr2 are information entropy terms computed from the output of the modal identification network; they are used to obtain a more accurate modal identification precision and, at the same time, to prevent the model from collapsing to a single mode and falling into a local optimum during training;
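A minimal PyTorch-style sketch of this loss is given below; the explicit forms of the two entropy terms (a per-sample entropy for L_entr1 and the negative entropy of the batch-averaged weights for L_entr2), the function name and the default values of alpha and beta are assumptions chosen to match their stated purposes, not expressions taken verbatim from the invention:

    import torch

    def mixed_sequence_loss(x_last, recons, p, alpha=0.1, beta=0.1, eps=1e-8):
        # x_last: (N, m) last sample of each sequence; recons: (N, K, m); p: (N, K) modal weights
        err = ((recons - x_last.unsqueeze(1)) ** 2).sum(dim=-1)   # squared error per sub-decoder
        rec_loss = (p * err).sum(dim=1).mean()                    # weighted reconstruction error
        # assumed L_entr1: entropy of each sample's weight vector (smaller -> sharper mode assignment)
        l_entr1 = -(p * (p + eps).log()).sum(dim=1).mean()
        # assumed L_entr2: negative entropy of the batch-averaged weights
        # (smaller -> the average assignment stays spread over the K modes, preventing collapse)
        p_bar = p.mean(dim=0)
        l_entr2 = (p_bar * (p_bar + eps).log()).sum()
        return rec_loss + alpha * l_entr1 + beta * l_entr2

    # usage with the MixedSequenceNetwork sketch above (X_batch: (N, L, m)):
    #   x_hat, recons, p = model(X_batch)
    #   loss = mixed_sequence_loss(X_batch[:, -1], recons, p)
    #   loss.backward(); optimizer.step(); optimizer.zero_grad()

In training, this loss would be minimized over the serialized training sequences with a standard optimizer such as plain gradient descent or Adam.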
s4: whether the one-stage furnace process has a fault is indicated by establishing a detection index based on the weighted squared prediction error; that is, the detection index WSPE is calculated from the training data as
WSPE_n = Σ_{i=1}^{K} p^n_i ||x^n_L - x̂^n_i||^2
and, after obtaining the WSPE values of all the training data, the control limit con_wspe is calculated by a kernel density estimation method;
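The following sketch shows one way to carry out this step, using scipy's gaussian_kde for the kernel density estimate; the 99% confidence level, the evaluation grid and the function names are assumptions:

    import numpy as np
    from scipy.stats import gaussian_kde

    def wspe(x_last, recons, p):
        """WSPE = sum_i p_i * ||x - x_hat_i||^2 for each sample (NumPy arrays)."""
        err = ((recons - x_last[:, None, :]) ** 2).sum(axis=-1)   # (N, K)
        return (p * err).sum(axis=1)                              # (N,)

    def control_limit(wspe_train, confidence=0.99):
        """Estimate the WSPE density with a KDE and return the point below which
        `confidence` of the estimated probability mass lies."""
        kde = gaussian_kde(wspe_train)
        grid = np.linspace(0.0, wspe_train.max() * 2.0, 2000)
        cdf = np.cumsum(kde(grid))
        cdf /= cdf[-1]
        return grid[np.searchsorted(cdf, confidence)]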
s5: an input sequence of the required length L is constructed from the online sample x of the one-stage furnace and the samples at the preceding L-1 moments, and is substituted into the process monitoring model trained in S3 to obtain the reconstructed outputs of the decoders for x and the output p = [p_1 p_2 ... p_K] of the modal identification network; the reconstructed output of the ith sub-decoder is denoted as x̂^(i);
s6: computing the detection index WSPE_o from the online sample and its reconstructed outputs:
WSPE_o = Σ_{i=1}^{K} p_i ||x - x̂^(i)||^2
and comparing this detection index with the control limit con_wspe; when the sample is normal, the model can reconstruct the input well, so when WSPE_o ≤ con_wspe the online sample is a normal sample; when WSPE_o > con_wspe the current sample is regarded as a fault sample and is further subjected to fault identification, i.e. identification of the fault variables: the contribution of each variable to the fault index is calculated from the online sample and the reconstructed value output by each decoder, and the results are then weighted with the weights output by the modal identification network to obtain the final contribution degree of the variable, which indicates the possibility that the variable is faulty. Let the online sample be x = [x_1 x_2 ... x_m] and the reconstructed value of the ith decoder be x̂^(i) = [x̂^(i)_1 x̂^(i)_2 ... x̂^(i)_m];
the contribution index of the jth variable is calculated as:
Cont_j = Σ_{i=1}^{K} p_i (x_j - x̂^(i)_j)^2
s7: regarding the variables with higher contribution degrees as fault variables, as required.
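The online decision and the contribution calculation can be sketched as follows (NumPy arrays; the function name and the return convention are assumptions):

    def detect_and_identify(x, recons, p, con_wspe):
        """x: (m,) online sample; recons: (K, m) sub-decoder reconstructions; p: (K,) modal weights."""
        err = ((recons - x) ** 2).sum(axis=-1)                    # squared error per sub-decoder
        wspe_o = float((p * err).sum())                           # weighted squared prediction error
        if wspe_o <= con_wspe:
            return wspe_o, None                                   # normal sample
        contrib = (p[:, None] * (recons - x) ** 2).sum(axis=0)    # (m,) weighted contribution per variable
        return wspe_o, contrib                                    # larger contribution -> more likely fault variable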
In order to evaluate the fault detection performance of the process monitoring model based on the mixed sequence network, the fault detection rate (FDR) and the false alarm rate (FAR) are calculated from offline data:
FDR = N_fa / N_f × 100%,  FAR = N_na / N_n × 100%
wherein N_f is the number of fault samples, N_n is the number of normal samples, and N_fa and N_na are the numbers of alarmed samples among the fault and normal samples, respectively.
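These two indices follow directly from the alarm counts; a trivial helper (name assumed) would be:

    def fdr_far(n_fault, n_normal, n_fault_alarmed, n_normal_alarmed):
        fdr = n_fault_alarmed / n_fault * 100.0    # detection rate on fault samples, in %
        far = n_normal_alarmed / n_normal * 100.0  # false alarm rate on normal samples, in %
        return fdr, far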
The advantages of the method of the present invention are illustrated below with an experiment on an actual one-stage furnace process. The data in the experiment were acquired on site; by surveying the operation logs and observing the data distribution, normal process data from two modes were extracted for model training, and a group of fault data sets was selected in each mode for the fault detection experiment. The comparison methods include a traditional multi-mode fault detection method, GMM-PCA, and the deep-learning-based methods SAE and MAE. The parameters of all the models in the experiment were determined by tuning them so that the FAR on the normal data set was below 5%. The fault detection rates of the respective methods are shown in the following table.
TABLE 1 Fault detection results of the one-stage furnace process
[Table 1 is an image in the original publication; it lists the fault detection rates of GMM-PCA, SAE, MAE and the mixed sequence network in each mode.]
As can be seen from the results shown in Table 1, the performance of the mixed sequence network is significantly better than that of the other three methods, and its fault detection rate in each mode is higher by more than 10%.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and although the invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that various changes in the form and details of the embodiments may be made and equivalents may be substituted for elements thereof. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.

Claims (4)

1. A method for monitoring a one-stage furnace process based on a mixed sequence network, characterized by comprising the following steps:
s1: constructing a process monitoring model based on a mixed sequence network and performing feature mining on the one-stage furnace process;
the process monitoring model comprises an encoder, a modal identification network and a decoder; the encoder is an RNN and is used to mine the dynamic features of the process; the hidden layer output at the last moment of the encoder is fed to the modal identification network, which passes it through its hidden layer and then through a softmax layer to output a group of weights indicating the mode to which the current sequence sample belongs; the decoder is also connected to the hidden layer output at the last moment of the encoder and is used to reconstruct the sample at the last moment of the input sequence; the decoder consists of several single-layer neural networks (sub-decoders), and the final reconstruction value is obtained by a weighted summation of their outputs with the weights output by the modal identification network;
s2: collecting process data under normal working conditions of the one-stage furnace process, constructing a data set, setting a sequence length L, serializing the data set, and taking the obtained sequence data set as the training data set X for model training; the nth input sequence is X^n = [x^n_1 x^n_2 ... x^n_L];
S3: inputting a training data set X into a process monitoring model based on a mixed sequence network, carrying out forward propagation to obtain a reconstructed value, and minimizing a loss function by an iterative method until model parameters are converged or the maximum iteration times is reached to obtain a trained process monitoring model;
s4: calculating the detection index WSPE using the training data, and calculating the control limit con_wspe using a kernel density estimation method;
s5: an input sequence of the required length L is constructed from the online sample x of the one-stage furnace and the samples at the preceding L-1 moments, and is substituted into the process monitoring model trained in S3 to obtain the reconstructed outputs of the decoders for x and the output p = [p_1 p_2 ... p_K] of the modal identification network; the reconstructed output of the ith sub-decoder is denoted as x̂^(i);
s6: computing the detection index WSPE_o from the online sample and its reconstructed outputs and comparing it with the control limit con_wspe; when WSPE_o ≤ con_wspe, the online sample is a normal sample; when WSPE_o > con_wspe, the current sample is regarded as a fault sample and is further subjected to fault identification; let the online sample be x = [x_1 x_2 ... x_m] and the reconstructed value of the ith decoder be x̂^(i) = [x̂^(i)_1 x̂^(i)_2 ... x̂^(i)_m];
the contribution index of the jth variable is calculated as:
Cont_j = Σ_{i=1}^{K} p_i (x_j - x̂^(i)_j)^2
s7: regarding the variables with higher contribution degrees as fault variables, as required.
2. The method for monitoring a one-stage furnace process based on a mixed sequence network as claimed in claim 1, wherein S3 is implemented by the following sub-steps:
(1) substituting X^n into the process monitoring model constructed in S1, the forward propagation of X^n gives the hidden layer output of the RNN:
h^e_t = f(U_e x^n_t + W_e h^e_{t-1})
wherein U_e represents the weights that map the input to the hidden layer features of the RNN, U_e ∈ R^{h_e×m}, m being the number of variables of an input sample; W_e represents the weights that map the hidden layer output at time t-1 of the RNN to the hidden layer output at time t, W_e ∈ R^{h_e×h_e}, h_e being the number of nodes of the RNN hidden layer; h^e_t and h^e_{t-1} represent the hidden layer outputs at time t and time t-1 respectively; x^n_t represents the input sample at time t, and f(·) represents the nonlinear activation function of the RNN;
(2) X^n is passed through L forward mappings to obtain the feature output h^e_L of the RNN; the modal identification weights are obtained from this feature output by the forward mapping of the modal identification network:
p^n = softmax(W_p g(W_m h^e_L + b_m) + b_p), denoted p^n = [p^n_1 p^n_2 ... p^n_K]
K being the number of decoders and g(·) the nonlinear activation of the hidden layer of the modal identification network; h_m is the number of nodes of the hidden layer of the modal identification network; W_m and b_m respectively represent the weights and biases that map the input to the hidden layer features of the modal identification network, W_m ∈ R^{h_m×h_e}, b_m ∈ R^{h_m}; W_p and b_p respectively represent the weights and biases that map the hidden layer features of the modal identification network to its output, W_p ∈ R^{K×h_m}, b_p ∈ R^{K};
(3) the feature output of the RNN is mapped by the ith decoder to obtain the reconstruction
x̂^n_i = σ(W^i_d σ(U^i_d h^e_L + c^i_d) + b^i_d)
wherein σ(·) is a nonlinear activation function; U^i_d and c^i_d respectively represent the weights and biases that map the input to the hidden layer features of the ith decoder, U^i_d ∈ R^{h_d×h_e}, c^i_d ∈ R^{h_d}; W^i_d and b^i_d respectively represent the weights and biases that map the hidden layer features of the ith decoder to the output of the ith decoder, W^i_d ∈ R^{m×h_d}, b^i_d ∈ R^{m}; h_d is the number of nodes of the hidden layer of the decoder network;
during training, the loss function is defined as:
L = (1/N) Σ_{n=1}^{N} Σ_{i=1}^{K} p^n_i ||x^n_L - x̂^n_i||^2 + α L_entr1 + β L_entr2
wherein N is the number of sequences used to train the model, α and β are adjustable hyper-parameters, x̂^n_i is the reconstruction of x^n_L output by the ith decoder, and L_entr1 and L_entr2 are information entropy terms computed from the output of the modal identification network; they are used to obtain a more accurate modal identification precision and, at the same time, to prevent the model from collapsing to a single mode and falling into a local optimum during training.
3. The method for monitoring a one-stage furnace process based on a mixed sequence network as claimed in claim 1, wherein the detection index WSPE of S4 is calculated as:
WSPE_n = Σ_{i=1}^{K} p^n_i ||x^n_L - x̂^n_i||^2
and the detection index WSPE_o of S6 as:
WSPE_o = Σ_{i=1}^{K} p_i ||x - x̂^(i)||^2
4. The method for monitoring a one-stage furnace process based on a mixed sequence network as claimed in claim 1, wherein in S3 the loss function is minimized by a gradient descent method.
CN202111348802.8A 2021-11-15 2021-11-15 Method for monitoring one-stage furnace process based on mixed sequence network Pending CN114021469A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111348802.8A CN114021469A (en) 2021-11-15 2021-11-15 Method for monitoring one-stage furnace process based on mixed sequence network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111348802.8A CN114021469A (en) 2021-11-15 2021-11-15 Method for monitoring one-stage furnace process based on mixed sequence network

Publications (1)

Publication Number Publication Date
CN114021469A true CN114021469A (en) 2022-02-08

Family

ID=80064227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111348802.8A Pending CN114021469A (en) 2021-11-15 2021-11-15 Method for monitoring one-stage furnace process based on mixed sequence network

Country Status (1)

Country Link
CN (1) CN114021469A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114854920A (en) * 2022-05-06 2022-08-05 浙江大学 Blast furnace abnormity monitoring method of GRU self-encoder embedded with Gaussian mixture model


Similar Documents

Publication Publication Date Title
CN108875771B (en) Fault classification model and method based on sparse Gaussian Bernoulli limited Boltzmann machine and recurrent neural network
CN111079836B (en) Process data fault classification method based on pseudo label method and weak supervised learning
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN111046961B (en) Fault classification method based on bidirectional long-time and short-time memory unit and capsule network
Pan et al. Imputation of missing values in time series using an adaptive-learned median-filled deep autoencoder
CN113011085A (en) Equipment digital twin modeling method and system
CN114254695B (en) Spacecraft telemetry data self-adaptive anomaly detection method and device
CN111598187A (en) Progressive integrated classification method based on kernel width learning system
CN115510975A (en) Multivariable time sequence abnormality detection method and system based on parallel Transomer-GRU
CN115561005A (en) Chemical process fault diagnosis method based on EEMD decomposition and lightweight neural network
CN114021469A (en) Method for monitoring one-stage furnace process based on mixed sequence network
CN113836783B (en) Digital regression model modeling method for main beam temperature-induced deflection monitoring reference value of cable-stayed bridge
CN114897103A (en) Industrial process fault diagnosis method based on neighbor component loss optimization multi-scale convolutional neural network
He et al. A faster dynamic feature extractor and its application to industrial quality prediction
Zhou et al. A novel algorithm system for wind power prediction based on RANSAC data screening and Seq2Seq-Attention-BiGRU model
CN116894180B (en) Product manufacturing quality prediction method based on different composition attention network
Yao et al. Model-based deep transfer learning method to fault detection and diagnosis in nuclear power plants
CN114169091A (en) Method for establishing prediction model of residual life of engineering mechanical part and prediction method
CN112731890A (en) Power plant equipment fault detection method and device
CN112232570A (en) Forward active total electric quantity prediction method and device and readable storage medium
CN117034808A (en) Natural gas pipe network pressure estimation method based on graph attention network
CN115963788A (en) Multi-sampling-rate industrial process key quality index online prediction method
CN115630582A (en) Multi-sliding-window model fused soft rock tunnel surrounding rock deformation prediction method and equipment
CN112418267B (en) Motor fault diagnosis method based on multi-scale visual view and deep learning
CN115362454A (en) Layered machine learning method for industrial plant machine learning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination