CN108416439B - Oil refining process product prediction method and system based on variable weighted deep learning - Google Patents


Info

Publication number: CN108416439B (application number CN201810136589.6A)
Authority: CN (China)
Legal status: Active (granted)
Other versions: CN108416439A (Chinese-language application publication)
Inventors: 袁小锋, 王雅琳, 阳春华, 桂卫华
Original and current assignee: Central South University
Application filed by Central South University

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G06N 3/084 — Backpropagation, e.g. using gradient descent
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks

Abstract

The invention provides an oil refining process product prediction method and system based on variable weighted deep learning. The method comprises the following steps: acquiring process variables in the debutanizer process, and, based on a trained deep learning model, taking the process variables as the input of the deep learning model to obtain a product quality predicted value. When the deep learning model is trained, in every two adjacent variable weighted self-encoders, the hidden layer feature data of the preceding variable weighted self-encoder is used as the input variable of the following variable weighted self-encoder, and the following variable weighted self-encoder is trained. A plurality of variable weighted self-encoders are stacked to form a deep network model, so that output-related features from a low level to a high level can be obtained step by step, the quality-index-related features are enhanced, and an accurate predicted value is provided for the product quality. The method has the advantages of high prediction precision, good generalization and the like.

Description

Oil refining process product prediction method and system based on variable weighted deep learning
Technical Field
The invention relates to the technical field of chemical industry, in particular to an oil refining process product prediction method and system based on variable weighted deep learning.
Background
Oil refining production is a complex process industry with multiple raw materials, multiple devices, multiple processes and multiple products. In oil refining production, crude oil from different oil fields is blended uniformly and then refined by primary processing equipment such as primary distillation towers and atmospheric and vacuum distillation towers, secondary processing equipment such as catalytic cracking, hydrocracking and delayed coking units, and tertiary processing equipment such as catalytic hydrogenation, catalytic reforming and hydrofining units, finally yielding petroleum products such as multi-grade gasoline, diesel oil, aviation kerosene and fuel oil, intermediate petrochemical products such as benzene, toluene, mixed xylene, o-xylene, propylene and polypropylene, and other products such as liquefied gas, petroleum coke, asphalt and urea. Actual oil refining production in China suffers from long process flows, complex connection relations, numerous reaction devices and operation variables, large fluctuations in crude oil variety and quality, and frequent adjustment of processing schemes; production indexes exhibit complex distributions across the time, space and system-level dimensions, and the real-time effect of global control on optimizing operation performance is difficult to grasp in time. To realize real-time control and optimization of oil refining production, real-time product quality detection of the production process is needed. However, because of harsh measurement environments, expensive measurement instruments, measurement hysteresis and the like, the quality of key products in the oil refining production process is difficult to measure in real time.
For this reason, soft measurement techniques allow real-time prediction and estimation of key product quality by building mathematical models between easily measurable process variables and difficult-to-detect quality variables. As the oil refining production process grows more complicated and larger in scale, soft measurement methods based on mechanism analysis or traditional data models cannot accurately describe the characteristics and state changes of the complex process, so real-time product quality analysis and detection precision is low, which in turn causes errors in process control and optimization decisions. With the development of computers and information technology, the oil refining production process has accumulated a great deal of operation data, which contains rich production process information and knowledge. To exploit it, deep learning models learn, through a multi-layer neural network structure, feature extraction from low-level concrete features to high-level abstract features from the process data, and are used for output quality prediction modeling. However, existing deep learning models only focus on the feature representation of the process data and ignore feature extraction related to the output quality index; they cannot ensure the correlation between the extracted features and the quality index, so satisfactory prediction precision cannot be obtained.
Disclosure of Invention
The invention provides an oil refining process product prediction method and system based on variable weighted deep learning, which overcome, or at least partially solve, the problem in the prior art that a deep learning model only focuses on the feature representation of process data and ignores feature extraction related to the output quality index, so that the correlation between the extracted features and the quality index cannot be ensured.
According to an aspect of the present invention, there is provided a method for predicting product quality in an oil refinery process, comprising:
acquiring a process variable in an oil refining production process, and taking the process variable as the input of a deep learning model based on a trained deep learning model to obtain a product quality predicted value;
when the deep learning model is trained, in every two adjacent variable weighted self-encoders, the hidden layer feature data of the preceding variable weighted self-encoder is used as the input variable of the following variable weighted self-encoder, and the following variable weighted self-encoder is trained.
Preferably, the deep learning model comprises a first variable weighted self-encoder, a second variable weighted self-encoder and a third variable weighted self-encoder. When the deep learning model is trained, the input variable of the first variable weighted self-encoder is a process variable in the oil refining production process; the input variable of the second variable weighted self-encoder is the hidden layer feature data of the trained first variable weighted self-encoder; and the input variable of the third variable weighted self-encoder is the hidden layer feature data of the trained second variable weighted self-encoder.
Preferably, before obtaining the process variable in the oil refining production process, training a deep learning model is further included:
acquiring a process variable and a quality variable in oil refining production historical data, taking the process variable as an input variable, calculating a first correlation coefficient of the input variable and the quality variable, establishing a first weighted objective function based on the first correlation coefficient, and training a first variable weighted self-encoder;
taking the trained hidden layer feature data of the first variable weighted self-encoder as an input variable, calculating a second correlation coefficient of the input variable and the quality variable, establishing a second weighted objective function based on the second correlation coefficient, and training the second variable weighted self-encoder;
taking the hidden layer feature data of the trained second variable weighted self-encoder as an input variable, calculating a third correlation coefficient of the input variable and the quality variable, establishing a third weighted objective function based on the third correlation coefficient, and training the third variable weighted self-encoder;
and calculating the characteristic data of the hidden layer of the third variable weighted self-encoder, and connecting the hidden layer of the third variable weighted self-encoder with the final output layer of the deep learning model.
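The layer-wise procedure above can be sketched end to end. The snippet below is a simplified illustration only, not the patent's implementation: the data are synthetic, the decoder is linear, and a fixed number of plain gradient-descent steps stands in for full backpropagation training of each variable weighted self-encoder.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corr_weights(H, y):
    """|Pearson correlation| of each input column with the quality variable y."""
    Hc, yc = H - H.mean(axis=0), y - y.mean()
    rho = (Hc * yc[:, None]).sum(axis=0) / np.sqrt((Hc**2).sum(axis=0) * (yc**2).sum())
    return np.abs(rho)

def train_vw_autoencoder(H, y, n_hidden, lr=0.1, epochs=300, seed=0):
    """Train one variable weighted self-encoder by gradient descent on the
    correlation-weighted reconstruction error, then return its hidden features."""
    rng = np.random.default_rng(seed)
    N, d = H.shape
    delta = corr_weights(H, y)                   # diagonal of the weighting matrix
    W, b = rng.normal(scale=0.1, size=(n_hidden, d)), np.zeros(n_hidden)
    Wd, bd = rng.normal(scale=0.1, size=(d, n_hidden)), np.zeros(d)
    for _ in range(epochs):
        Hh = sigmoid(H @ W.T + b)                # encoder: hidden features
        Hr = Hh @ Wd.T + bd                      # decoder: reconstruction
        E = (Hr - H) * delta                     # weighted reconstruction error
        G = (E @ Wd) * Hh * (1.0 - Hh)           # backprop through the sigmoid
        Wd -= lr * 2.0 / N * E.T @ Hh
        bd -= lr * 2.0 / N * E.sum(axis=0)
        W -= lr * 2.0 / N * G.T @ H
        b -= lr * 2.0 / N * G.sum(axis=0)
    return sigmoid(H @ W.T + b)

rng = np.random.default_rng(3)
X = rng.random((200, 13))                        # synthetic process variables
y = 0.8 * X[:, 0] + 0.1                          # synthetic quality variable
h1 = train_vw_autoencoder(X, y, 10)              # first VW self-encoder
h2 = train_vw_autoencoder(h1, y, 7)              # second, fed with h1
h3 = train_vw_autoencoder(h2, y, 4)              # third, fed with h2
```

Each call freezes the previous encoder and trains the next on its hidden features, matching the greedy stacking order of the claims; a final output layer would then be attached to the last hidden features and the whole stack fine-tuned.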
Preferably, the obtaining of the quality variable and the process variable in the oil refining production process specifically includes:
acquiring process variables x_{i,j} and quality variables y_i in a set time period, where i = 1, 2, …, N and j = 1, 2, …, m; taking the process variables and the quality variable as sample data, and normalizing them:

x̃_{i,j} = (x_{i,j} − x_{min,j}) / (x_{max,j} − x_{min,j}),  ỹ_i = (y_i − y_min) / (y_max − y_min)

in the formula, x_{min,j} and x_{max,j} respectively represent the minimum and maximum values of the j-th process variable, and y_min and y_max respectively represent the minimum and maximum values of the quality variable.
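The min-max scaling above amounts to a few lines of NumPy (a sketch; the array values are illustrative):

```python
import numpy as np

def min_max_normalize(X, y):
    """Scale each process-variable column and the quality variable to [0, 1].
    Assumes no column is constant (otherwise the denominator is zero)."""
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    y_min, y_max = y.min(), y.max()
    return (X - X_min) / (X_max - X_min), (y - y_min) / (y_max - y_min)

# N = 5 samples, m = 3 process variables
X = np.array([[1.0, 10.0, 5.0],
              [2.0, 20.0, 6.0],
              [3.0, 30.0, 7.0],
              [4.0, 40.0, 8.0],
              [5.0, 50.0, 9.0]])
y = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
X_norm, y_norm = min_max_normalize(X, y)
```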
Preferably, the training of the first variable weighted self-encoder specifically includes: training the first variable weighted self-encoder under the first weighted objective function through a back propagation algorithm to obtain the parameters and hidden layer feature data of the first variable weighted self-encoder;
training the second variable weighted self-encoder specifically comprises: training the second variable weighted self-encoder under the second weighted objective function through a back propagation algorithm to obtain the parameters and hidden layer feature data of the second variable weighted self-encoder;
training the third variable weighted self-encoder specifically comprises: training the third variable weighted self-encoder under the third weighted objective function through a back propagation algorithm to obtain the parameters and hidden layer feature data of the third variable weighted self-encoder.
Preferably, the training of the third variable weighted auto-encoder further includes:
and constructing a convergence target function based on the quality variable output by the final output layer, and adjusting the deep learning model through the convergence target function until a preset convergence condition is met.
Preferably, the convergence objective function is:

J(θ) = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²

in the formula, N is the number of quality variable samples output by the final output layer, y_i is the quality variable in the oil refining production historical data, and ŷ_i is the quality variable output by the final output layer.
A product quality prediction system in an oil refining production process comprises:
the data acquisition module is used for acquiring process variables needing to be predicted in the oil refining production process;
and the prediction module is used for training a deep learning model and predicting the product quality of the oil refining production process; the deep learning model comprises at least three variable weighted self-encoders, and when the deep learning model is trained, in every two adjacent variable weighted self-encoders, the hidden layer feature data of the preceding variable weighted self-encoder is used as the input variable of the following variable weighted self-encoder, and the following variable weighted self-encoder is trained.
An oil refining production process product quality prediction device, comprising:
at least one processor, at least one memory, a communication interface, and a bus; wherein
the processor, the memory and the communication interface communicate with one another through the bus;
the communication interface is used for information transmission between the prediction device and external communication equipment;
the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the product quality prediction method of the oil refining production process.
A computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the oil refining production process product quality prediction method as described above.
The invention provides an oil refining process product prediction method and system based on variable weighted deep learning, which can extract features related to output prediction from process variables through a variable weighted self-encoder. A plurality of variable weighted self-encoders are stacked to form a deep network model, so that output-related features from a low level to a high level can be obtained step by step, the quality-index-related features are enhanced, and an accurate predicted value is provided for the product quality. The method has the advantages of high prediction precision, good generalization and the like.
Drawings
FIG. 1 is a schematic diagram of a method for predicting product quality in an oil refinery process according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a deep learning model for predicting butane concentration in oil refinery production according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the convergence speed results of three neural networks according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
As shown in fig. 1, a method for predicting the product quality of an oil refining production process based on variable weighted deep learning comprises:
acquiring a process variable in an oil refining production process, and taking the process variable as the input of a deep learning model based on a trained deep learning model to obtain a product quality predicted value;
when the deep learning model is trained, in every two adjacent variable weighted self-encoders, the hidden layer feature data of the preceding variable weighted self-encoder is used as the input variable of the following variable weighted self-encoder, and the following variable weighted self-encoder is trained.
Specifically, in this embodiment, the deep learning model includes a first variable weighted self-encoder, a second variable weighted self-encoder, and a third variable weighted self-encoder. In the process of training the deep learning model, the input variable of the first variable weighted self-encoder is a process variable in the oil refining production process, and the output variable is a quality variable in the oil refining production process; the input variable of the second variable weighted self-encoder is the hidden layer feature data of the trained first variable weighted self-encoder, and the output variable is the quality variable in the oil refining production process. In this embodiment, each self-encoder (the first, second or third variable weighted self-encoder) calculates, during training, a correlation coefficient between its input variables and the output variable, and establishes a corresponding weighted objective function based on the correlation coefficient, so that the self-encoder is trained under that weighted objective function. Stacking a plurality of variable weighted self-encoders into a deep learning model makes it possible to obtain output-related features from a low level to a high level step by step, enhances the quality-index-related features, and provides an accurate predicted value for the product quality.
In this embodiment, a deep learning model is constructed based on a variable weighted stacked self-encoder. Specifically, the network structure of the deep learning model is m → m1 → m2 → m3 → 1, where m is the number of neurons in the input layer of the whole network; m1, m2 and m3 are the numbers of hidden layer neurons of the first, second and third variable weighted self-encoders respectively; and 1 is the number of neurons in the final quality variable output layer. The weight coefficients and bias terms from the input layer of each self-encoder to its hidden layer are denoted {W_1, b_1}, {W_2, b_2} and {W_3, b_3}, with corresponding activation functions f_1, f_2 and f_3. Likewise, the weight coefficients and bias terms from the hidden layer of each self-encoder to its output layer (the reconstructed input layer) are {W'_1, b'_1}, {W'_2, b'_2} and {W'_3, b'_3}, with corresponding activation functions f'_1, f'_2 and f'_3. The weight coefficients and bias term from the third self-encoder hidden layer to the quality variable output layer are denoted {W, b}, with corresponding activation function f.
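For concreteness, a single forward pass through such a stack can be sketched as follows; the weights are random placeholders rather than trained parameters, and the layer sizes are illustrative values borrowed from the embodiment described later:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
m, m1, m2, m3 = 13, 10, 7, 4                      # structure m -> m1 -> m2 -> m3 -> 1
W1, b1 = rng.normal(size=(m1, m)), np.zeros(m1)   # first encoder {W1, b1}
W2, b2 = rng.normal(size=(m2, m1)), np.zeros(m2)  # second encoder {W2, b2}
W3, b3 = rng.normal(size=(m3, m2)), np.zeros(m3)  # third encoder {W3, b3}
W, b = rng.normal(size=(1, m3)), np.zeros(1)      # output layer {W, b}

def forward(x):
    h1 = sigmoid(W1 @ x + b1)   # h(1): first hidden features
    h2 = sigmoid(W2 @ h1 + b2)  # h(2): second hidden features
    h3 = sigmoid(W3 @ h2 + b3)  # h(3): third hidden features
    return sigmoid(W @ h3 + b)  # quality variable prediction

y_hat = forward(rng.random(m))
```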
In this embodiment, before obtaining the process variable in the oil refining production process, the deep learning model is trained as follows:
process variables and quality variables in a set time period are acquired; the quality variable is denoted y, and the m process variables are denoted x ∈ R^m. Assume that N sample pairs of process variables and product quality variables in a certain production period are obtained through data acquisition and analysis, denoted x_i and y_i respectively, where x_i = [x_{i,1}, x_{i,2}, …, x_{i,m}]^T, i = 1, 2, …, N. The process variables and the quality variable are taken as sample data and normalized:

x̃_{i,j} = (x_{i,j} − x_{min,j}) / (x_{max,j} − x_{min,j}),  ỹ_i = (y_i − y_min) / (y_max − y_min)

in the formula, x_{min,j} and x_{max,j} respectively represent the minimum and maximum values of the j-th process variable, and y_min and y_max respectively represent the minimum and maximum values of the quality variable.
The training sample input data x_i, i = 1, 2, …, N, is taken as the input of the first variable weighted self-encoder. The hidden layer data of the first variable weighted self-encoder is denoted h_i^(1), the reconstructed data of the corresponding output layer is denoted x̂_i, and the network parameter set of the first variable weighted self-encoder is denoted θ_1 = {W_1, b_1, W'_1, b'_1}. The relationships between the network layers can thus be expressed as:

h_i^(1) = f_1(W_1 x_i + b_1)
x̂_i = f'_1(W'_1 h_i^(1) + b'_1)

so the output layer of the first variable weighted self-encoder can be expressed as a function of the input layer:

x̂_i = f'_1(W'_1 f_1(W_1 x_i + b_1) + b'_1)
Taking the process variables as input variables, first correlation coefficients between the input variables and the quality variable are calculated. In order to extract output-related features from the original input data, a first weighted objective function is proposed in this embodiment to train the network. First, the correlation coefficient ρ^(j) between each input variable x^(j) and the quality variable y is calculated:

ρ^(j) = [ Σ_{i=1}^{N} (x_i^(j) − x̄^(j))(y_i − ȳ) ] / sqrt[ Σ_{i=1}^{N} (x_i^(j) − x̄^(j))² · Σ_{i=1}^{N} (y_i − ȳ)² ]

where x̄^(j) and ȳ are the means of x^(j) and y. A larger value of |ρ^(j)| indicates that the variable has a stronger correlation with the quality variable, so the learned features should better reconstruct the data of that variable dimension, and vice versa. For this purpose, in this embodiment, the absolute value of the first correlation coefficient is used as the weight coefficient of each input variable to construct the first variable weighted objective function:

J_1(θ_1) = (1/N) Σ_{i=1}^{N} (x_i − x̂_i)^T Δ_1 (x_i − x̂_i)

in the formula, Δ_1 is the m × m diagonal matrix formed by the elements |ρ^(1)|, |ρ^(2)|, …, |ρ^(m)|. The first variable weighted self-encoder is trained by a back propagation algorithm to obtain its network parameters θ_1 = {W_1, b_1, W'_1, b'_1}, and the hidden layer feature data h_i^(1) of the first variable weighted self-encoder is calculated.
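The weighting just described — per-variable Pearson correlations collected into the diagonal matrix Δ_1 and applied to the reconstruction error — can be sketched as follows; the data and the crude `X + 0.01` stand-in for a reconstruction are illustrative only:

```python
import numpy as np

def correlation_weights(X, y):
    """|rho(j)| for each input column against the quality variable y."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    rho = (Xc * yc[:, None]).sum(axis=0) / np.sqrt((Xc**2).sum(axis=0) * (yc**2).sum())
    return np.abs(rho)

def weighted_reconstruction_loss(X, X_hat, delta):
    """J1 = (1/N) sum_i (x_i - x_hat_i)^T Delta_1 (x_i - x_hat_i)."""
    E = X - X_hat
    return float(np.mean(np.sum(E * E * delta, axis=1)))

rng = np.random.default_rng(1)
X = rng.random((50, 4))
y = 2.0 * X[:, 0] + 0.1 * rng.random(50)   # quality driven mainly by column 0
delta = correlation_weights(X, y)          # diagonal of Delta_1
loss = weighted_reconstruction_loss(X, X + 0.01, delta)
```

Because column 0 dominates the quality variable, its weight comes out close to 1 and reconstruction errors in that dimension are penalized most, which is exactly the mechanism that steers the learned features toward output-relevant directions.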
In this embodiment, the hidden layer feature data h_i^(1) of the first variable weighted self-encoder is taken as the input data of the second variable weighted self-encoder. The hidden layer feature data of the second variable weighted self-encoder is denoted h_i^(2); the corresponding output layer reconstructs the input layer, with reconstructed data denoted ĥ_i^(1); and the parameters of the second variable weighted self-encoder are denoted θ_2 = {W_2, b_2, W'_2, b'_2}.
The second variable weighted self-encoder is trained in a manner similar to the first. The hidden layer feature data h_i^(1) is used as its input variable, the second correlation coefficients ρ_2^(j) between the input variables of the second variable weighted self-encoder and the quality variable are calculated, and the absolute values of the second correlation coefficients are taken as the weight coefficients of the input variables to construct the second variable weighted objective function:

J_2(θ_2) = (1/N) Σ_{i=1}^{N} (h_i^(1) − ĥ_i^(1))^T Δ_2 (h_i^(1) − ĥ_i^(1))

where Δ_2 is the m1 × m1 diagonal matrix formed in turn by the elements |ρ_2^(1)|, …, |ρ_2^(m1)|; {W_2, b_2} are the weight coefficients and bias terms from the input layer of the second variable weighted self-encoder to its hidden layer, and {W'_2, b'_2} are the weight coefficients and bias terms from its hidden layer to its output layer.
The second variable weighted self-encoder is trained by a back propagation algorithm to obtain its network parameters θ_2 = {W_2, b_2, W'_2, b'_2}, and the hidden layer feature data h_i^(2) of the second variable weighted self-encoder is calculated.
In this embodiment, the third variable weighted self-encoder is trained in the same manner as the first and second variable weighted self-encoders. After the training of the second variable weighted self-encoder is completed, its hidden layer feature data h_i^(2) is calculated and taken as the input variable of the third variable weighted self-encoder. The hidden layer feature data of the third variable weighted self-encoder is denoted h_i^(3), and the corresponding output layer reconstructs the input layer, with reconstructed data denoted ĥ_i^(2). Third correlation coefficients between the input variables of the third variable weighted self-encoder and the quality variable are calculated, their absolute values are taken as the weight coefficients of the input variables to construct the third variable weighted objective function, and the third variable weighted self-encoder is trained to obtain its network parameters θ_3 = {W_3, b_3, W'_3, b'_3}; the hidden layer feature data h_i^(3) of the third variable weighted self-encoder is then calculated.
In this embodiment, after the pre-training of the third variable weighted self-encoder is completed, the final output layer is connected to its hidden layer, and the output layer is fitted to the quality variable data y_i (i = 1, 2, …, N). The parameters θ = {W_1, b_1, W_2, b_2, W_3, b_3, W, b} of the whole stacked self-encoder deep learning model are fine-tuned by the following objective function until a convergence condition is satisfied:

J(θ) = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²

where ŷ_i is the estimated value of the output quality index obtained by the network forward pass.
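The fine-tuning stage minimizes the standard mean-squared-error objective J(θ) = (1/N) Σ (y_i − ŷ_i)² by backpropagation over the whole stack. As a minimal, assumption-laden illustration of that objective, the snippet below fits only a linear output layer on fixed synthetic "third hidden layer" features by plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m3 = 100, 4
H3 = rng.random((N, m3))                       # stand-in for pretrained h(3) features
y = H3 @ np.array([0.3, -0.2, 0.5, 0.1]) + 0.05

W, b, lr = np.zeros(m3), 0.0, 0.2

def mse(W, b):
    """J = (1/N) * sum_i (y_i - y_hat_i)^2 with y_hat = H3 @ W + b."""
    return float(np.mean((y - (H3 @ W + b)) ** 2))

loss_before = mse(W, b)
for _ in range(500):                           # gradient descent on J
    err = (H3 @ W + b) - y
    W -= lr * 2.0 / N * H3.T @ err
    b -= lr * 2.0 / N * err.sum()
loss_after = mse(W, b)
```

In the patent's full model the gradient additionally flows through the three sigmoid encoder layers, updating all of {W_1, b_1, …, W, b} jointly.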
Of course, in this embodiment the case of more variable weighted self-encoders also exists. When there are k variable weighted self-encoders, a variable weighted deep learning model with more levels is constructed, with network structure m → m1 → m2 → … → mk → 1, where m is the number of neurons in the input layer of the whole network and mk is the number of hidden layer neurons of the k-th variable weighted self-encoder. The variable weighted objective function of the k-th variable weighted self-encoder is:

J_k(θ_k) = (1/N) Σ_{i=1}^{N} (h_i^(k−1) − ĥ_i^(k−1))^T Δ_k (h_i^(k−1) − ĥ_i^(k−1))

in the formula, k denotes the k-th variable weighted self-encoder in the deep learning model, with h_i^(0) = x_i; the weight coefficients and bias terms from its input layer to its hidden layer are {W_k, b_k}, with activation function f_k, and the weight coefficients and bias terms from its hidden layer to its output layer (the reconstructed input layer) are {W'_k, b'_k}, with activation function f'_k.
This embodiment also shows the oil refining production process product quality prediction method applied to product quality prediction in the debutanizer process, comprising the following steps:
Based on the requirements of the production process, the butane concentration at the bottom of the debutanizer is selected as the output variable y, and through mechanism analysis the 13 process variables with larger influence on the butane concentration are selected from the process as input variables of the deep learning model, denoted x^(1), x^(2), …, x^(13). 1000 data samples of the debutanizer process were extracted as training data, and all variables were normalized.
A network structure of the variable weighted stacked self-encoder deep learning model is constructed; the network structure in this embodiment is 13 → 10 → 7 → 4 → 1. That is, the original 13-dimensional process variables are reduced in turn to 10, 7 and 4 dimensions through three self-encoders and finally connected to the one-dimensional output quality variable y. The weight coefficients and bias terms from the input layer of each self-encoder to its hidden layer are denoted {W_1, b_1}, {W_2, b_2} and {W_3, b_3}, with corresponding activation functions f_1, f_2 and f_3, all Sigmoid functions. Likewise, the weight coefficients and bias terms from the hidden layer of each self-encoder to its output layer (the reconstructed input layer) are {W'_1, b'_1}, {W'_2, b'_2} and {W'_3, b'_3}, with corresponding activation functions f'_1, f'_2 and f'_3, also Sigmoid functions. The weight coefficients and bias term from the third self-encoder hidden layer to the quality variable output layer are denoted {W, b}, with corresponding activation function f, also a Sigmoid function.
The training sample input data x_i, i = 1, 2, …, 1000, is taken as the input of the first variable weighted self-encoder; its hidden layer data is denoted h_i^(1), the reconstructed data of the corresponding output layer is denoted x̂_i, and the network parameter set is denoted θ_1 = {W_1, b_1, W'_1, b'_1}. The relationships between the network layers can thus be expressed as:

h_i^(1) = f_1(W_1 x_i + b_1)
x̂_i = f'_1(W'_1 h_i^(1) + b'_1)

so the output layer of the self-encoder can be expressed as a function of the input layer:

x̂_i = f'_1(W'_1 f_1(W_1 x_i + b_1) + b'_1)
Pre-training of this self-encoder network can be completed using the training data. In order to extract features related to the process output from the raw input data, a variable weighted objective function is proposed in this embodiment to train the network. First, the first correlation coefficient ρ^(j) between each input variable x^(j) and the quality variable y is calculated:

ρ^(j) = [ Σ_i (x_i^(j) − x̄^(j))(y_i − ȳ) ] / sqrt[ Σ_i (x_i^(j) − x̄^(j))² · Σ_i (y_i − ȳ)² ]

where x̄^(j) and ȳ are the means of x^(j) and y. A larger value of |ρ^(j)| indicates that the variable has a stronger correlation with the quality variable, so the learned features should better reconstruct the data of that variable dimension, and vice versa. For this purpose, the first weighted objective function is proposed as follows:

J_1(θ_1) = (1/1000) Σ_{i=1}^{1000} (x_i − x̂_i)^T Δ_1 (x_i − x̂_i)

where Δ_1 is the 13 × 13 diagonal matrix formed by the elements |ρ^(1)|, …, |ρ^(13)|. The first variable weighted self-encoder under this objective function can be trained by a back propagation algorithm to obtain the network parameters θ_1 = {W_1, b_1, W'_1, b'_1}, and the output-related first hidden layer feature data h_i^(1) is calculated.
The first hidden layer feature data h_i^(1) of the first variable weighted self-encoder is taken as the input data of the second variable weighted self-encoder. The second hidden layer feature data of the second variable weighted self-encoder is denoted h_i^(2); the corresponding output layer reconstructs the input layer, with reconstructed data denoted ĥ_i^(1); and the parameters of the second variable weighted self-encoder are denoted θ_2 = {W_2, b_2, W'_2, b'_2}.
The second variable-weighted self-encoder is trained in a manner similar to the first variable-weighted self-encoder. First, the second correlation coefficient $r_j^{(2)}$ between each variable of the input layer of the second variable-weighted self-encoder and the quality variable $y$ is calculated, and the second weighted objective function is constructed as follows:

$$J_2(\theta_2)=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{h}_i^{(1)}-h_i^{(1)}\right)^{\mathrm T}\Delta_2\left(\hat{h}_i^{(1)}-h_i^{(1)}\right)$$

where $\Delta_2$ is a $10\times 10$ diagonal matrix whose diagonal elements are $|r_j^{(2)}|$ in sequence. Similarly, the second variable-weighted self-encoder under the second weighted objective function can be trained with the back-propagation algorithm to obtain the network parameters $\theta_2=\{W_2,b_2,W'_2,b'_2\}$, and the second hidden-layer feature data $h_i^{(2)}$ of the second hidden layer output of the whole network are then computed.
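Pre-training of one variable-weighted self-encoder layer can be sketched as below. This is a simplified illustration assuming sigmoid units, plain batch gradient descent, and min–max-normalized inputs in [0, 1]; the patent only specifies that back-propagation is used, so these details are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_vw_autoencoder(X, w, hidden, epochs=200, lr=0.1, seed=0):
    """Pre-train one variable-weighted autoencoder layer by gradient
    descent on the weighted reconstruction loss
        J = (1/N) * sum_i sum_j w_j * (xhat_ij - x_ij)^2,
    where w_j is the (absolute) correlation weight of variable j.
    Minimal sketch; architecture details are assumptions."""
    rng = np.random.RandomState(seed)
    N, m = X.shape
    W1 = rng.randn(m, hidden) * 0.1; b1 = np.zeros(hidden)
    W2 = rng.randn(hidden, m) * 0.1; b2 = np.zeros(m)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)            # hidden features
        Xhat = sigmoid(H @ W2 + b2)         # reconstruction
        err = (Xhat - X) * w                # variable-weighted error
        dZ2 = err * Xhat * (1 - Xhat) * (2.0 / N)
        dZ1 = (dZ2 @ W2.T) * H * (1 - H)    # back-propagate to layer 1
        W2 -= lr * (H.T @ dZ2); b2 -= lr * dZ2.sum(axis=0)
        W1 -= lr * (X.T @ dZ1); b1 -= lr * dZ1.sum(axis=0)
    return (W1, b1, W2, b2), sigmoid(X @ W1 + b1)  # params, features
```

The returned hidden features then serve as the input data of the next variable-weighted self-encoder in the stack, exactly as the layer-wise scheme above describes.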
With a training method similar to that of the first and second variable-weighted self-encoders, $h_i^{(2)}$ is taken as the input variable of the third variable-weighted self-encoder, and the third variable-weighted self-encoder is trained by constructing a third weighted objective function, so that the network parameters $\theta_3=\{W_3,b_3,W'_3,b'_3\}$ and the third hidden-layer feature data $h_i^{(3)}$ are obtained.

After the pre-training of the third variable-weighted self-encoder is finished, the final output layer is connected to its hidden layer, and the data of the output layer consist of the quality variable data $y_i$, $i=1,2,\dots,1000$. The parameters $\theta=\{W_1,b_1,W_2,b_2,W_3,b_3,W,b\}$ of the entire stacked self-encoder deep learning model are fine-tuned with the following objective function until a convergence condition is satisfied:

$$J(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2$$

where $\hat{y}_i$ is the estimate of the output quality variable obtained by the forward pass of the network.
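The supervised fine-tuning objective over the stacked network can be sketched as follows, assuming sigmoid hidden layers and a linear output layer (the output-layer form and function names are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_objective(X, y, params):
    """Forward pass of the stacked network (three pre-trained encoder
    layers plus an output layer) and the fine-tuning objective
    J = (1/N) * sum_i (y_i - yhat_i)^2.  Illustrative sketch only."""
    (W1, b1), (W2, b2), (W3, b3), (W, b) = params
    H1 = sigmoid(X @ W1 + b1)        # first hidden-layer features
    H2 = sigmoid(H1 @ W2 + b2)       # second hidden-layer features
    H3 = sigmoid(H2 @ W3 + b3)       # third hidden-layer features
    yhat = H3 @ W + b                # estimated output quality variable
    return np.mean((y - yhat) ** 2), yhat
```

During fine-tuning, the gradient of this objective with respect to all stacked parameters would be followed (e.g. by back-propagation) until the convergence condition is met.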
The trained deep learning model is then used to predict the output quality variable of new test samples. 1400 sets of process variable data samples were collected from the debutanizer process and fed into the deep learning model to predict the output quality variable of each set; the prediction results are shown in fig. 2. As can be seen from the figure, the variable-weighted stacked self-encoder deep learning model achieves a good prediction effect.
Table 1 gives the prediction root-mean-square error and correlation coefficient of three models: the conventional multi-layer Neural Network (NN), the Stacked AutoEncoder (SAE), and the Variable-Weighted Stacked AutoEncoder (VW-SAE) proposed in this embodiment.

[Table 1: prediction root-mean-square error and correlation coefficient of the NN, SAE, and VW-SAE models]

It can be seen from the table that the VW-SAE provided by the present invention achieves the best prediction accuracy, which verifies the effectiveness of the proposed method. Moreover, comparison of the convergence curves of the three neural networks shown in fig. 3 shows that VW-SAE also converges faster.
The embodiment also provides an oil refining process product quality prediction device, which comprises:
at least one processor, at least one memory, a communication interface, and a bus; wherein:
the processor, the memory and the communication interface complete mutual communication through the bus;
the communication interface is used for information transmission between the prediction equipment and the communication equipment of the display device;
the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the method for predicting the quality of the oil refining process product provided by the above method embodiments, for example, the method includes:
acquiring a process variable in an oil refining process, and taking the process variable as the input of a deep learning model based on a trained deep learning model to obtain a product quality predicted value;
the deep learning model comprises at least three variable weighting self-encoders, and when the deep learning model is trained, in every two adjacent variable weighting self-encoders, the hidden layer feature data of the variable weighting self-encoder arranged in front is used as the input variable of the variable weighting self-encoder arranged behind, and the variable weighting self-encoder arranged behind is trained.
The present embodiment also discloses a computer program product, which includes a computer program stored on a non-transitory computer readable storage medium, the computer program includes program instructions, and when the program instructions are executed by a computer, the computer can execute the method for predicting the quality of the product in the oil refining process provided by the above-mentioned embodiments of the method, for example, the method includes:
acquiring a process variable in an oil refining process, and taking the process variable as the input of a deep learning model based on a trained deep learning model to obtain a product quality predicted value;
when the deep learning model is trained, in every two adjacent variable weighting auto-encoders, the hidden layer feature data of the variable weighting auto-encoder arranged in front is used as the input variable of the variable weighting auto-encoder arranged behind, and the variable weighting auto-encoder arranged behind is trained.
The present embodiment also provides a non-transitory computer-readable storage medium, which stores computer instructions, where the computer instructions cause the computer to execute the method for predicting the quality of a product in a refining process provided by the above method embodiments, for example, the method includes:
acquiring a process variable in an oil refining process, and taking the process variable as the input of a deep learning model based on a trained deep learning model to obtain a product quality predicted value;
when the deep learning model is trained, in every two adjacent variable weighting auto-encoders, the hidden layer feature data of the variable weighting auto-encoder arranged in front is used as the input variable of the variable weighting auto-encoder arranged behind, and the variable weighting auto-encoder arranged behind is trained.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The above-described device embodiments are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement this without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, the above is only a preferred embodiment of the present invention and is not intended to limit its protection scope. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (8)

1. A method for predicting a product in a refining process, comprising:
acquiring a process variable in an oil refining production process, and taking the process variable as the input of a deep learning model based on a trained deep learning model to obtain a product quality predicted value;
when the deep learning model is trained, in every two adjacent variable weighting auto-encoders, the hidden layer feature data of the variable weighting auto-encoder arranged in front is used as the input variable of the variable weighting auto-encoder arranged behind, and the variable weighting auto-encoder arranged behind is trained;
the deep learning model comprises a first variable weighted self-encoder, a second variable weighted self-encoder and a third variable weighted self-encoder, and when the deep learning model is trained, the input variable of the first variable weighted self-encoder is a process variable in the oil refining production process; the input variable of the second variable weighted self-encoder is the trained hidden layer feature data of the first variable weighted self-encoder; and the input variable of the third variable weighted self-encoder is the trained hidden layer feature data of the second variable weighted self-encoder;
the method also comprises the following steps of training a deep learning model before acquiring the process variables in the oil refining production process:
acquiring a process variable and a quality variable in historical data of a debutanizer, taking the process variable as an input variable, calculating a first correlation coefficient of the input variable and the quality variable, establishing a first weighted objective function based on the first correlation coefficient, and training a first variable weighted self-encoder;
wherein the first weighted objective function is:

$$J_1(\theta_1)=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{x}_i-x_i\right)^{\mathrm T}\Delta_1\left(\hat{x}_i-x_i\right)=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{m}\left|r_j^{(1)}\right|\left(\hat{x}_{i,j}-x_{i,j}\right)^2$$

in the formula, $\Delta_1$ is an $m\times m$ diagonal matrix whose diagonal elements are $|r_j^{(1)}|$, $m$ is the number of neurons in the input layer of the entire network, and $N$ is the number of training samples; $\{W_1,b_1\}$ are the weight coefficients and bias terms from the input layer of the first variable weighted self-encoder to its hidden layer, and $\{W'_1,b'_1\}$ are the weight coefficients and bias terms from the hidden layer of the first variable weighted self-encoder to its output layer; $x_i$ is the input data of the i-th training sample, $\hat{x}_i$ is the reconstruction data of $x_i$ at the output layer, $x_{i,j}$ is the j-th variable value of the i-th training sample, $\hat{x}_{i,j}$ is the reconstruction data of $x_{i,j}$ at the output layer, and $r_j^{(1)}$ is the first correlation coefficient between the j-th input variable and the quality variable $y$;
taking the trained hidden layer feature data of the first variable weighted self-encoder as an input variable, calculating a second correlation coefficient of the input variable and a quality variable, establishing a second weighted objective function based on the second correlation coefficient, and training the second variable weighted self-encoder;
taking the hidden layer feature data of the trained second variable weighted self-encoder as an input variable, calculating a third correlation coefficient between the input variable and the quality variable, establishing a third weighted objective function based on the third correlation coefficient, and training the third variable weighted self-encoder;
and calculating the characteristic data of the hidden layer of the third variable weighted self-encoder, and connecting the hidden layer of the third variable weighted self-encoder with the final output layer of the deep learning model.
2. The method of claim 1, wherein obtaining quality variables and process variables in the refinery production process specifically comprises:
acquiring process variables $x_{i,j}$ and quality variables $y_i$ in a set time period, wherein $i=1,2,\dots,N$ and $j=1,2,\dots,m$; taking the process variables and the quality variables as sample data, and normalizing them as follows:

$$x_{i,j}\leftarrow\frac{x_{i,j}-x_{\min,j}}{x_{\max,j}-x_{\min,j}},\qquad y_i\leftarrow\frac{y_i-y_{\min}}{y_{\max}-y_{\min}}$$

in the formula, $x_{\min,j}$ and $x_{\max,j}$ respectively represent the minimum and maximum values of the j-th process variable, and $y_{\min}$ and $y_{\max}$ respectively represent the minimum and maximum values of the quality variable.
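The min–max normalization above can be sketched as follows (an illustrative helper with assumed names, not part of the claimed method itself):

```python
import numpy as np

def minmax_scale(X, y):
    """Min-max normalization of the process variables (per column) and
    of the quality variable, as in the formulas above."""
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    Xs = (X - X_min) / (X_max - X_min)      # each column scaled to [0, 1]
    ys = (y - y.min()) / (y.max() - y.min())
    return Xs, ys
```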
3. The method of claim 1, wherein training the first variable weighted auto-encoder specifically comprises: training the first variable weighted self-encoder under a first weighted target function through a back propagation algorithm to obtain parameters of the first variable weighted self-encoder and hidden layer feature data;
training the second variable weighted auto-encoder specifically comprises: training the second variable weighted self-encoder under a second weighted target function through a back propagation algorithm to obtain parameters of the second variable weighted self-encoder and hidden layer feature data;
training the third variable weighted auto-encoder specifically comprises: and training the third variable weighted self-encoder under a third weighted target function through a back propagation algorithm to obtain parameters of the third variable weighted self-encoder and hidden layer characteristic data.
4. The method of claim 1, wherein training the third variable weighted auto-encoder further comprises:
and constructing a convergence target function based on the quality variable output by the final output layer, and carrying out parameter fine adjustment on the deep learning model through the convergence target function until a preset convergence condition is met.
5. The method of claim 4, wherein the convergence objective function is:

$$J(\theta)=\frac{1}{N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^2$$

in the formula, N is the number of quality variable samples output by the final output layer, $y_i$ is the quality variable in the debutanizer historical data, and $\hat{y}_i$ is the quality variable output by the final output layer.
6. A product quality prediction system for a refining process, comprising:
the data acquisition module is used for acquiring process variable data in the oil refining production process needing prediction;
the prediction module is used for training a deep learning model and predicting the quality of the oil refining process product, the deep learning model comprises at least three variable weighting self-encoders, and when the deep learning model is trained, in every two adjacent variable weighting self-encoders, the hidden layer characteristic data of the variable weighting self-encoder arranged in front is used as the input variable of the variable weighting self-encoder arranged in the back, and the variable weighting self-encoder arranged in the back is trained;
the deep learning model comprises a first variable weighted self-encoder, a second variable weighted self-encoder and a third variable weighted self-encoder, and when the deep learning model is trained, the input variable of the first variable weighted self-encoder is a process variable in the oil refining production process; the input variable of the second variable weighted self-encoder is the trained hidden layer feature data of the first variable weighted self-encoder; and the input variable of the third variable weighted self-encoder is the trained hidden layer feature data of the second variable weighted self-encoder;
the method also comprises the following steps of training a deep learning model before acquiring the process variables in the oil refining production process:
acquiring a process variable and a quality variable in historical data of a debutanizer, taking the process variable as an input variable, calculating a first correlation coefficient of the input variable and the quality variable, establishing a first weighted objective function based on the first correlation coefficient, and training a first variable weighted self-encoder;
wherein the first weighted objective function is:

$$J_1(\theta_1)=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{x}_i-x_i\right)^{\mathrm T}\Delta_1\left(\hat{x}_i-x_i\right)=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{m}\left|r_j^{(1)}\right|\left(\hat{x}_{i,j}-x_{i,j}\right)^2$$

in the formula, $\Delta_1$ is an $m\times m$ diagonal matrix whose diagonal elements are $|r_j^{(1)}|$, $m$ is the number of neurons in the input layer of the entire network, and $N$ is the number of training samples; $\{W_1,b_1\}$ are the weight coefficients and bias terms from the input layer of the first variable weighted self-encoder to its hidden layer, and $\{W'_1,b'_1\}$ are the weight coefficients and bias terms from the hidden layer of the first variable weighted self-encoder to its output layer; $x_i$ is the input data of the i-th training sample, $\hat{x}_i$ is the reconstruction data of $x_i$ at the output layer, $x_{i,j}$ is the j-th variable value of the i-th training sample, $\hat{x}_{i,j}$ is the reconstruction data of $x_{i,j}$ at the output layer, and $r_j^{(1)}$ is the first correlation coefficient between the j-th input variable and the quality variable $y$;
taking the trained hidden layer feature data of the first variable weighted self-encoder as an input variable, calculating a second correlation coefficient of the input variable and a quality variable, establishing a second weighted objective function based on the second correlation coefficient, and training the second variable weighted self-encoder;
taking the hidden layer feature data of the trained second variable weighted self-encoder as an input variable, calculating a third correlation coefficient between the input variable and the quality variable, establishing a third weighted objective function based on the third correlation coefficient, and training the third variable weighted self-encoder;
and calculating the characteristic data of the hidden layer of the third variable weighted self-encoder, and connecting the hidden layer of the third variable weighted self-encoder with the final output layer of the deep learning model.
7. An oil refining production process product quality prediction device, comprising:
at least one processor, at least one memory, a communication interface, and a bus; wherein:
the processor, the memory and the communication interface complete mutual communication through the bus;
the communication interface is used for information transmission between the prediction equipment and communication equipment of the display device;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1 to 5.
8. A non-transitory computer-readable storage medium storing a computer program that causes a computer to perform the method according to any one of claims 1 to 5.
CN201810136589.6A 2018-02-09 2018-02-09 Oil refining process product prediction method and system based on variable weighted deep learning Active CN108416439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810136589.6A CN108416439B (en) 2018-02-09 2018-02-09 Oil refining process product prediction method and system based on variable weighted deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810136589.6A CN108416439B (en) 2018-02-09 2018-02-09 Oil refining process product prediction method and system based on variable weighted deep learning

Publications (2)

Publication Number Publication Date
CN108416439A CN108416439A (en) 2018-08-17
CN108416439B true CN108416439B (en) 2020-01-03

Family

ID=63128190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810136589.6A Active CN108416439B (en) 2018-02-09 2018-02-09 Oil refining process product prediction method and system based on variable weighted deep learning

Country Status (1)

Country Link
CN (1) CN108416439B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110096810B (en) * 2019-05-05 2020-03-17 中南大学 Industrial process soft measurement method based on layer-by-layer data expansion deep learning
CN111241688B (en) * 2020-01-15 2023-08-25 北京百度网讯科技有限公司 Method and device for monitoring composite production process
CN111914477B (en) * 2020-06-23 2022-04-19 宁波大学 Real-time monitoring method for butane concentration of product at bottom of debutanizer based on SAE
CN112149355B (en) * 2020-09-27 2023-08-22 浙江科技学院 Soft measurement method based on semi-supervised dynamic feedback stack noise reduction self-encoder model
CN112989635B (en) * 2021-04-22 2022-05-06 昆明理工大学 Integrated learning soft measurement modeling method based on self-encoder diversity generation mechanism

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101201331B (en) * 2007-11-28 2012-02-08 华东理工大学 Soft measuring method for on-line determining petroleum naphtha quality index on top of primary tower
CN103544392B (en) * 2013-10-23 2016-08-24 电子科技大学 Medical science Gas Distinguishing Method based on degree of depth study
CN104463327A (en) * 2014-10-27 2015-03-25 中国石油大学(北京) Method for predicting catalytic cracking coke yield

Also Published As

Publication number Publication date
CN108416439A (en) 2018-08-17

Similar Documents

Publication Publication Date Title
CN108416439B (en) Oil refining process product prediction method and system based on variable weighted deep learning
Costa et al. Application of artificial neural networks in a history matching process
CN109284866B (en) Commodity order prediction method and device, storage medium and terminal
Wang et al. A two‐layer ensemble learning framework for data‐driven soft sensor of the diesel attributes in an industrial hydrocracking process
CN111339712A (en) Method for predicting residual life of proton exchange membrane fuel cell
CN111127246A (en) Intelligent prediction method for transmission line engineering cost
CN108334943A (en) The semi-supervised soft-measuring modeling method of industrial process based on Active Learning neural network model
CN115438732A (en) Cross-domain recommendation method for cold start user based on classification preference migration
CN111143981B (en) Virtual test model verification system and method
Burrows et al. Simulation data mining for supporting bridge design
CN110633859B (en) Hydrologic sequence prediction method integrated by two-stage decomposition
CN108304674A (en) A kind of railway prediction of soft roadbed settlement method based on BP neural network
CN104503420A (en) Non-linear process industry fault prediction method based on novel FDE-ELM and EFSM
CN113947262A (en) Knowledge tracking method based on different composition learning fusion learning participation state
CN110782083B (en) Aero-engine standby demand prediction method based on deep Croston method
CN110378035A (en) It is a kind of that soft-measuring modeling method is hydrocracked based on deep learning
CN116522912B (en) Training method, device, medium and equipment for package design language model
CN114004346A (en) Soft measurement modeling method based on gating stacking isomorphic self-encoder and storage medium
CN114880767B (en) Aero-engine residual service life prediction method based on attention mechanism Dense-GRU network
CN116029434A (en) Method and system for predicting hydrogen content in raw oil and heavy fraction oil
CN114239397A (en) Soft measurement modeling method based on dynamic feature extraction and local weighted deep learning
CN113988311A (en) Quality variable prediction method, quality variable prediction device, terminal and storage medium
CN114372618A (en) Student score prediction method and system, computer equipment and storage medium
Pei Construction of a legal system of corporate social responsibility based on big data analysis technology
CN101685506A (en) Expert diagnosis decision method of inorganic waste water processing scheme

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant