CN114492988A - Method and device for predicting product yield in catalytic cracking process - Google Patents

Method and device for predicting product yield in catalytic cracking process

Info

Publication number
CN114492988A
Authority
CN
China
Prior art keywords: lag, data set, data, time, production data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210082237.3A
Other languages
Chinese (zh)
Inventor
钟伟民
隆建
杜文莉
钱锋
杨明磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
East China University of Science and Technology
Original Assignee
East China University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by East China University of Science and Technology filed Critical East China University of Science and Technology
Priority to CN202210082237.3A
Publication of CN114492988A
Legal status: Pending


Classifications

    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • C10G11/00 Catalytic cracking, in the absence of hydrogen, of hydrocarbon oils
    • G06F16/24568 Data stream processing; Continuous queries
    • G06F16/2474 Sequence data queries, e.g. querying versioned data
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • C10G2400/02 Gasoline
    • C10G2400/04 Diesel oil
    • C10G2400/26 Fuel gas

Abstract

The invention provides a method and a device for predicting the product yield of a catalytic cracking process, and a computer-readable storage medium. The prediction method comprises the following steps: acquiring production data of a catalytic cracking unit over a continuous period of time to construct a production data set; generating a lag data set from the production data set according to a lag time window determined via time series analysis, wherein the lag time window indicates the lag time of at least one item of production data in the production data set; and performing feature fusion on the lag data set to predict the product yield of the catalytic cracking unit. By performing these steps, the prediction method uses a lag time window determined through time series analysis to make the lag adjustment, thereby making full use of the correlations in the time series of the production data to improve the accuracy of the product yield prediction.

Description

Method and device for predicting product yield in catalytic cracking process
Technical Field
The invention belongs to the technical field of petrochemical industry and information science, and particularly relates to a method for predicting the product yield of a catalytic cracking process, a device for predicting the product yield of the catalytic cracking process and a computer-readable storage medium.
Background
The catalytic cracking process is typically complex and large in scale, with a long process flow; it is a complex industrial system exhibiting strong nonlinearity, time-varying behavior, variable coupling, pronounced time lags and multiple time scales. Modeling the catalytic cracking process is therefore often very difficult in the chemical industry.
In the prior art, those skilled in the art generally describe the reaction process in detail using a "lumped" method. However, because of the catalyst regeneration cycle in the catalytic cracking unit, the energy requirements of different sections differ (for example, strongly endothermic reactions occur in the reactor while exothermic reactions occur in the regenerator). In addition, as enterprises gradually improve the efficiency of energy integration and optimization, process operation becomes more flexible, so the associations and couplings between variables are quite complicated. As a result, no one has yet been able to fully model the complex feedstocks and conversion processes from a molecular perspective.
Compared with the "lumped" method, a data-driven model does not need to explain the internal mechanism of the system; instead, after a model type is selected, it focuses on identifying the system parameters from historical data. This makes the model type easier to determine and select, and the model more convenient to use. However, conventional data-driven modeling of catalytic cracking generally performs lag adjustment on production process data based on manual experience, which seriously ignores the correlations in the data time series, often leads to poor model generalization, and prevents accurate prediction of the product yield.
In order to overcome the above defects of the prior art, a prediction technology for the catalytic cracking process is urgently needed in the art, one that improves the prediction accuracy of the product yield by fully utilizing the correlations in the time series of the production data.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In order to overcome the above-mentioned drawbacks of the prior art, the present invention provides a method for predicting the product yield of a catalytic cracking process, a device for predicting the product yield of a catalytic cracking process, and a computer-readable storage medium, which can perform a lag adjustment using a lag time window determined by time series analysis, thereby making full use of the correlation in the time series of production data to improve the accuracy of the prediction of the product yield.
Specifically, the first aspect of the present invention provides the above method for predicting the product yield of a catalytic cracking process, comprising the steps of: acquiring production data of a catalytic cracking unit over a continuous period of time to construct a production data set; generating a lag data set from the production data set according to a lag time window determined via time series analysis, wherein the lag time window indicates the lag time of at least one item of production data in the production data set; and performing feature fusion on the lag data set to predict the product yield of the catalytic cracking unit.
Further, in some embodiments of the invention, prior to constructing the production data set, the prediction method further comprises the step of preprocessing the acquired production data, wherein the preprocessing comprises removing abnormal points, standardizing the data and/or filling vacancy values.
Further, in some embodiments of the present invention, the step of obtaining production data of the catalytic cracking unit at a continuous time to construct a production data set comprises: continuously acquiring multiple groups of production data according to a preset time interval, wherein each group of production data comprises data of multiple variable dimensions at the same moment; and constructing the production data set according to the time dimension of the acquisition time and the variable dimension.
Further, in some embodiments of the invention, the step of generating a lag data set of the production data set from a lag time window determined via time series analysis comprises: adjusting a time dimension of at least one of the production data in the production data set according to a lag time indicated by the lag time window to generate the lag data set.
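As a minimal sketch of this lag adjustment (function and variable names are illustrative, not from the patent), each production variable's column can be shifted backward in time by its own lag before the rows are realigned:

```python
import numpy as np

def apply_lag(data, lags):
    """Shift each variable (column) of `data` backward in time by its lag.

    data : (T, m) array, rows ordered by sampling time.
    lags : length-m sequence; lags[j] is the number of time steps by which
           variable j lags, so row t of the output pairs variable j's value
           at time t - lags[j] with the other variables at time t.
    Rows that would need samples from before the start of the record are
    dropped, so the result has T - max(lags) rows.
    """
    data = np.asarray(data, dtype=float)
    T, m = data.shape
    max_lag = max(lags)
    out = np.empty((T - max_lag, m))
    for j, lag in enumerate(lags):
        # variable j at time t - lag aligns with row t of the lag data set
        out[:, j] = data[max_lag - lag : T - lag, j]
    return out
```

The design choice here is to trim the earliest `max(lags)` rows rather than pad, so every row of the lag data set contains only real measurements.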
Further, in some embodiments of the invention, the catalytic cracking process involves a plurality of cracked products, and the production data includes a variable dimension for each of the cracked products. One lag time window is determined for each of the cracked products. The step of generating a lag data set of the production data set according to a lag time window determined via time series analysis further comprises: adjusting the time dimension of the production data for each corresponding cracked product in the production data set, according to the lag time indicated by each of the lag time windows, to generate the lag data set.
Further, in some embodiments of the present invention, the plurality of cracked products includes, but is not limited to, at least one of diesel, slurry oil, gasoline, liquefied gas, dry gas, and sour gas.
Further, in some embodiments of the present invention, the step of feature fusing the lag data set to predict the product yield of the catalytic cracking unit comprises: inputting the lag data set into a pre-trained convolutional neural network, performing feature fusion on the lag data set through the convolutional neural network, and mapping to obtain the product yield of each cracked product.
Further, in some embodiments of the present invention, before generating a lag dataset of the production dataset according to a lag time window determined via time series analysis, the prediction method further comprises the steps of: collecting production data and actual product yield data of the catalytic cracking unit in a plurality of continuous time periods to construct a plurality of training sample sets; and constructing a self-adaptive weight long-short term memory network to be trained, and performing time sequence analysis on each training sample set to determine the lag time window.
Further, in some embodiments of the present invention, the step of performing a time series analysis on each of the training sample sets to determine the lag time window comprises: determining a plurality of candidate time windows and generating a plurality of candidate data sets of the training sample set according to the candidate time windows, respectively, wherein each candidate time window indicates one candidate lag time; performing a time series analysis on each candidate data set via the adaptive weighted long short-term memory network to determine the root mean square error of each candidate data set; and determining the lag time window from the candidate time window with the smallest root mean square error.
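The candidate-window search just described can be sketched as a grid search over lag times; `evaluate` here is a placeholder for whatever model scores a candidate on validation data (in the patent, the adaptive weighted LSTM), and all names are illustrative:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between actual and calculated yields."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def select_lag_window(candidates, evaluate):
    """Pick the candidate lag window whose model gives the smallest RMSE.

    candidates : iterable of candidate lag times (e.g. range(0, 11)).
    evaluate   : callable mapping a candidate lag to (y_true, y_pred);
                 a user-supplied stand-in for the time series analysis.
    Returns the best lag and the full {lag: rmse} table.
    """
    errors = {lag: rmse(*evaluate(lag)) for lag in candidates}
    best = min(errors, key=errors.get)
    return best, errors
```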
Further, in some embodiments of the present invention, the adaptive weighted long short-term memory network includes a forward-extrapolating LSTM unit and a backward-extrapolating LSTM unit. The step of performing a time series analysis on each of the candidate data sets via the adaptive weighted long short-term memory network to determine the root mean square error of each candidate data set comprises: performing a forward time series analysis on the candidate data set via the forward-extrapolating LSTM unit to determine a forward analysis result; performing a backward time series analysis on the candidate data set via the backward-extrapolating LSTM unit to determine a backward analysis result; carrying out a weighted summation of the forward analysis result and the backward analysis result according to a preset weighting hyperparameter to determine calculated product yield data for the candidate data set; and determining the root mean square error of the candidate data set from the calculated product yield data and the actual product yield data of the candidate data set.
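The weighted fusion of the two analysis results can be sketched as follows; the forward and backward predictions are taken as given (standing in for the outputs of the two LSTM units), and `alpha` is the preset weighting hyperparameter. Names are illustrative:

```python
import numpy as np

def fuse_bidirectional(forward_pred, backward_pred, alpha):
    """Weighted sum of the forward and backward analysis results:
        y = alpha * forward + (1 - alpha) * backward,  alpha in [0, 1].
    """
    f = np.asarray(forward_pred, dtype=float)
    b = np.asarray(backward_pred, dtype=float)
    return alpha * f + (1.0 - alpha) * b

def rmse(y_true, y_pred):
    """RMSE between actual yield data and the fused calculated yields."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```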
Further, in some embodiments of the invention, the feature fusion is performed based on a pre-trained convolutional neural network. The step of training the convolutional neural network comprises: constructing a convolutional neural network to be trained; determining a lag sample set of each of the training sample sets according to the lag time window; and training the convolutional neural network by taking variable dimensional data of the production data in each lag sample set as input and taking corresponding yield dimensional data as an output standard value so as to enable the convolutional neural network to have the function of predicting the product yield according to the input production data.
In addition, the second aspect of the present invention provides a product yield prediction device for the above catalytic cracking process, which comprises a memory and a processor. The processor is connected to the memory and configured to implement the method for predicting the product yield of the above catalytic cracking process provided by the first aspect of the present invention.
Furthermore, a third aspect of the present invention provides the above computer-readable storage medium, on which computer instructions are stored. The computer instructions, when executed by a processor, implement the method of predicting the product yield of the above catalytic cracking process provided by the first aspect of the invention.
Drawings
The above features and advantages of the present disclosure will be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components having similar relative characteristics or features may have the same or similar reference numerals.
Fig. 1 illustrates a flow diagram of an offline training phase of a product yield prediction method provided in accordance with some embodiments of the present invention.
FIG. 2 illustrates a flow diagram for determining a lag time window provided in accordance with some embodiments of the present invention.
Fig. 3A-3C illustrate schematic flow diagrams for training convolutional neural networks provided in accordance with some embodiments of the present invention.
Fig. 4 illustrates a schematic structural diagram of an adaptive weighted long short term memory network provided according to some embodiments of the present invention.
FIG. 5 illustrates a schematic flow diagram for training a convolutional neural network provided in accordance with some embodiments of the present invention.
Figure 6 illustrates a comparative schematic of predicted results for various product yields provided according to some embodiments of the invention.
Figure 7 illustrates a graph of the variation of root mean square error of gasoline product yields provided in accordance with some embodiments of the invention.
Fig. 8 illustrates a flow diagram of an on-line prediction stage of a product yield prediction method provided in accordance with some embodiments of the invention.
Detailed Description
The following description is given by way of example of the present invention, and other advantages and features of the invention will become apparent to those skilled in the art from the detailed description. While the invention will be described in connection with preferred embodiments, there is no intent to limit its features to those embodiments. On the contrary, the invention is described in connection with the embodiments so as to cover the alternatives and modifications that may be derived from the claims of the present invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; the invention may, however, be practiced without these details. Moreover, some specific details are omitted from the description in order to avoid confusing the reader or obscuring the focus of the present invention.
As mentioned above, no one has yet been able to fully model complex feedstocks and conversion processes from a molecular perspective using a "lumped" approach. In addition, conventional data-driven modeling of catalytic cracking generally performs lag adjustment on production process data based on manual experience, which seriously ignores the correlations in the data time series, often leads to poor model generalization, and prevents accurate prediction of the product yield.
In order to overcome the above-mentioned drawbacks of the prior art, the present invention provides a method for predicting the product yield of a catalytic cracking process, an apparatus for predicting the product yield of a catalytic cracking process, and a computer-readable storage medium, which make full use of the correlation in the time series of the production data by performing a lag adjustment using a lag time window determined through time series analysis to improve the accuracy of the prediction of the product yield.
In some non-limiting embodiments, the method for predicting the product yield of the catalytic cracking process provided by the first aspect of the present invention may be implemented by the device for predicting the product yield of the catalytic cracking process provided by the second aspect of the present invention. Specifically, the prediction device may be configured with a memory and a processor. The memory includes, but is not limited to, the above-described computer-readable storage medium provided by the third aspect of the present invention having computer instructions stored thereon. The processor is coupled to the memory and configured to execute the computer instructions stored on the memory to implement the method for predicting the product yield of the catalytic cracking process as provided by the first aspect of the invention.
Further, the method for predicting the product yield of the catalytic cracking process provided by the first aspect of the present invention can be implemented in two stages: offline training and online prediction. Correspondingly, the product yield prediction device of the catalytic cracking process provided by the second aspect of the present invention can also be divided into an offline training device and an online prediction device, which respectively execute the steps of the offline training stage and the online prediction stage of the prediction method. It is understood that the offline training device and the online prediction device may be configured on the same electronic device and share the same memory and processor, or may be configured on different electronic devices with independent memories and processors.
The working principle of the offline training device will first be described below in connection with some steps of the offline training phase. The offline training device includes, but is not limited to, a Personal Computer (PC) of a trainer, a workstation (Work Station), and/or a cloud server based on the internet of things of the chemical industry. It will be appreciated by those skilled in the art that these off-line training steps are merely provided as non-limiting embodiments of the present invention, which are intended to clearly illustrate the broad concepts of the present invention and to provide specific embodiments for the convenience of the public, and are not intended to limit the overall function or the overall operation of the off-line training apparatus. Likewise, the offline training device is also only a non-limiting embodiment of the present invention, and does not limit the subject of performing the offline training steps.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating an offline training phase of a product yield prediction method according to some embodiments of the present invention.
As shown in fig. 1, in the off-line training phase of the product yield prediction method of the catalytic cracking process, the off-line training device may first collect production data and actual product yield data of the catalytic cracking device in a plurality of consecutive time periods to construct a plurality of training sample sets.
Taking a catalytic cracking unit of a certain refinery as an example, the offline training device can first continuously collect 40 raw variables of the catalytic cracking unit, such as feed flow and properties, regenerated catalyst properties, reactor operating conditions, stabilizer operating conditions, desorber operating conditions and product mass flows, at one-minute intervals between October 1 and October 15, 2021, i.e., for a total of 15 days. The 40 raw variables relate to the various cracked products of the catalytic cracking unit, including but not limited to at least one of diesel, slurry oil, gasoline, liquefied gas, dry gas and sour gas.
In some embodiments, the 40 original variables are as follows: 1.0MPa steam to two reverse superheaters, overflow hopper fluidization steam flow, cold wax oil inlet device flow, hot wax oil inlet device flow, residual oil inlet device flow, dirty oil inlet device flow, coke burning tank main air inlet flow, pressurized air flow to two dense stages, raw material atomization steam flow, pre-lifting steam flow, reaction fresh feed flow, raw gasoline to T301 flow, refining oil return to reaction flow, T201 top circulation reflux flow, T201 first middle section reflux flow, slurry upper return tower flow, slurry lower return tower flow, T202 bottom stripping steam flow, stable tower top cold reflux flow, E3204 reboiler steam inlet flow, absorption oil flow, atomization steam pressure, regenerator pressure, 1.0MPa superheated steam pressure, two reverse upper temperature, regeneration dilute phase upper temperature, T201 first middle section reflux oil temperature, T201 light diesel oil upper extraction port extraction temperature, raw material preheating temperature, T302 bottom reboiler gas phase return tower temperature, the gas phase return tower temperature of the reboiler at the bottom of T304, the dense phase temperature of the coke burning tank and the pressure drop of the spent slide valve.
Further, in some embodiments, after obtaining the multi-dimensional raw production data of the catalytic cracking unit, the offline training device may preferably perform preprocessing operations such as outlier elimination, data normalization and gap filling, so as to eliminate erroneous data, noise and outliers and perform any necessary conversions.
Specifically, if a sample has missing data, the offline training device can remove the sample from the data set, eliminating dirty data caused by unit shutdowns or network failures during the catalytic cracking process and improving the accuracy of the training result. The offline training device can also apply the 3-sigma rule to remove outliers and data with serious measurement errors from the data set, further improving the accuracy of the training result. In addition, when sample data exceed the upper or lower limit of a variable, the offline training device can automatically replace the sample data with the corresponding upper or lower limit value, reducing the influence of abnormal values on the training result.
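A minimal sketch of these preprocessing steps (missing-value removal, the 3-sigma rule, and clipping to variable bounds), with NumPy and illustrative names; the real device would apply these per production variable:

```python
import numpy as np

def preprocess(data, lower=None, upper=None):
    """Preprocessing sketch: drop samples with missing values, reject
    3-sigma outliers, and clip values that exceed known variable bounds.

    data : (T, m) array that may contain NaN for missing measurements.
    lower/upper : optional per-variable bounds for clipping.
    """
    data = np.asarray(data, dtype=float)
    # 1. remove samples (rows) with any missing value ("dirty data")
    data = data[~np.isnan(data).any(axis=1)]
    # 2. 3-sigma rule: drop rows where any variable is > 3 std from its mean
    mu, sigma = data.mean(axis=0), data.std(axis=0)
    sigma[sigma == 0] = 1.0  # avoid division issues on constant variables
    data = data[(np.abs(data - mu) <= 3 * sigma).all(axis=1)]
    # 3. clip to the variables' physical upper/lower limits, if given
    if lower is not None or upper is not None:
        data = np.clip(data, lower, upper)
    return data
```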
In addition, the offline training device can calculate, according to a yield formula (rendered as an image in the original publication), the product yields of the various catalytic cracking products such as diesel oil, slurry oil, gasoline, liquefied gas, dry gas and sour gas, to serve as the standard output values for training the model.
Further, after the product yields are calculated, the offline training device can also remove variable data that are irrelevant to predicting the product yield, such as the relative density of the gas product and the product flow, from the data set, thereby improving the training efficiency of the model.
Further, since the variables in the catalytic cracking process have different physical meanings and differ significantly in magnitude, the offline training device can also normalize each item of raw production data in order to form a data set with sample labels. According to the formula

    x' = (x - x_min) / (x_max - x_min)

each item of raw production data is normalized to the interval [0, 1]. Here, the normalization parameters of the input variables (comprising x_min and x_max) are denoted np_i, and the normalization parameters of the output variables are denoted np_o. In the model test phase, the input variables in a test sample are normalized according to the parameters np_i, and the output variables given by the model are inverse-normalized according to the parameters np_o, i.e., X = x'(x_max - x_min) + x_min, to restore the actual values of the raw production data. In this way, data with different physical meanings and obvious magnitude differences can be fully mined and used for modeling, significantly increasing the quantity, utilization rate and richness of the process data and improving the accuracy of the training result.
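The normalization and inverse normalization just described can be sketched as follows; np_i and np_o each reduce to a (x_min, x_max) pair per variable, and the function names are illustrative:

```python
import numpy as np

def fit_minmax(data):
    """Column-wise min-max parameters (x_min, x_max), as in np_i / np_o."""
    data = np.asarray(data, dtype=float)
    return data.min(axis=0), data.max(axis=0)

def normalize(data, x_min, x_max):
    """Map each variable into [0, 1]: x' = (x - x_min) / (x_max - x_min)."""
    return (np.asarray(data, dtype=float) - x_min) / (x_max - x_min)

def denormalize(scaled, x_min, x_max):
    """Inverse transform: X = x' * (x_max - x_min) + x_min."""
    return np.asarray(scaled, dtype=float) * (x_max - x_min) + x_min
```

Fitting the parameters on the training portion only (and reusing them for test samples) matches the test-phase procedure described above.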
Further, in order to avoid accidental errors, the offline training device may select, according to the fluctuation of the reaction temperature, five sub-data sets with stable reaction temperature changes from the data set, and perform time series analysis on each sub-data set to obtain its time lag result separately. The offline training device may then average the time series analysis results of the sub-data sets to obtain the time lag result of the catalytic cracking unit.
In summary, after the preprocessing operations described above (outlier rejection, data normalization, gap filling and removal of irrelevant variable data), the offline training device obtains 6424 samples. The offline training device can then fill the production data into a t × (m + n) matrix according to the time dimension t of the acquisition times and the variable dimension m + n of the production data and actual product yield data, to construct a sample data set:

    [ x_{1,1} … x_{1,m}  y_{1,1} … y_{1,n} ]
    [ x_{2,1} … x_{2,m}  y_{2,1} … y_{2,n} ]
    [   ⋮        ⋮         ⋮        ⋮     ]
    [ x_{t,1} … x_{t,m}  y_{t,1} … y_{t,n} ]

where x denotes an input variable of the production data, y denotes a standard output variable of the actual product yield, t is the time-dimension index, m is the number of input variables and n is the number of output variables.
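Assembling the t × (m + n) sample matrix can be sketched as a horizontal stack of the time-ordered input block and yield block (names are illustrative):

```python
import numpy as np

def build_sample_matrix(inputs, outputs):
    """Stack production-data inputs (t x m) and actual-yield outputs (t x n)
    into a t x (m + n) sample matrix, rows ordered by acquisition time."""
    X = np.atleast_2d(np.asarray(inputs, dtype=float))
    Y = np.atleast_2d(np.asarray(outputs, dtype=float))
    assert X.shape[0] == Y.shape[0], "inputs/outputs must cover the same time steps"
    return np.hstack([X, Y])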
In some embodiments, the sample data set may include input data of 33 production variable dimensions and standard output data of 6 product yield dimensions, i.e., m + n = 39. In some embodiments, the offline training device may take, in chronological order, the first 60% of the samples as training samples, the middle 10% as validation samples (which do not directly participate in training), and the last 30% as test samples.
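The chronological 60/10/30 split, which deliberately avoids shuffling so that the time-series order is preserved, might look like (function name is illustrative):

```python
import numpy as np

def chronological_split(samples, train=0.6, val=0.1):
    """Split a time-ordered sample matrix into training, validation and
    test sets without shuffling (60/10/30 by default)."""
    samples = np.asarray(samples)
    t = samples.shape[0]
    i = int(t * train)
    j = int(t * (train + val))
    return samples[:i], samples[i:j], samples[j:]
```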
Referring to fig. 1 and 2 in combination, fig. 2 is a flow chart illustrating a process for determining a lag time window according to some embodiments of the present invention.
As shown in fig. 1 and fig. 2, after constructing the plurality of training sample sets, the offline training device may construct an adaptive weighted long short-term memory network to be trained and perform time series analysis on each training sample set to determine the lag time window ΔT.
Specifically, a long short-term memory (LSTM) network is an improvement on the recurrent neural network (RNN) that can learn long-term dependencies and thereby avoids the long-term dependency problem. In contrast to a traditional RNN, long-term memory of information is effectively an inherent ability of the LSTM. Every recurrent neural network is built by repeating modules of the neural network in a chain; in a traditional RNN the repeating module has a very simple structure. The LSTM has the same chain structure, but its repeating module is different: instead of a single neural network layer there are four, interacting in a specific way. The LSTM adds a memory cell state to each memory neuron in its network to reduce the rate of information loss, while its three gate structures (the forget gate, input gate and output gate) selectively memorize the correction parameters fed back by the error function during gradient descent. By introducing controllable self-circulation, the LSTM avoids the vanishing- and exploding-gradient problems that an RNN readily encounters when learning long time series, and it is notably effective for tasks with time-series delays and long intervals.
The forget gate controls how much of the input information from the previous unit is forgotten, screening and retaining the valuable parts of the historical information. The specific calculation formula is:
f_t = σ(W_hf h_{t-1} + W_xf X_t + b_f)
where f_t denotes the forget threshold; W_hf and h_{t-1} denote the weight matrix and the output of the hidden layer at the previous time, respectively; W_xf and X_t denote the weight matrix and the input of the hidden layer at the current time, respectively; b_f denotes the bias vector; and σ denotes the standard sigmoid activation function.
The input gate controls the amount of information flowing into the unit, assigning higher weights to more valuable information so as to update the unit state. The specific calculation formulas are:
i_t = σ(W_hi h_{t-1} + W_xi X_t + b_i)

C̃_t = tanh(W_hc h_{t-1} + W_xc X_t + b_c)

C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t
where i_t denotes the input threshold; C_t denotes the cell-state vector of the hidden layer at time t; C̃_t denotes the candidate input to the cell state at the current time; W_hi and W_hc denote weight matrices of the previous time; W_xi and W_xc denote weight matrices of the current time; b_i and b_c denote bias vectors; and tanh(·) denotes the hyperbolic tangent activation function.
The final output information is determined by the output gate. The specific calculation formula is as follows:
o_t = σ(W_ho h_{t-1} + W_xo X_t + b_o)

h_t = o_t ⊙ tanh(C_t)
where o_t denotes the output threshold; W_ho and W_xo denote the weight matrices of the previous and current time, respectively; and b_o denotes the bias vector.
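The gate equations above can be sketched as a single LSTM time step. This is an illustrative numpy implementation (shapes, the toy all-zero parameters, and the dictionary layout are assumptions for the example, not the patent's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step implementing the gate equations above.
    W['hf'], W['xf'], ... are weight matrices; b['f'], ... are bias vectors."""
    f_t = sigmoid(W['hf'] @ h_prev + W['xf'] @ x_t + b['f'])       # forget gate
    i_t = sigmoid(W['hi'] @ h_prev + W['xi'] @ x_t + b['i'])       # input gate
    c_tilde = np.tanh(W['hc'] @ h_prev + W['xc'] @ x_t + b['c'])   # candidate state
    c_t = f_t * c_prev + i_t * c_tilde                              # new cell state
    o_t = sigmoid(W['ho'] @ h_prev + W['xo'] @ x_t + b['o'])       # output gate
    h_t = o_t * np.tanh(c_t)                                        # new hidden state
    return h_t, c_t

# toy configuration: 4 hidden units, 3 inputs, all-zero parameters,
# so every gate evaluates to sigmoid(0) = 0.5 and c_tilde to tanh(0) = 0
W, b = {}, {}
for g in ('f', 'i', 'c', 'o'):
    W['h' + g] = np.zeros((4, 4))
    W['x' + g] = np.zeros((4, 3))
    b[g] = np.zeros(4)

h, c = lstm_step(np.ones(3), np.zeros(4), np.zeros(4), W, b)
```

With zero parameters the candidate state is zero, so the new cell state reduces to half of the previous cell state, which makes the gating behavior easy to check by hand.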
Because the catalytic cracking process needs reaction time, the effect of a current manipulated variable takes some time to propagate through the production process; the catalytic cracking process therefore usually exhibits strong time lag, and the process data are naturally time-ordered. Accordingly, a long short-term memory network is adopted for time series analysis of the catalytic cracking production data, which can fully mine and exploit the correlations within the time series and thereby improve the prediction accuracy of the product yield.
In determining the lag time window, the offline training device may first determine a plurality of candidate time windows ΔT_i, where each candidate time window indicates one candidate lag time. The offline training device may then, for each candidate time window ΔT_i, adjust the time dimension of the corresponding production data in the training sample set to generate a plurality of candidate data sets of the training sample set. Next, the offline training device may repeatedly train the model with the error back-propagation algorithm until the expected model index is reached, perform time series analysis on each candidate data set via the adaptive-weight long short-term memory network, and determine the root mean square error of each candidate data set by input-output regression. Further, to better extract the correlations of the time series, the same network structure parameters are used for each time-series sub-data set at the different catalytic cracking reaction temperatures under investigation; the adaptive-weight long short-term memory network model is run multiple times over the several data sets and the time series analysis results are averaged to determine the root mean square error. Finally, the offline training device may determine the lag time window ΔT of each production data item of the catalytic cracking unit from the candidate time window with the smallest root mean square error.
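The window-selection logic above can be sketched as a small search: evaluate each candidate lag and keep the one with the smallest root mean square error. The `evaluate` callable here is a stand-in for training the adaptive-weight LSTM on the shifted data set; the lambda pretending the true lag is 20 minutes is purely illustrative:

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def select_lag_window(candidates, evaluate):
    """Pick the candidate lag window with the smallest RMSE. `evaluate(dt)`
    stands in for training the adaptive-weight LSTM on the data set shifted
    by `dt` and returning its (averaged) root mean square error."""
    errors = {dt: evaluate(dt) for dt in candidates}
    best = min(errors, key=errors.get)
    return best, errors

# stand-in evaluator: pretend the true process lag is 20 minutes
best, errors = select_lag_window([10, 20, 30], lambda dt: abs(dt - 20) + 0.5)
```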
Further, for an application scenario involving production data of multiple cracked products such as diesel, slurry oil, gasoline, liquefied gas, dry gas and acid gas, and considering that the time-lag characteristics of the various cracked products differ, the offline training device can match a different lag time window to each cracked product, i.e. ΔT_i = {Δt_i1, Δt_i2, Δt_i3, Δt_i4, Δt_i5, Δt_i6}. Correspondingly, the offline training device may, according to each lag time element Δt_ij in the candidate time window ΔT_i, separately adjust the time dimension of the production data of each corresponding cracked product j in the training sample set to generate a plurality of candidate data sets for each training sample set:
(The original formula image here shows each candidate data set, i.e. the sample matrix with the column of each cracked product j shifted along the time dimension by Δt_ij.)
The offline training device may then train the model multiple times with the error back-propagation algorithm, as described above, until the expected model index is reached, perform time series analysis on each candidate data set via the adaptive-weight long short-term memory network, and determine the root mean square error of each candidate data set. Finally, the offline training device may determine the lag time window ΔT of each production data item of the catalytic cracking unit from the candidate time window with the smallest root mean square error.
Referring to fig. 3A-3C, fig. 3A-3C show graphs of root mean square error for the various cracked products over different lag time windows, provided in accordance with some embodiments of the present invention. As shown in fig. 3A-3C, in some embodiments of the present invention, dry gas and acid gas have similar time-lag characteristics, both reaching a minimum root mean square error at a 20-minute lag time window. Gasoline and liquefied gas have similar time-lag characteristics, both reaching a minimum root mean square error at a 30-minute lag time window. Diesel and slurry oil have similar time-lag characteristics, both reaching a minimum root mean square error at a 10-minute lag time window. That is, in this embodiment each cracked product j corresponds to one lag time window Δt_ij: 10 minutes for diesel and slurry oil, 20 minutes for dry gas and acid gas, and 30 minutes for gasoline and liquefied gas.
Referring further to fig. 4, fig. 4 is a schematic diagram illustrating an architecture of an adaptive weighted long short term memory network according to some embodiments of the invention.
As shown in fig. 4, in some embodiments of the present invention, the adaptive-weight long short-term memory network may preferably include a forward-estimation LSTM unit and a backward-estimation LSTM unit. Both units may have a model depth of 3 layers, with 50, 100 and 200 neurons in the three hidden layers respectively, and the output layer is set as a fully connected layer.
In training the forward-estimation and backward-estimation LSTM units, the offline training device may first select ReLU as their activation function, set the number of training iterations to 50, set the batch size for batch training to 64, and train the model multiple times to reach the expected model index. The batch size indicates the amount of data input in each batch. Further, to prevent the model from overfitting, the offline training device may also introduce a dropout parameter and set it to 0.2.
In determining the lag time window, the offline training device may perform forward time series analysis on the candidate data set via the forward-estimation LSTM unit to determine a forward analysis result h_t, and backward time series analysis on the candidate data set via the backward-estimation LSTM unit to determine a backward analysis result h′_t. The offline training device may then, according to a predetermined weighting hyper-parameter (e.g., 0.5), compute the weighted sum of the forward and backward analysis results, i.e., o_t = g(ω_forward h_t + ω_backward h′_t), to determine the calculated product-yield data of the candidate data set. Then, from the calculated product-yield data o_t and the actual product-yield data of the candidate data set, the offline training device can determine the root mean square error of the candidate data set and determine the lag time window ΔT of each production data item of the catalytic cracking unit from the candidate time window with the smallest root mean square error.
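The weighted fusion o_t = g(ω_forward h_t + ω_backward h′_t) just described can be sketched as follows (the choice g = tanh and the fixed 0.5/0.5 weights are illustrative; the patent only says the weights follow a predetermined hyper-parameter such as 0.5):

```python
import numpy as np

def fuse_bidirectional(h_fwd, h_bwd, w_fwd=0.5, w_bwd=0.5, g=np.tanh):
    """Adaptive-weight fusion o_t = g(w_f * h_t + w_b * h'_t) of the forward
    and backward LSTM outputs; weights and g are illustrative choices."""
    return g(w_fwd * np.asarray(h_fwd) + w_bwd * np.asarray(h_bwd))

o_t = fuse_bidirectional([0.2, -0.4], [0.2, 0.4])
```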
A traditional long short-term memory network can only process data sequentially: for the production data of the catalytic cracking unit it can obtain only the preceding time-series information, not the subsequent time-series information. In contrast, the present invention improves the LSTM network by configuring a forward-estimation LSTM unit and a backward-estimation LSTM unit and automatically adjusting the output weights of the two networks according to the output result, thereby forming an RNN model over the forward and backward time series. The adaptive-weight long short-term memory network adopted in the present invention can thus take into account not only past production data information but also future data information, and can therefore more effectively extract from the catalytic cracking unit the information useful for predicting the product yield.
Referring to fig. 1 and 5 in combination, fig. 5 is a schematic flow chart illustrating training of a convolutional neural network according to some embodiments of the present invention.
As shown in fig. 1 and fig. 5, after determining the lag time window ΔT of each production data item, the offline training device may adjust the time dimension of the production data of each corresponding cracked product j in the training sample set according to the lag time window ΔT to determine a lag sample set for each training sample set. The offline training device can then construct a convolutional neural network to be trained and train it on the lag sample sets, so that the convolutional neural network acquires the function of predicting the product yield from input production data.
A convolutional neural network (CNN) is a neural network with a multi-layer structure for processing two-dimensional data, in which each layer consists of several two-dimensional planes and each plane consists of several independent neurons. The structure of the convolutional neural network was influenced to some extent by the earlier time-delay neural network, which can process time-series signals such as speech and shares weights along the time dimension to reduce the computational complexity of training. A convolutional neural network typically contains three types of network layers: convolutional layers, pooling layers, and fully connected layers. A convolutional layer consists of different feature maps; each neuron in a feature map is connected to a local region of the previous layer through a set of weights. All neurons in one feature map share the same convolution kernel, and different feature maps in one convolutional layer have different convolution kernels. The convolutional layer is the key component of a convolutional neural network, and the local connection and weight sharing mentioned above are its most prominent features. An image has certain inherent, fixed characteristics, and the statistical characteristics of one region may be the same as those of other regions. The convolution operation is in fact a process of extracting local features: a locally learned feature is used as a filter that scans the whole receptive field, yielding different feature-activation values at each position.
For the neuron nodes in one feature map of a convolutional layer, local features are extracted at different positions of the previous layer's feature map, while for a single neuron node the position at which it extracts local features from the previous layer's feature map remains fixed.
The pooling layer is typically located after a convolutional layer, with each neuron in the pooling layer connected to a local region of the previous layer. Typical pooling operations are max pooling and average pooling, which take respectively the maximum and the average of all neurons in a local region of the previous layer; a max- or average-pooling layer has no trainable parameters. When a convolutional neural network contains several convolutional and pooling layers, it usually ends with one or more fully connected layers that synthesize the features learned by the preceding part of the network. As the name implies, each neuron in a fully connected layer is connected to all neurons in the previous layer.
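As a minimal illustration of the two pooling operations just described (a sketch; the 2×2 window, stride 2, and the toy feature map are assumptions for the example):

```python
import numpy as np

def average_pool_2x2(feature_map):
    """2x2 average pooling with stride 2 (no trainable parameters, as noted
    above); `feature_map` is an (H, W) array with even H and W."""
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def max_pool_2x2(feature_map):
    """2x2 max pooling with stride 2."""
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
avg = average_pool_2x2(fm)
mx = max_pool_2x2(fm)
```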
In some embodiments, the convolutional neural network employed by the present invention is configured as a structure of convolutional layer-pooling layer-fully-connected layer, wherein the kernel sizes of the two convolutional layers and the two pooling layers are [5,5] and [1,2,2,1], the numbers of feature maps in the first and second convolutional layers of the convolutional neural network are 8 and 16, respectively, and the number of neurons in the fully-connected layer is 30.
In training the convolutional neural network, the offline training device may first select tanh as the activation function of the convolutional neural network, set the number of training epochs to 500, set the sample batch size to 50, select average pooling, use the mean square error (MSE) as the loss function of the convolutional neural network, set the parameter optimizer to Adam, and set the initial learning rate to 0.001. Further, to prevent the model from overfitting, the offline training device may also introduce a dropout parameter and set it to 0.5.
Thereafter, the offline training device may input the time-lag-processed production data of each lag sample set into the convolutional neural network to be trained, taking the production data of the 33 production-variable dimensions as input and the standard output data of the corresponding 6 yield dimensions as output standard values. When the number of training epochs reaches the predetermined value (i.e., 500), the offline training device may stop training and select the parameters with the best model evaluation indices (e.g., root mean square error RMSE and mean absolute error MAE) as the final parameters of the convolutional neural network model. The trained convolutional neural network then has the function of predicting the product yield from input production data.
By adopting the model structure of adaptive-weight long short-term memory network plus convolutional neural network, the invention can first use the adaptive-weight long short-term memory network for forward and backward time series analysis, thereby effectively analyzing the reaction hysteresis of the catalytic cracking unit and effectively improving the prediction accuracy of the convolutional neural network. The model based on this structure is therefore of reference value for oil-refining production processes with complex reactions, many variables, and strong nonlinearity and time lag, and offers strong generalizability, more accurate regression fitting, and good extrapolation capability.
Further, after the above training process is completed, the offline training device may also use the reserved last 30% of the samples as test samples to test the lag time window determined by the time series analysis and the pre-trained convolutional neural network, so as to verify the accuracy of the present invention in predicting the product yield of the catalytic cracking process.
Please refer to table 1, fig. 6 and fig. 7. Table 1 illustrates error data for product yield provided in accordance with some embodiments of the present invention. Figure 6 illustrates a comparative schematic of predicted results for various product yields provided according to some embodiments of the invention. Figure 7 illustrates a graph of the trend of the root mean square error of the gasoline product yield provided in accordance with some embodiments of the present invention.
TABLE 1
Product yield         MAE     RMSE
Gasoline yield        1.36    1.73
Liquefied gas yield   0.98    1.25
Diesel yield          1.07    1.36
Slurry oil yield      0.41    0.59
Dry gas yield         0.10    0.13
Acid gas yield        0.04    0.05
As shown in table 1, fig. 6 and fig. 7, by adopting the model structure of adaptive-weight long short-term memory network plus convolutional neural network, the present invention obtains small mean absolute error MAE and root mean square error RMSE, thereby improving the prediction accuracy of the product yield.
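The two metrics reported in Table 1 can be computed as follows (the numerical check uses made-up values, not the data behind Table 1):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, as reported in Table 1."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean square error, as reported in Table 1."""
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(d ** 2)))

# toy check on made-up numbers
err_mae = mae([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
err_rmse = rmse([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
```

Note that RMSE is always at least as large as MAE on the same data, which is consistent with every row of Table 1.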
The working principle of the above online prediction device will be described below with continued reference to some steps of the online prediction phase. The online prediction device includes, but is not limited to, an industrial control computer integrated in the catalytic cracking unit, and/or a personal computer (PC), workstation, tablet computer or smartphone communicatively connected to the catalytic cracking unit, and/or a cloud server based on the industrial internet of things of the chemical plant. Those skilled in the art will appreciate that these online prediction steps are merely non-limiting embodiments of the present invention, intended to clearly illustrate its broad concepts and to provide specific schemes convenient for public implementation, not to limit the overall function or overall operation of the online prediction device. Likewise, the online prediction device is only a non-limiting embodiment of the present invention and does not limit the execution subject of the online prediction steps.
Referring to fig. 8, fig. 8 is a flow chart illustrating an on-line prediction stage of a product yield prediction method according to some embodiments of the present invention.
As shown in fig. 8, in the on-line prediction stage of the product yield prediction method of the catalytic cracking process, the on-line prediction apparatus may first acquire production data for a continuous time from the catalytic cracking apparatus of the above-described oil refinery according to a time interval of one minute to construct a production data set. In some embodiments, the production data may correspond to 33 production variable dimensions of data used in the training phase, relating to, but not limited to, at least one cracked product of diesel, slurry oil, gasoline, liquefied gas, dry gas, and sour gas.
Further, in some embodiments, after acquiring the production data of the catalytic cracking unit, the online prediction device may preferably perform preprocessing operations such as removing outliers, normalizing the data, and filling in vacancy values, so as to eliminate erroneous data, noise and outliers and perform any necessary conversions.
Specifically, if a certain set of production data has missing data, the online prediction device can remove the set of production data from the data set, and eliminate dirty data caused by device shutdown or network technology failure in the catalytic cracking process, so as to improve the accuracy of the prediction result. In addition, the online prediction device can also adopt a 3Sigma criterion to remove outliers in the data set and eliminate data with serious measurement errors in the data set so as to improve the accuracy of the prediction result. In addition, for the condition that the production data exceed the upper limit and the lower limit of the variable, the on-line prediction device can automatically replace and fill the production data into the corresponding upper limit value or lower limit value, so that the influence of the abnormal value on the training result is reduced, and the accuracy of the prediction result is improved.
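The 3-sigma outlier removal and the limit-clipping described above can be sketched as follows (function names and the toy reading series are illustrative; the patent applies these operations per variable across the data set):

```python
import numpy as np

def drop_3sigma_outliers(x):
    """Keep only values within 3 standard deviations of the mean
    (the 3-sigma criterion mentioned above)."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    return x[np.abs(x - mu) <= 3.0 * sd]

def clip_to_limits(x, lower, upper):
    """Replace values beyond a variable's upper/lower limits with the limit."""
    return np.clip(np.asarray(x, dtype=float), lower, upper)

readings = np.array([1.0] * 99 + [1000.0])   # one gross measurement error
cleaned = drop_3sigma_outliers(readings)
clipped = clip_to_limits([-5.0, 50.0, 200.0], lower=0.0, upper=100.0)
```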
Further, consistent with the normalization and denormalization used during training, the online prediction device can also normalize each input datum according to the formula
x′ = (x − x_min) / (x_max − x_min)
which normalizes each input datum to the interval [0, 1]. This resolves the problem that the input data have different physical meanings and obvious differences in magnitude, and makes the input data conform better to the input characteristics of the training sample set, improving the accuracy of the prediction result.
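A column-wise sketch of the min-max normalization formula above (the assumption here is that each column's minimum and maximum come from the same data being normalized; in practice the training set's statistics would be reused online):

```python
import numpy as np

def min_max_normalize(x):
    """Column-wise min-max scaling x' = (x - x_min) / (x_max - x_min),
    mapping each input variable to [0, 1] as in the formula above."""
    x = np.asarray(x, dtype=float)
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min)

norm = min_max_normalize([[0.0, 10.0],
                          [5.0, 20.0],
                          [10.0, 30.0]])
```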
Furthermore, to avoid accidental errors, the online prediction device may select five sub-data sets with stable reaction-temperature behavior from the data set according to the fluctuation of the reaction temperature, and perform time-lag processing and feature fusion on each sub-data set separately to obtain the product-yield prediction corresponding to each sub-data set. The online prediction device may then average the product-yield predictions of the sub-data sets to determine the product yield of the catalytic cracking unit.
After the preprocessing is completed, the online prediction device can arrange the production data into a t × m matrix, according to the time dimension t of the acquisition time and the variable dimension m (e.g., 33) of the production data, to construct a production data set:
X = [ x_{1,1} … x_{1,m} ]
    [    ⋮        ⋮    ]
    [ x_{t,1} … x_{t,m} ]
As shown in fig. 8, after the production data set of the catalytic cracking unit is constructed, the on-line prediction unit may generate a lag data set of the production data set from the lag time window Δ T determined through the time series analysis.
In some embodiments, the lag time window Δ T may be determined by time series analysis via the adaptive weighted long short term memory network, but is not limited thereto. Optionally, in other embodiments, a skilled person may also perform time series analysis on the production data of the catalytic cracking apparatus by using other neural network models or other online time series analysis methods based on the prior art in the field to obtain the corresponding lag time window Δ T, which is not described herein again.
After determining the lag time window Δ T of the production data, the on-line prediction apparatus may adjust a time dimension of at least one corresponding production data in the production data set according to a lag time indicated by the lag time window Δ T to generate a lag data set.
Further, for an application scenario in which the catalytic cracking unit involves multiple cracked products such as diesel oil, slurry oil, gasoline, liquefied gas, dry gas and acid gas, whose time-lag characteristics differ, the lag time window ΔT may preferably be configured with multiple different lag time elements Δt_j, i.e. ΔT = {Δt_1, Δt_2, Δt_3, Δt_4, Δt_5, Δt_6}, where each cracked product corresponds to one lag time element Δt_j. When generating the lag data set, the online prediction device may, according to the lag time indicated by each lag time element Δt_j of the lag time window ΔT, individually adjust the time dimension of the production data set for each corresponding cracked product j to generate the lag data set:
(The original formula image here shows the lag data set, i.e. the production data matrix with the columns relating to each cracked product j shifted along the time dimension by Δt_j.)
Specifically, for the embodiments shown in figs. 3A-3C, the online prediction device may, according to the lag time indicated by each lag time element Δt_j of the lag time window ΔT, shift the production data relating to dry gas and acid gas back by 20 minutes, the production data relating to gasoline and liquefied gas back by 30 minutes, and the production data relating to diesel and slurry oil back by 10 minutes to generate the lag data set.
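The per-product time-dimension adjustment can be sketched as shifting each column of the t × m production matrix by its own lag and keeping only the rows for which every shifted column still has data. The per-column lags, the shift direction, and the column-to-product mapping here are one plausible reading of the adjustment described above, not the patent's exact implementation:

```python
import numpy as np

def apply_lag(data, lag_by_column):
    """Shift each column of a t x m production matrix by its own lag
    (in sampling steps; 1 step = 1 minute here), then keep only the rows
    for which every shifted column still has data."""
    data = np.asarray(data, dtype=float)
    t = data.shape[0]
    max_lag = max(lag_by_column)
    cols = [data[lag:t - max_lag + lag, j] for j, lag in enumerate(lag_by_column)]
    return np.stack(cols, axis=1)

# three variable columns lagged by 10, 20 and 30 steps respectively
data = np.arange(120, dtype=float).reshape(40, 3)
lagged = apply_lag(data, [10, 20, 30])
```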
As shown in fig. 8, after completion of the time lag process and generation of the lag data set, the on-line prediction unit may perform feature fusion on the lag data set to predict the product yield of the catalytic cracking unit.
In some embodiments, the above feature-fusion operation may be performed via the convolutional neural network trained during the offline training phase. Specifically, the online prediction device may input the t × m lag data set into the pre-trained convolutional neural network, perform feature fusion on the lag data set via the network, and obtain at the output of the network the product yield Y = (y_1, y_2, y_3, y_4, y_5, y_6) of each cracked product, where each output dimension of the product yield Y corresponds to one catalytic cracking product.
It will be appreciated by those skilled in the art that the above feature fusion operation based on the pre-trained convolutional neural network is only a non-limiting embodiment of the present invention, which is intended to clearly illustrate the main concept of the present invention and provide a specific solution for the implementation by the public, and is not intended to limit the scope of the present invention.
Optionally, in other embodiments, a person skilled in the art may also select another classification network model to implement feature fusion of each variable dimension in the lag data set, and output a corresponding product yield prediction result, which is not described herein again.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
Those of skill in the art would understand that information, signals, and data may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits (bits), symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The product yield prediction apparatus described in the above embodiments can be implemented by a combination of software and hardware. It is to be understood, however, that the product yield prediction apparatus may also be implemented in software or hardware alone. For a hardware implementation, the product yield prediction apparatus may be implemented in one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic devices designed to perform the functions described herein, or a selected combination thereof. For software implementations, the product yield prediction apparatus may be implemented by separate software modules, such as program modules (processes) and function modules (functions), running on a common chip, each of which performs one or more of the functions and operations described herein.
The various illustrative logical modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. A method for predicting the product yield of a catalytic cracking process, comprising the steps of:
acquiring production data of a catalytic cracking unit in continuous time to construct a production data set;
generating a lag data set of the production data set from a lag time window determined via time series analysis, wherein the lag time window indicates a lag time of at least one of the production data in the production data set; and
feature fusion is performed on the lag data set to predict product yield of the catalytic cracking unit.
2. The prediction method of claim 1, wherein prior to constructing the production data set, the prediction method further comprises the step of:
preprocessing the acquired production data, wherein the preprocessing comprises removing abnormal points, standardizing the data, and/or filling vacancy values.
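As an illustrative sketch (not part of the claimed method) of the preprocessing in claim 2 — assuming the production data sit in a pandas DataFrame and using a hypothetical z-score threshold for abnormal points — the three operations could look like:

```python
import pandas as pd

def preprocess(df: pd.DataFrame, z_thresh: float = 3.0) -> pd.DataFrame:
    out = df.copy()
    # remove abnormal points: mask values more than z_thresh standard
    # deviations from the column mean
    z = (out - out.mean()) / out.std(ddof=0)
    out = out.mask(z.abs() > z_thresh)
    # fill vacancy values by linear interpolation along the time axis
    out = out.interpolate(limit_direction="both")
    # standardize each variable to zero mean and unit variance
    return (out - out.mean()) / out.std(ddof=0)
```

The `preprocess` helper and its threshold are assumptions for illustration; the patent does not specify which outlier test or imputation rule is used.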
3. The prediction method of claim 1, wherein the step of acquiring production data of the catalytic cracking unit in continuous time to construct a production data set comprises:
continuously acquiring multiple groups of production data according to a preset time interval, wherein each group of production data comprises data of multiple variable dimensions at the same moment; and
constructing the production data set according to the time dimension of the acquisition moments and the variable dimensions.
4. The prediction method of claim 3, wherein the step of generating a lag data set of the production data set from a lag time window determined via time series analysis comprises:
adjusting a time dimension of at least one of the production data in the production data set according to a lag time indicated by the lag time window to generate the lag data set.
5. The prediction method of claim 4, wherein the catalytic cracking process involves a plurality of cracked products, the production data includes variable dimensions for each of the cracked products, each of the cracked products corresponds to one of the lag time windows, the step of generating the lag data set of the production data set via the lag time windows determined by the time series analysis further comprises:
adjusting a time dimension of production data in the production data set for each corresponding cracked product, respectively, according to a lag time indicated by each of the lag time windows to generate the lag data set.
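The per-product time-dimension adjustment of claims 4 and 5 amounts to shifting each product's variable columns by that product's lag, then keeping only fully aligned rows. A minimal pandas sketch (the `build_lag_data_set` helper and the column-naming convention are hypothetical):

```python
import pandas as pd

def build_lag_data_set(df: pd.DataFrame, lag_windows: dict) -> pd.DataFrame:
    """Shift each product's variable columns by its lag (in sampling steps)."""
    lagged = df.copy()
    for product, lag in lag_windows.items():
        # assume columns are prefixed with the product name, e.g. "gasoline_T"
        cols = [c for c in df.columns if c.startswith(product)]
        # align each product's variables with the yield they influence
        # `lag` sampling steps later
        lagged[cols] = df[cols].shift(lag)
    return lagged.dropna()  # drop rows left incomplete by the shifting
```

With sampling at a fixed preset interval (claim 3), a lag expressed in time units converts to an integer number of rows to shift.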
6. The prediction method of claim 5, wherein the plurality of cracked products comprises at least one of diesel, slurry oil, gasoline, liquefied gas, dry gas, and sour gas.
7. The prediction method of claim 5, wherein the step of performing feature fusion on the lag data set to predict the product yield of the catalytic cracking unit comprises:
inputting the lag data set into a pre-trained convolutional neural network, performing feature fusion on the lag data set through the convolutional neural network, and mapping to obtain the product yield of each cracked product.
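To illustrate the shape of the feature-fusion-and-mapping step in claim 7 only — this is a hand-rolled 1-D convolution plus linear read-out, not the patented network, and all names and dimensions are assumptions:

```python
import numpy as np

def conv1d_features(x, kernels):
    # x: (n_vars,) one lagged sample; kernels: (n_filters, k) learned filters
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)  # (n_vars - k + 1, k)
    return np.maximum(windows @ kernels.T, 0.0)  # ReLU-activated fused features

def predict_yields(x, kernels, w_out, b_out):
    feats = conv1d_features(x, kernels)
    return feats.ravel() @ w_out + b_out  # linear map to per-product yields
```

In practice this role would be played by a trained convolutional network (e.g. in PyTorch or TensorFlow); the sketch only shows how variable-dimension features are fused and mapped to one yield value per cracked product.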
8. The prediction method of claim 1, wherein prior to generating the lag dataset of the production dataset from the lag time window determined via time series analysis, the prediction method further comprises the steps of:
collecting production data and actual product yield data of the catalytic cracking unit in a plurality of continuous time periods to construct a plurality of training sample sets; and
constructing an adaptive weight long-short term memory network to be trained, and performing time series analysis on each training sample set to determine the lag time window.
9. The prediction method of claim 8, wherein the step of performing a time series analysis on each of the training sample sets to determine the lag time window comprises:
determining a plurality of candidate time windows, and respectively generating a plurality of candidate data sets of the training sample set according to the candidate time windows, wherein each candidate time window indicates one candidate lag time;
performing time series analysis on each candidate data set through the adaptive weight long-short term memory network to determine the root mean square error of each candidate data set respectively; and
determining the lag time window as the candidate time window with the minimum root mean square error.
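The window selection in claim 9 reduces to evaluating each candidate lag and keeping the one with the smallest RMSE. A sketch with a stand-in evaluation callable (in the patent this would be the trained adaptive weight LSTM run on the candidate data set):

```python
def select_lag_window(candidate_lags, evaluate_rmse):
    # evaluate_rmse(lag) stands in for building the candidate data set
    # with that lag and scoring it with the adaptive weight LSTM
    rmse_by_lag = {lag: evaluate_rmse(lag) for lag in candidate_lags}
    return min(rmse_by_lag, key=rmse_by_lag.get)
```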
10. The prediction method of claim 9, wherein the adaptive weight long-short term memory network comprises a forward LSTM unit and a backward LSTM unit, and wherein the step of performing a time series analysis on each of the candidate data sets via the adaptive weight long-short term memory network to determine the root mean square error of each of the candidate data sets comprises:
performing forward time series analysis on the candidate data set via the forward LSTM unit to determine a forward analysis result;
performing backward time series analysis on the candidate data set via the backward LSTM unit to determine a backward analysis result;
performing a weighted summation of the forward analysis result and the backward analysis result according to a preset weighting hyperparameter to determine calculated product yield data of the candidate data set; and
determining the root mean square error of the candidate data set from the calculated product yield data and the actual product yield data of the candidate data set.
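Claim 10's weighted summation and RMSE computation can be sketched with a scalar weighting hyperparameter (the name `alpha` is an assumption; the patent only says the weight is preset):

```python
import math

def fuse_and_score(forward_pred, backward_pred, actual, alpha=0.5):
    # alpha weights the forward result; (1 - alpha) weights the backward result
    fused = [alpha * f + (1.0 - alpha) * b
             for f, b in zip(forward_pred, backward_pred)]
    # RMSE between calculated and actual product yield data
    rmse = math.sqrt(sum((y - t) ** 2 for y, t in zip(fused, actual)) / len(actual))
    return fused, rmse
```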
11. The prediction method of claim 8, wherein the feature fusion is performed based on a pre-trained convolutional neural network, wherein the step of training the convolutional neural network comprises:
constructing a convolutional neural network to be trained;
determining a lag sample set of each of the training sample sets according to the lag time window; and
training the convolutional neural network by taking the variable-dimension data of the production data in each lag sample set as input and the corresponding yield-dimension data as the output reference value, so that the trained convolutional neural network can predict the product yield from input production data.
12. An apparatus for predicting the product yield of a catalytic cracking process, comprising:
a memory; and
a processor coupled to the memory and configured to implement the method of predicting product yield of a catalytic cracking process of any of claims 1-11.
13. A computer readable storage medium having stored thereon computer instructions, wherein the computer instructions, when executed by a processor, implement a method for predicting product yield of a catalytic cracking process according to any one of claims 1 to 11.
CN202210082237.3A 2022-01-24 2022-01-24 Method and device for predicting product yield in catalytic cracking process Pending CN114492988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210082237.3A CN114492988A (en) 2022-01-24 2022-01-24 Method and device for predicting product yield in catalytic cracking process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210082237.3A CN114492988A (en) 2022-01-24 2022-01-24 Method and device for predicting product yield in catalytic cracking process

Publications (1)

Publication Number Publication Date
CN114492988A (en) 2022-05-13

Family

ID=81474365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210082237.3A Pending CN114492988A (en) 2022-01-24 2022-01-24 Method and device for predicting product yield in catalytic cracking process

Country Status (1)

Country Link
CN (1) CN114492988A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024021536A1 (en) * 2022-07-27 2024-02-01 华东理工大学 Catalytic cracking unit key index modeling method based on time sequence feature extraction


Similar Documents

Publication Publication Date Title
Yuan et al. Soft sensor model for dynamic processes based on multichannel convolutional neural network
CN112580263B (en) Turbofan engine residual service life prediction method based on space-time feature fusion
Salehi et al. On-line analysis of out-of-control signals in multivariate manufacturing processes using a hybrid learning-based model
CN109992921B (en) On-line soft measurement method and system for thermal efficiency of boiler of coal-fired power plant
Yu et al. Control chart recognition based on the parallel model of CNN and LSTM with GA optimization
Chen et al. Aero-engine remaining useful life prediction method with self-adaptive multimodal data fusion and cluster-ensemble transfer regression
CN114944203A (en) Wastewater treatment monitoring method and system based on automatic optimization algorithm and deep learning
CN115188429A (en) Catalytic cracking unit key index modeling method integrating time sequence feature extraction
Buragohain Adaptive network based fuzzy inference system (ANFIS) as a tool for system identification with special emphasis on training data minimization
CN114492988A (en) Method and device for predicting product yield in catalytic cracking process
Kumar Remaining useful life prediction of aircraft engines using hybrid model based on artificial intelligence techniques
WO2022098601A1 (en) Autonomous fluid management using fluid digital twins
CN109960146A (en) The method for improving soft measuring instrument model prediction accuracy
Zhu et al. A novel intelligent model integrating PLSR with RBF-kernel based extreme learning machine: Application to modelling petrochemical process
Wang et al. A new input variable selection method for soft sensor based on stacked auto-encoders
CN112342050B (en) Method and device for optimizing light oil yield of catalytic cracking unit and storage medium
CN115482877A (en) Fermentation process soft measurement modeling method based on time sequence diagram network
CN113111588A (en) NO of gas turbineXEmission concentration prediction method and device
Yu et al. Fault diagnosis of analog circuit based CS_SVM algorithm
Guo et al. Modelling for multi-phase batch processes using steady state identification and deep recurrent neural network
CN113420498B (en) AI modeling method of atmospheric and vacuum distillation unit
Wang A new variable selection method for soft sensor based on deep learning
Yoon et al. A Study on the Remaining Useful Life Prediction Performance Variation based on Identification and Selection by using SHAP
Yang et al. A Variable Attention-Based Gated GRU Approach for Soft Sensor Models
CN115481715A (en) Product quality index prediction method and system based on AM-GRU-BPNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination