CN116821695B - Semi-supervised neural network soft measurement modeling method - Google Patents

Semi-supervised neural network soft measurement modeling method Download PDF

Info

Publication number
CN116821695B
CN116821695B CN202311099248.3A CN202311099248A CN116821695B CN 116821695 B CN116821695 B CN 116821695B CN 202311099248 A CN202311099248 A CN 202311099248A CN 116821695 B CN116821695 B CN 116821695B
Authority
CN
China
Prior art keywords
data
formula
tag
value
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311099248.3A
Other languages
Chinese (zh)
Other versions
CN116821695A (en
Inventor
王平
李雪静
尹贻超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202311099248.3A priority Critical patent/CN116821695B/en
Publication of CN116821695A publication Critical patent/CN116821695A/en
Application granted granted Critical
Publication of CN116821695B publication Critical patent/CN116821695B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0895Weakly supervised learning, e.g. semi-supervised or self-supervised learning

Abstract

The invention discloses a semi-supervised neural network soft measurement modeling method, which belongs to the technical field of industrial process detection, and the method extracts dynamic and nonlinear characteristics of original data through a neural network; meanwhile, a point estimation item is added into a loss function of an upper limit estimation method and a lower limit estimation method, the width of a section, the coverage rate and the mean square error of point estimation are balanced, and the loss function is minimized through a particle swarm optimization algorithm so as to generate a compact section and a more accurate estimated value; then, adding model training through the label-free data with high confidence coefficient of double standard selection to update the label data set; and finally, carrying out iterative optimization on the output upper and lower limit weights and the estimated value weights. The semi-supervised learning framework provided by the invention not only can effectively quantify the influence of uncertain factors on the model, but also can fully utilize the supervision information contained in the tag data and assist in improving the generalization capability and reliability of the soft measurement model by using the structural information contained in the non-tag data.

Description

Semi-supervised neural network soft measurement modeling method
Technical Field
The invention belongs to the technical field of industrial process detection, and particularly relates to a semi-supervised neural network soft measurement modeling method.
Background
Modern industrial production processes are rapidly developing in the direction of digitization and intellectualization, and at the same time, the pursuit of product quality control is also becoming higher and higher. In the actual production process, some variables which are easy to measure online in real time, such as temperature, flow and the like, are called process variables, however, some key quality variables which are closely related to the product quality and are difficult to realize direct measurement, such as product concentration, components, physical parameters and the like, can only be obtained by offline analysis of an online analysis instrument or laboratory, and have the problems of long measurement period, strong feedback hysteresis, complex maintenance and the like. The soft measurement technique was developed in such a background, and the basic idea is to implement real-time estimation of a quality variable by establishing a mathematical regression model between a process variable that is easy to measure in real-time on line and a quality variable that is difficult to measure in real-time on line. Compared with laboratory test analysis or on-line component instrument, the technology has the advantages of low maintenance cost, timely measurement and the like, so that the technology has wide application in numerous industrial fields of oil refining, chemical industry, metallurgy, pharmacy and the like.
At present, a machine learning modeling method represented by an artificial neural network has many successful applications in soft measurement technology. These modeling methods generally only yield a single determined point estimate, however, the value of the point estimate is limited and its reliability cannot be effectively evaluated due to uncertainty of the modeling data (sampling errors, measurement errors, noise, etc.) and limitations of the model. For this problem, the interval estimation (Prediction Interval, PI) may fall within an estimated interval defined by the upper and lower boundaries of the interval at a certain confidence level, thereby quantifying the estimation uncertainty and providing more information to the decision maker. For example, bayesian techniques are used to construct PI based neural networks; chryssolouris et al (1996) propose delta technology; nix et al (1994) proposed a method of estimating PI based on mean variance, but the above method requires high data quality, high computational cost, and questionable PI quality constructed by minimizing an error-based cost function. Thus Khosravi et al (2011) propose an upper and lower bound estimation (Lower Upper Bound Estimation, LUBE) method to obtain a high quality PI with narrow width and large coverage, a new cost function is proposed, while considering the interval width and coverage, converting the multi-objective problem into a single-objective optimization problem by constructing a neural network with two outputs for estimating the interval boundaries. The LUBE method does not need other information about upper and lower bounds, and is realized by only minimizing the proposed objective function; in order to optimize the objective function, a particle swarm optimization algorithm, a simulated annealing or gradient descent method and the like are generally adopted, and furthermore, wang et al (2020) propose an adaptive optimization method based on a construction interval; simhayev et al (2022) propose a method combining interval and point estimation for regression, further perfecting the interval estimation method system.
In general, to obtain a generalizable soft measurement model, it is necessary to train with a large amount of input-output data covering the main operating conditions of the process. In particular, this is true for machine learning models such as artificial neural networks that are complex in structure and numerous in parameters. However, for practical soft measurement modeling problems, the sampling rate of the quality variable is typically much lower than the sampling frequency of the process variable. This results in only a small portion of the modeling data actually collected having both input and output values (tag data), while the vast majority of the data has only input values, with the corresponding output values being missing (no tag data). If the model is trained by adopting a supervised learning mode, namely only the label data is used for modeling, the effect of no-label data is ignored, the model overfitting phenomenon easily occurs under the condition that the label data is scarce, the reliability of the model is not ensured, and the requirement of practical application is difficult to meet. The unlabeled data contains abundant data structure information, and if the information contained in the unlabeled data can be reasonably utilized, the performance of the regression model is expected to be obviously improved. Therefore, a semi-supervised learning method, that is, a small amount of tag data and a large amount of non-tag data are simultaneously utilized to build a soft measurement mathematical model, has been receiving more and more attention in recent years.
In summary, the existing soft measurement modeling technology has the following problems: the established soft measurement model can only realize point estimation generally, but cannot effectively quantify the influence of various uncertainty factors on the model; a soft measurement model is established by adopting a supervised learning mode, and modeling performance is seriously dependent on the quantity and quality of tag data; a soft measurement model is established by adopting a semi-supervised learning mode, the mining depth of the information contained in the label-free data is insufficient, and the performance of the model is difficult to effectively improve.
Disclosure of Invention
In order to solve the problems, the invention provides a semi-supervised neural network soft measurement modeling method, dynamic and nonlinear information in process data is effectively extracted on the premise of ensuring algorithm calculation efficiency, meanwhile, label-free data with high confidence is selected through double criteria to assist label data to carry out model training, the information of the label-free data is fully utilized in the iterative process, meanwhile, interval and point estimation is realized, and model performance is improved.
The technical scheme of the invention is as follows:
a semi-supervised neural network soft measurement modeling method comprises the following steps:
step 1, acquiring a quality variable assay analysis value through offline analysis of a laboratory, acquiring a process variable measurement value through an industrial sensor, and carrying out normalization processing on the quality variable assay analysis value and the process variable measurement value;
step 2, constructing a neural network model with three outputs of an interval upper boundary, an interval lower boundary and an upper and lower limit relative weight coefficient, and training the output upper and lower limit weights and the estimated value weights of an initial model by using a label data set and a particle swarm optimization algorithm;
step 3, estimating a predictive label of the unlabeled data by using the output upper and lower limit weights and the estimated value weights of the model, and selecting high-confidence unlabeled data to form a candidate data set;
step 4, taking the interval width as a selection criterion, further selecting data lower than a width threshold from the candidate data sets, adding the data into the tag data sets, and updating the tag data sets; meanwhile, updating model parameters by using a particle swarm optimization algorithm;
step 5, repeating the step 3 and the step 4 until the maximum iteration times are reached, and outputting final model parameters at the moment;
and step 6, collecting test data, carrying out normalization processing to obtain a normalized test set, obtaining a data set after dimension expansion through a neural network, and finally estimating the test set by using training set model parameters.
Further, the specific process of step 1 is as follows:
step 1.1, recording the collected initial quality variable assay analysis value set asSet->Each initial quality variable assay analysis value of (1) is tag data,/v>Is->Personal tag data->For the serial number of the tag data, +.>For the total number of tag data, +.>Transpose the operator for the matrix; record the collected initial set of process variable measurements asSet->Each of the initial process variable measurements is training data,/>Is->Training data->For serial number of training data->For the total number of training data, +.>Feature dimension for the acquired data; the training data comprises label data and unlabeled data;
step 1.2, pair aggregationThe normalization process is performed according to formula (1),
(1);
in the method, in the process of the invention,normalized->The individual quality variables are assayed for analysis of the value data,representing the normalized quality variable assay analysis value set as a training real label value; />Representation set->Maximum value of>Representation set->Is the minimum value of (a);
similarly, pairProcessing in the same way as in equation (1) to obtain a normalized set of process variable measurements +.>;/>Normalized->Training data;
step 1.3, normalizing the measured value data of the process variableMapping to augmented data matrix via neural network,/>Is the +.o after the mapping by the neural network>Personal tag data->For feature dimension mapped by neural network, +.>
Further, the specific process of step 2 is as follows:
step 2.1, setting the tag data set as,/>Is the +.o after the mapping by the neural network>Personal tag data->To train the true tag value, the normalized +.>Individual quality variable assay analysis value data, using the true tag value +.>Randomly adding and subtracting a value as a preset upper limit and a preset lower limit, as shown in a formula (8) -a formula (9):
(8);
(9);
in the method, in the process of the invention,a preset upper boundary of the training set; />、/>Are all [0,1 ]]Random numbers in between;a preset lower boundary of the training set;
step 2.2, obtaining the weight of the upper limit and the lower limit of output through ridge regression on the preset upper boundary and the preset lower boundary, wherein the weight is shown in a formula (10) -a formula (11):
(10);
(11);
in the method, in the process of the invention,is->Label data set,/->An upper limit weight is output; />Is the ridge regression coefficient;is a unit matrix; />For outputting the lower limit weight;
step 2.3, calculating an upper boundary of a model interval, a lower boundary of the interval and an estimated value, wherein the specific calculation formula is formula (12) -formula (14):
(12);
(13);
(14);
in the method, in the process of the invention,is the upper boundary of the model interval; />An upper limit weight is output; />Is the lower boundary of the model interval; />For outputting the lower limit weight; />For the estimated value +.>The%>Tag data; />Is the relative weight coefficient of the upper limit and the lower limitThe%>Personal tag data->,/>Weighting the estimated value; />For the upper boundary of the model interval->The%>Tag data; />For the lower boundary of the model interval->The%>Tag data;
step 2.4, obtaining an optimal output upper and lower limit weight and an estimated value weight through a particle swarm optimization algorithm, wherein an objective function is shown as a formula (15) -a formula (17):
(15);
(16);
(17);
in the method, in the process of the invention,a first sub-objective loss function; />For the serial number of the tag data, +.>The total number of the tag data;the value rule of the label data is as follows: if->The tag data falls within the estimated interval +.>Otherwise->;/>For regularization parameters, ++>Is a preset confidence level; />A second sub-objective loss function; />A total target loss function; />To balance the super-parameters of the different sub-objective functions.
Further, in step 2.4, the particle swarm optimization algorithm is optimized by minimizing the total objective loss functionTo obtain the optimal weight as shown in formula (18) -formula (19):
(18);
(19);
in the method, in the process of the invention,is particle->In->Velocity vector in the course of a second iteration, +.>Is particle->In->A velocity vector in the iterative process; />Representing inertial weights; />、/>Respectively representing individual learning factors and group learning factors;、/>representing interval [0,1 ]]Random of differences betweenA number; />Is particle->In->Optimal position in the secondary iteration; />Is particle->In->Position vector in the iterative process of times; />Indicating that the population is at->Historical optimal positions in the iteration, namely optimal solutions in the whole particle swarm; />Indicating particle->In->Position vector in the iterative process of times; />Is particle->In->Position vector in the iterative process of times;
optimal solutionThe upper and lower limit weights and the estimated value weights are output correspondingly to the optimal outputs; optimal solution->In the form of vectors, expressed as->,/>、/>、/>The output upper limit weight, the output lower limit weight and the estimated value weight are respectively.
Further, the specific process of step 3 is as follows:
step 3.1, calculate the firstEstimated value of individual tag-free data +.>As shown in formula (20):
(20);
in the method, in the process of the invention,is the +.>Personal no tag data->Is->The upper and lower limits of the label-free data are relative weight coefficients;
the estimated values of all unlabeled data form a predictive label set,/>The total number of the unlabeled data;
step 3.2, calculating the mean square error of each label-free data on the adjacent neighbor of the label-free data, the firstPersonal no tag data->Mean square error>The calculation formula of (2) is shown as formula (21):
(21);
in the method, in the process of the invention,the +.>Personal tag data->Is->Is->The data set formed by the adjacent label data is measured by Euclidean distance; />Is->True value of the individual tag data; />Model pair representing training of tag data>Regression estimates of (a); />Model pair representing training after adding predictive tag data>Regression estimates of (a);
step 3.3, calculating a mean square error of each piece of non-tag data through a formula (21), sorting the mean square errors of all obtained non-tag data in a descending order, and selecting the data before selectingThe label-free data with high confidence constitute a candidate data set +.>,/>Is->High confidence no tag data, +.>Is->An estimate of the unlabeled data with high confidence.
Further, the specific process of step 4 is as follows:
step 4.1, calculating the candidate numberData setAs shown in equation (22):
(22);
in the method, in the process of the invention,is->Width of label-free data with high confidence;
step 4.2, calculating a width of each high confidence label-free data through a formula (22), sequencing the widths in ascending order, and selecting the previous onePersonal data->Add tag dataset +.>Updating a tag dataset,/>Is->Data below the width threshold, +.>Is->An estimate of data below the width threshold;
step 4.3, pellets represented by the following formulas (18) to (19)The subgroup optimization algorithm updates model parameters including、/>And->The method comprises the steps of carrying out a first treatment on the surface of the Optimal solution final according to particle swarm optimization algorithm>The updated model parameters can be obtained、/>And->
Further, the specific process of step 6 is as follows:
step 6.1, collecting test data online, and obtaining a normalized quality variable test set after normalization processingWherein->For testing the number of quality values in the set, +.>For normalized test set +.>A personal quality value;
step 6.2, mapping by a neural network to obtain an amplified data matrix after dimension expansion
And 6.3, estimating upper and lower limits and a true value of the test set by using parameters of the training set model, wherein the specific process is as shown in a formula (29) -a formula (31):
(29);
(30);
(31);
in the method, in the process of the invention,is the upper limit of the test set; />Is the lower limit of the test set; />An estimated value for the test data; />Is->Is a column vector of 1.
The invention has the beneficial technical effects that: aiming at the problems that the prior method cannot effectively quantify the influence of various uncertainty factors on a model and seriously depends on the quantity and quality of tag data, the invention provides a semi-supervised neural network soft measurement modeling method for simultaneously realizing interval and point estimation, which extracts dynamic and nonlinear characteristics of original data through a neural network; in addition, in order to improve the utilization rate of the unlabeled data, the unlabeled data with high confidence is selected through double criteria to be added into model training. And finally, carrying out iterative optimization on the output upper and lower limit weights and the estimated value weights. Therefore, the semi-supervised learning framework provided by the invention not only can effectively quantify the influence of uncertain factors on the model, but also can fully utilize the supervision information contained in the tag data and assist in improving the generalization capability and reliability of the soft measurement model by using the structural information contained in the non-tag data.
Drawings
FIG. 1 is a flow chart of a semi-supervised neural network soft measurement modeling method of the present invention.
Fig. 2 is a diagram showing a predicted tag data change selected in an initial state according to an embodiment of the present invention.
FIG. 3 is a graph of predicted tag data change for 10 iterations in an embodiment of the present invention.
FIG. 4 is a graph of predicted tag data change for 20 iterations in an embodiment of the present invention.
FIG. 5 is a graph of predicted tag data change for 30 iterations in an embodiment of the present invention.
Fig. 6 is a graph of the effect of the method described in the examples of the present invention on debutanizer datasets.
Detailed Description
The invention is described in further detail below with reference to the attached drawings and detailed description:
as shown in fig. 1, the invention provides a semi-supervised neural network soft measurement modeling method for simultaneously realizing interval and point estimation, which comprises the following steps:
in the off-line modeling stage, acquiring a quality variable assay analysis value through off-line analysis of a laboratory, acquiring a process variable measurement value through an industrial sensor, and carrying out normalization processing on the quality variable assay analysis value and the process variable measurement value. The specific process is as follows:
step 1.1, recording the collected initial quality variable assay analysis value set asSet->Each of the initial substances in (a)The analysis values of the quantitative variable assay are all label data, < >>Is->Personal tag data->For the serial number of the tag data, +.>For the total number of tag data, +.>Transpose the operator for the matrix; record the collected initial set of process variable measurements asSet->Each of the initial process variable measurements is training data,/>Is->Training data->For serial number of training data->For the total number of training data, +.>Feature dimension for the acquired data; the training data comprises label data and unlabeled data;
step 1.2, pair aggregationThe normalization process is performed according to formula (1),
(1);
in the method, in the process of the invention,normalized->The individual quality variables are assayed for analysis of the value data,representing the normalized quality variable assay analysis value set as a training real label value; />Representation set->Maximum value of>Representation set->Is the minimum value of (a);
similarly, pairProcessing in the same way as in equation (1) to obtain a normalized set of process variable measurements +.>;/>Normalized->Training data;
step 1.3, normalizing the measured value data of the process variableMapping to augmented data matrix via neural network,/>Is the +.o after the mapping by the neural network>Personal tag data->For feature dimension mapped by neural network, +.>. The neural network adopted by the invention is a cyclic width learning network, and the specific process is as follows:
step 1.3.1, introducing the idea of a cyclic neural network to replace the original characteristic layer of the breadth-learning network, and further constructing the cyclic breadth-learning networkObtaining characteristic layer output by cyclic width learning network mapping>Specifically, as shown in the formula (2) -formula (4):
(2);
(3);
(4);
for the followingEach of +.>Sequentially generating state vectors according to the sampling time through a formula (2), obtaining mapping characteristics through a formula (3), and finally obtaining characteristic layer output through a formula (4);
in the method, in the process of the invention,is->State vectors of the individual training data; />Representing a nonlinear activation function in the feature layer;an internal connection matrix in the cyclic feature node; />Is->State vectors of the individual training data;is an input matrix; />The number of the characteristic nodes is the number of each group; />In the form of a high-dimensional data set, representing training data +.>Mapping features of individual feature node windows;/>Is->State vectors of the individual training data; />Is->Mapping features of the individual feature node windows; />Representing the number of characteristic node windows;
step 1.3.2, using nonlinear activation function and feature layer outputCalculating output matrix of enhanced layer of cyclic width learning network>Specifically, as shown in the formula (5) -formula (6):
(5);
(6);
in the method, in the process of the invention,indicate->Enhancement layer output of individual enhancement node window, +.>Representing the number of the enhancement nodes in each group; />Representing the%>A nonlinear activation function of the individual enhancement node windows; />Representing the weight; />Representing the bias; />Is->Enhancement layer output of each enhancement node window; />Representing the number of the enhanced node windows;
step 1.3.3 outputting the feature layerAnd enhancement layer output matrix->Combining according to the rows to obtain an augmentation data matrix +.>Specifically, the method is shown as a formula (7);
(7);
in the method, in the process of the invention,for feature dimension mapped by neural network, +.>
And 2, constructing a neural network model with three outputs of an interval upper boundary, an interval lower boundary and an upper and lower limit relative weight coefficient, which are respectively used for estimating the interval upper and lower boundaries and the upper and lower limit relative weight coefficient, and training the output upper and lower limit weights and the estimated value weights of the initial model by using a tag data set and a particle swarm optimization algorithm. The specific process is as follows:
step 2.1, setting the tag data set as,/>Is the +.o after the mapping by the neural network>Personal tag data->To train the true tag value, the normalized +.>Individual quality variable assay analysis value data, using the true tag value +.>Randomly adding and subtracting a value as a preset upper limit and a preset lower limit, as shown in a formula (8) -a formula (9):
(8);
(9);
in the method, in the process of the invention,a preset upper boundary of the training set; />、/>Are all [0,1 ]]Random numbers in between;a preset lower boundary of the training set;
step 2.2, obtaining the weight of the upper limit and the lower limit of output through ridge regression on the preset upper boundary and the preset lower boundary, wherein the weight is shown in a formula (10) -a formula (11):
(10);
(11);
in the method, in the process of the invention,is->Label data set,/->An upper limit weight is output; />Is the ridge regression coefficient;is a unit matrix; />For outputting the lower limit weight;
step 2.3, calculating an upper boundary of a model interval, a lower boundary of the interval and an estimated value, wherein the specific calculation formula is formula (12) -formula (14):
(12);
(13);
(14);
in the method, in the process of the invention,is the upper boundary of the model interval; />An upper limit weight is output; />Is the lower boundary of the model interval; />For outputting the lower limit weight; />For the estimated value +.>The%>Tag data; />Is the relative weight coefficient of the upper limit and the lower limitThe%>Personal tag data->,/>Weighting the estimated value; />For the upper boundary of the model interval->The%>Tag data; />For the lower boundary of the model interval->The%>Tag data;
step 2.4, obtaining an optimal output upper and lower limit weight and an estimated value weight through a particle swarm optimization algorithm, wherein an objective function is shown as a formula (15) -a formula (17):
(15);
(16);
(17);
in the method, in the process of the invention,a first sub-objective loss function; />For the serial number of the tag data, +.>The total number of the tag data;the value rule of the label data is as follows: if->The tag data falls within the estimated interval +.>Otherwise->;/>For regularization parameters, ++>Is a preset confidence level; />A second sub-objective loss function; />For the total target loss function, for balancing the two sub-targets, generating a tight interval and generating a more accurate estimate, i.e. this is achieved by synchronizing the first sub-target function formula loss (15) and the second sub-target loss function formula (16); />Super parameters for balancing different sub objective functions;
particle swarm optimization algorithm by minimizing the total objective loss functionTo obtain the optimal weight as shown in formula (18) -formula (19):
(18);
(19);
in the method, in the process of the invention,is particle->In->Velocity vector in the course of a second iteration, +.>Is particle->In->A velocity vector in the iterative process; />Representing inertial weights; />、/>Respectively representing individual learning factors and group learning factors;、/>representing interval [0,1 ]]Different random numbers are used to increase the randomness of the search; />Is particle->In->Optimal position in the secondary iteration; />Is particle->In->Position vector in the iterative process of times; />Indicating that the population is at->Historical optimal positions in the iteration, namely optimal solutions in the whole particle swarm; />Indicating particle->In->Position vector in the iterative process of times; />Is particle->In->Position vector in the course of the multiple iterations.
Optimal solutionCorresponding to the upper and lower limit weights of the optimal outputAnd estimating a value weight; optimal solution->In vector form, can be expressed as +.>,/>、/>、/>The output upper limit weight, the output lower limit weight and the estimated value weight are respectively.
And 3, estimating the predictive label of the unlabeled data by using the output upper and lower limit weights and the estimated value weights of the model, and selecting the unlabeled data with high confidence to form a candidate data set. The specific process is as follows:
step 3.1, calculate the firstEstimated value of individual tag-free data +.>As shown in formula (20):
(20);
in the method, in the process of the invention,is the +.>Personal no tag data->Is->The upper and lower limits of the label-free data are relative weight coefficients;
the estimated values of all unlabeled data form a predictive label set,/>The total number of the unlabeled data;
step 3.2, calculating the mean square error of each label-free data on the adjacent neighbor of the label-free data, the firstPersonal no tag data->Mean square error>The calculation formula of (2) is shown as formula (21):
(21);
in the method, in the process of the invention,the +.>Personal tag data->Is->Is->The data set formed by the adjacent label data is measured by Euclidean distance; />Is->True value of the individual tag data; />Model pair representing training of tag data>Regression estimates of (a); />Model pair representing training after adding predictive tag data>Regression estimates of (a);
step 3.3, calculating a mean square error of each piece of non-tag data through a formula (21), sorting the mean square errors of all obtained non-tag data in a descending order, and selecting the data before selectingThe label-free data with high confidence constitute a candidate data set +.>,/>Is->High confidence no tag data, +.>Is->An estimate of the unlabeled data with high confidence.
Step 4, using the interval width as the selection criterion, further selecting the data below the width threshold from the candidate data setAdding the tag data set, updating the tag data set,sequence number of data below width threshold; meanwhile, model parameters are updated by using a particle swarm optimization algorithm, and the model parameters comprise +.>、/>And->. The specific process is as follows:
step 4.1, calculating a candidate datasetAs shown in equation (22):
(22);
in the method, in the process of the invention,is->Width of label-free data with high confidence;
step 4.2, calculating a width of each high confidence label-free data through a formula (22), sequencing the widths in ascending order, and selecting the previous onePersonal data->Add tag dataset +.>Updating a tag dataset,/>Is->Data below the width threshold, +.>Is->An estimate of data below the width threshold;
step 4.3, updating model parameters by using a particle swarm optimization algorithm shown in a formula (18) -a formula (19), wherein the model parameters comprise、/>And->. Optimal solution final according to particle swarm optimization algorithm>The updated model parameters can be obtained、/>And->。/>
Step 5, repeating the step 3 and the step 4 until the maximum iteration number is reached, and outputting final model parametersAnd->
Step 6, online testing: collecting test data and carrying out normalization processing to obtain a normalized test set, obtaining a data set after dimension expansion through a cyclic width learning network, and finally utilizing training set model parameters、/>And->The test set is evaluated. The specific process is as follows:
step 6.1, collecting test data online, and obtaining a normalized quality variable test set after normalization processingWherein->For testing the number of quality values in the set, +.>For the feature dimension of the acquisition data +.>For normalized test set +.>A personal quality value;
and 6.2, mapping the obtained amplified data matrix through a neural network, thereby extracting dynamic characteristics and nonlinear characteristics in the data. The neural network adopted by the invention is a cyclic width learning network, and the dimension expansion process of the cyclic width learning network is specifically shown as a formula (23) -a formula (28):
(23);
(24);
(25);
(26);
(27);
(28);
in the method, in the process of the invention,for test set->A state vector of individual quality values; />For test set->A state vector of individual quality values; />For test set->Mapping features of the individual feature node windows; />For test set->A state vector of individual quality values; />For test set->Outputting a feature layer obtained by mapping feature augmentation of each feature node window; />For test set->Mapping features of the individual feature node windows; />For test set->Enhancement layer output of each enhancement node window; />Representing the%>A nonlinear activation function of the individual enhancement node windows;for test set->Enhancement layer output sets obtained by the window augmentation of the individual enhancement nodes; />For test set->Enhancement layer output of each enhancement node window; />An augmented data matrix for the test set;
and 6.3, estimating upper and lower limits and a true value of the test set by using parameters of the training set model, wherein the specific process is as shown in a formula (29) -a formula (31):
(29);
(30);/>
(31);
in the method, in the process of the invention,is the upper limit of the test set; />Is the lower limit of the test set; />An estimated value for the test data; />Is->Is a column vector of 1.
The method extracts dynamic and nonlinear characteristics of original data through a cyclic width learning network; meanwhile, the demand of soft measurement regression modeling on the estimated value is fully considered, a point estimation item is added into the loss function of an upper limit estimation method and a lower limit estimation method, the loss function is minimized through a particle swarm optimization algorithm by balancing the width and coverage rate of the interval and the mean square error of point estimation, and therefore a compact interval and an accurate estimated value are generated; finally, in order to improve the utilization rate of the label-free data, the predictive label data with high confidence coefficient is selected through double criteria to be added into the model for training; in order to improve accuracy and sufficiency of model estimation, iterative optimization is performed on the output upper and lower limit weights and the estimated value weights.
In order to demonstrate the feasibility and advantages of the method of the invention, the following specific examples are given. This example is described in detail with process data for a debutanizer column.
The debutanizer rectifying column is part of a sulfur removal and naphtha separator unit and its main task is to maximize the C5 (stabilized gasoline) content in the debutanizer overhead (liquid gas separator feed) and minimize the C4 (butane) content in the debutanizer bottoms (Naptha separator feed). Besides the rectifying tower, the debutanizer also comprises a heat exchanger, a tower top condenser, a bottom reboiler, a tower top reflux pump, a water feeding pump of the LPG separator and other devices. The C5 content in the top of the debutanizer column was measured indirectly by an analyzer located at the bottom of the No. 900 plant liquefied petroleum gas fractionation column. The measurement period of the device was 10 minutes. In addition, the position of the measuring device may cause a delay which is not known but is constant, possibly in the range of 20-60 minutes. Likewise, the C4 content in the bottom of the debutanizer cannot be detected directly at the bottom of the column, but is detected by installing a gas chromatograph at the top of the column. The measurement period of the device is typically 15 minutes, and again because of the installation position of the analysis instrument there is a great delay in obtaining the concentration value, which is not well known, but is constant and may be in the range of 30 minutes to 75 minutes. Therefore, in order to realize real-time measurement of butane concentration and improve the control quality of the debutanizer, it is necessary to build a soft measurement model to estimate the bottom butane concentration, which is the quality variable, in real time. In addition, considering the problems of low sampling efficiency and large time delay of quality variables in the actual production process, it is assumed that only one fifth of all the historical samples have labels (including both input data and output data), and the other historical samples are unlabeled data (including only input data). According to the knowledge of the process mechanism, 7 variables which are easy to measure are selected as the process variables of the soft measurement model, and the object meanings are respectively as follows: the column top temperature, the column top pressure, the flow of the reflux, the flow into the next process, the column plate temperature, the column bottom temperature 1, the column bottom temperature 2.
The specific steps of the invention are described below in connection with a debutanizer production process:
step 1, offline modeling stage: the acquired data is used as a training data set and preprocessed.
The method comprises the steps of designating 600 nodes of a cycle width learning network characteristic layer and 50 nodes of an enhancement layer; acquiring process variable measurement values through industrial sensor and carrying out normalization processing, and collecting normalized process variable measurement values for trainingHidden layer output augmentation data matrix obtained through cyclic width learning networkSequencing and normalizing the quality variable assay analysis values collected through off-line analysis of a laboratory to obtain a normalized quality variable assay analysis value set +.>. The data in the process variable measurement value set comprises the top temperature, the top pressure, the reflux flow, the flow flowing into the next process, the tray temperature, the bottom temperature 1 and the bottom temperature 2. The data in the mass-variable assay analysis value set is the bottom butane concentration.
Step 2, training the output upper limit weight of the initial model by using the label data and the particle swarm optimization algorithmLower limit weight->And estimate weight +.>
The particle swarm size was specified to be 200, the number of iterations was 50,set to 1 +.>Setting to 1.3, regularization parameter +.>Set to 1, super parameter->Set to 0.4 @, ->0.05.
And step 3, designating the number of neighbors as 3, sorting the mean square error of each piece of unlabeled data in a descending order, and selecting the first 5 pieces of unlabeled data with high confidence to form a candidate data set.
Step 4, 1 predictive label data is selected from the candidate data sets according to the interval width to update the label data sets, and the model parameters are updated respectively as follows、/>And->
And 5, repeating the step 3 and the step 4 until the maximum iteration number is reached.
Step 6, online testing: collecting test data, and obtaining a normalized quality variable test set after normalization processingObtaining the data +.>The test set is estimated using the training set model parameters.
The invention adopts average width (MPIW) and interval coverage (PICP) to evaluate the estimation performance of a soft measurement model, and the expression is shown in a formula (32) -a formula (33):
(32);
(33);
in the method, in the process of the invention,take the value of average width; />;/>The number of the test data; />Is the firstUpper interval limits of the test data; />Is->A lower interval limit for the individual test data; />The coverage rate of the interval is valued; />A variable indicating whether the true value falls within the estimated interval, if +.>If the true value falls within the estimated interval, +.>Otherwise->
The invention selects three methods of an upper and lower boundary estimation method, a self-adaptive optimization method based on a construction interval and a supervised neural network algorithm of integrated interval and point estimation to carry out comparison experiments with the method of the invention, and the comparison results are shown in table 1.
Table 1 three algorithms compare results with the method of the present invention;
as can be seen from table 1, the method of the present invention has a narrower interval width and a larger interval coverage than the other three methods, i.e., it is shown that the present invention has a higher quality of interval estimation.
Fig. 2-5 show the change diagrams of the predicted tag data selected under different iteration times, and as shown in fig. 2-5, each iteration selects the label-free data updating tag data set meeting the requirements to perform model training, so that compared with the traditional method, the method utilizes more information of the label-free data to expand the interval prediction to the semi-supervision field.
As shown in fig. 6, the effect diagram on the test set can be known, on the basis of guaranteeing the predicted average width, most of the true values of the samples fall in the predicted interval, only a few samples fall near the predicted boundary, and the true values are basically consistent with the estimated values, thus proving the accuracy of the method.
It should be understood that the above description is not intended to limit the invention to the particular embodiments disclosed, but to limit the invention to the particular embodiments disclosed, and that the invention is not limited to the particular embodiments disclosed, but is intended to cover modifications, adaptations, additions and alternatives falling within the spirit and scope of the invention.

Claims (7)

1. The semi-supervised neural network soft measurement modeling method is characterized by comprising the following steps of:
step 1, acquiring a quality variable assay analysis value through offline analysis of a laboratory, acquiring a process variable measurement value through an industrial sensor, and carrying out normalization processing on the quality variable assay analysis value and the process variable measurement value;
step 2, constructing a neural network model with three outputs of an interval upper boundary, an interval lower boundary and an upper and lower limit relative weight coefficient, and training the output upper and lower limit weights and the estimated value weights of an initial model by using a label data set and a particle swarm optimization algorithm;
step 3, estimating a predictive label of the unlabeled data by using the output upper and lower limit weights and the estimated value weights of the model, and selecting high-confidence unlabeled data to form a candidate data set;
step 4, taking the interval width as a selection criterion, further selecting data lower than a width threshold from the candidate data sets, adding the data into the tag data sets, and updating the tag data sets; meanwhile, updating model parameters by using a particle swarm optimization algorithm;
step 5, repeating the step 3 and the step 4 until the maximum iteration times are reached, and outputting final model parameters at the moment;
and step 6, collecting test data, carrying out normalization processing to obtain a normalized test set, obtaining a data set after dimension expansion through a neural network, and finally estimating the test set by using training set model parameters.
2. The method for modeling soft measurement of semi-supervised neural network according to claim 1, wherein the specific process of step 1 is as follows:
step 1.1, recording the collected initial quality variable assay analysis value set asSet->Each initial quality variable assay analysis value of (1) is tag data,/v>Is->Personal tag data->For the serial number of the tag data, +.>For the total number of tag data, +.>Transpose the operator for the matrix; record the collected initial set of process variable measurements asSet->Each of the initial process variable measurements is training data,/>Is->Training data->For serial number of training data->For the total number of training data,feature dimension for the acquired data; training dataComprises both tag data and unlabeled data;
step 1.2, pair aggregationThe normalization process is performed according to formula (1),
(1);
in the method, in the process of the invention,normalized->The individual quality variables are assayed for analysis of the value data,representing the normalized quality variable assay analysis value set as a training real label value; />Representation set->Maximum value of>Representation set->Is the minimum value of (a);
similarly, pairProcessing in the same way as in equation (1) to obtain a normalized set of process variable measurements +.>;/>Normalized->Training data;
step 1.3, normalizing the measured value data of the process variableMapping to augmented data matrix via neural network>,/>Is the +.o after the mapping by the neural network>Personal tag data->For feature dimension mapped by neural network, +.>
3. The method for modeling soft measurement of semi-supervised neural network according to claim 1, wherein the specific process of step 2 is as follows:
step 2.1, setting the tag data set as,/>Is the +.o after the mapping by the neural network>The number of tag data to be used in the tag,to train the true tag value, the normalized +.>Individual quality variable assay analysis value data, using true tag valuesRandomly adding and subtracting a value as a preset upper limit and a preset lower limit, as shown in a formula (8) -a formula (9):
(8);
(9);
in the method, in the process of the invention,a preset upper boundary of the training set; />、/>Are all [0,1 ]]Random numbers in between; />A preset lower boundary of the training set;
step 2.2, obtaining the weight of the upper limit and the lower limit of output through ridge regression on the preset upper boundary and the preset lower boundary, wherein the weight is shown in a formula (10) -a formula (11):
(10);
(11);
in the method, in the process of the invention,is->Label data set,/->An upper limit weight is output; />Is the ridge regression coefficient; />Is a unit matrix; />For outputting the lower limit weight;
step 2.3, calculating an upper boundary of a model interval, a lower boundary of the interval and an estimated value, wherein the specific calculation formula is formula (12) -formula (14):
(12);
(13);
(14);
in the method, in the process of the invention,is the upper boundary of the model interval; />An upper limit weight is output; />Is the lower boundary of the model interval;for outputting the lower limit weight; />For the estimated value +.>The%>Tag data; />Is the relative weight coefficient of the upper limit and the lower limitThe%>Personal tag data->,/>Weighting the estimated value; />For the upper boundary of the model interval->The%>Tag data; />For the lower boundary of the model interval->The%>Tag data;
step 2.4, obtaining an optimal output upper and lower limit weight and an estimated value weight through a particle swarm optimization algorithm, wherein an objective function is shown as a formula (15) -a formula (17):
(15);
(16);
(17);
in the method, in the process of the invention,a first sub-objective loss function; />For the serial number of the tag data, +.>The total number of the tag data; />The value rule of the label data is as follows: if->The tag data falls within the estimated intervalOtherwise->;/>For regularization parameters, ++>Is a preset confidence level; />A second sub-objective loss function; />A total target loss function; />To balance the super-parameters of the different sub-objective functions.
4. A semi-supervised neural network soft measurement modeling method as defined in claim 3, wherein in step 2.4, the particle swarm optimization algorithm is implemented by minimizing the total objective loss functionTo obtain the optimal weight as shown in formula (18) -formula (19):
(18);
(19);
in the method, in the process of the invention,is particle->In->Velocity vector in the course of a second iteration, +.>Is particle->In->A velocity vector in the iterative process; />Representing inertial weights; />、/>Respectively representing individual learning factors and group learning factors; />、/>Representing interval [0,1 ]]Random numbers different from each other; />Is particle->In->Optimal position in the secondary iteration; />Is particle->In->Position vector in the iterative process of times; />Indicating that the population is at->Historical optimal positions in the iteration, namely optimal solutions in the whole particle swarm; />Indicating particle->In->Position vector in the iterative process of times; />Is particle->In->Position vector in the iterative process of times;
optimal solutionThe upper and lower limit weights and the estimated value weights are output correspondingly to the optimal outputs; optimal solution->In the form of vectors, expressed as,/>、/>、/>The output upper limit weight, the output lower limit weight and the estimated value weight are respectively.
5. The method for modeling soft measurement of semi-supervised neural network according to claim 1, wherein the specific process of step 3 is as follows:
step 3.1, calculate the firstEstimated value of individual tag-free data +.>As shown in formula (20):
(20);
in the method, in the process of the invention,is the +.>Personal no tag data->Is->The upper and lower limits of the label-free data are relative weight coefficients;
the estimated values of all unlabeled data form a predictive label set,/>The total number of the unlabeled data;
step 3.2, calculating the mean square error of each label-free data on the adjacent neighbor of the label-free data, the firstPersonal no tag data->Mean square error>The calculation formula of (2) is shown as formula (21):
(21);
in the method, in the process of the invention,the +.>Personal tag data->Is->Is->The data set formed by the adjacent label data is measured by Euclidean distance; />Is->True value of the individual tag data; />Model pair representing training of tag data>Regression estimates of (a); />Model pair representing training after adding predictive tag data>Regression estimates of (a);
step 3.3, calculating a mean square error of each piece of non-tag data through a formula (21), sorting the mean square errors of all obtained non-tag data in a descending order, and selecting the data before selectingHigh confidence unlabeled data comprising candidate data sets,/>Is->High confidence no tag data, +.>Is->An estimate of the unlabeled data with high confidence.
6. The method for modeling soft measurement of semi-supervised neural network according to claim 1, wherein the specific process of step 4 is as follows:
step 4.1, calculating a candidate datasetAs shown in equation (22):
(22);
in the method, in the process of the invention,is->Width of label-free data with high confidence;
step 4.2, calculating a width of each high confidence label-free data through a formula (22), sequencing the widths in ascending order, and selecting the previous onePersonal data->Add tag dataset +.>Updating a tag dataset,/>Is->Data below the width threshold, +.>Is->An estimate of data below the width threshold;
step 4.3, updating model parameters by using a particle swarm optimization algorithm shown in a formula (18) -a formula (19), wherein the model parameters comprise、/>And->The method comprises the steps of carrying out a first treatment on the surface of the Optimal solution final according to particle swarm optimization algorithm>The updated model parameters can be obtained>And->
7. The method for modeling soft measurement of semi-supervised neural network according to claim 1, wherein the specific process of step 6 is as follows:
step 6.1, collecting test data online, and obtaining a normalized quality variable test set after normalization processingWherein->For testing the number of quality values in the set, +.>For normalized test set +.>A personal quality value;
step 6.2, mapping by a neural network to obtain an amplified data matrix after dimension expansion
And 6.3, estimating upper and lower limits and a true value of the test set by using parameters of the training set model, wherein the specific process is as shown in a formula (29) -a formula (31):
(29);
(30);
(31);
in the method, in the process of the invention,is the upper limit of the test set; />Is the lower limit of the test set; />An estimated value for the test data; />Is->Is a column vector of 1.
CN202311099248.3A 2023-08-30 2023-08-30 Semi-supervised neural network soft measurement modeling method Active CN116821695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311099248.3A CN116821695B (en) 2023-08-30 2023-08-30 Semi-supervised neural network soft measurement modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311099248.3A CN116821695B (en) 2023-08-30 2023-08-30 Semi-supervised neural network soft measurement modeling method

Publications (2)

Publication Number Publication Date
CN116821695A CN116821695A (en) 2023-09-29
CN116821695B true CN116821695B (en) 2023-11-03

Family

ID=88122451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311099248.3A Active CN116821695B (en) 2023-08-30 2023-08-30 Semi-supervised neural network soft measurement modeling method

Country Status (1)

Country Link
CN (1) CN116821695B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117272244B (en) * 2023-11-21 2024-03-15 中国石油大学(华东) Soft measurement modeling method integrating feature extraction and self-adaptive composition

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291975A (en) * 2017-05-03 2017-10-24 中国石油大学(北京) A kind of method and system of catalytic cracking reaction product hard measurement
CN107451102A (en) * 2017-07-28 2017-12-08 江南大学 A kind of semi-supervised Gaussian process for improving self-training algorithm returns soft-measuring modeling method
CN107505837A (en) * 2017-07-07 2017-12-22 浙江大学 A kind of semi-supervised neural network model and the soft-measuring modeling method based on the model
CN109840362A (en) * 2019-01-16 2019-06-04 昆明理工大学 A kind of integrated instant learning industrial process soft-measuring modeling method based on multiple-objection optimization
US10678196B1 (en) * 2020-01-27 2020-06-09 King Abdulaziz University Soft sensing of a nonlinear and multimode processes based on semi-supervised weighted Gaussian regression
CN113158473A (en) * 2021-04-27 2021-07-23 昆明理工大学 Semi-supervised integrated instant learning industrial rubber compound Mooney viscosity soft measurement method
CN114117919A (en) * 2021-11-29 2022-03-01 中国石油大学(华东) Instant learning soft measurement modeling method based on sample collaborative representation
CN114841073A (en) * 2022-05-17 2022-08-02 中国石油大学(华东) Instant learning semi-supervised soft measurement modeling method based on local label propagation
CN116386756A (en) * 2022-12-16 2023-07-04 浙江大学 Soft measurement modeling method based on integrated neural network reliability estimation and weighted learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110162857A (en) * 2019-05-14 2019-08-23 北京工业大学 A kind of flexible measurement method for surveying parameter towards complex industrial process difficulty

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291975A (en) * 2017-05-03 2017-10-24 中国石油大学(北京) A kind of method and system of catalytic cracking reaction product hard measurement
CN107505837A (en) * 2017-07-07 2017-12-22 浙江大学 A kind of semi-supervised neural network model and the soft-measuring modeling method based on the model
CN107451102A (en) * 2017-07-28 2017-12-08 江南大学 A kind of semi-supervised Gaussian process for improving self-training algorithm returns soft-measuring modeling method
CN109840362A (en) * 2019-01-16 2019-06-04 昆明理工大学 A kind of integrated instant learning industrial process soft-measuring modeling method based on multiple-objection optimization
US10678196B1 (en) * 2020-01-27 2020-06-09 King Abdulaziz University Soft sensing of a nonlinear and multimode processes based on semi-supervised weighted Gaussian regression
CN113158473A (en) * 2021-04-27 2021-07-23 昆明理工大学 Semi-supervised integrated instant learning industrial rubber compound Mooney viscosity soft measurement method
CN114117919A (en) * 2021-11-29 2022-03-01 中国石油大学(华东) Instant learning soft measurement modeling method based on sample collaborative representation
CN114841073A (en) * 2022-05-17 2022-08-02 中国石油大学(华东) Instant learning semi-supervised soft measurement modeling method based on local label propagation
CN116386756A (en) * 2022-12-16 2023-07-04 浙江大学 Soft measurement modeling method based on integrated neural network reliability estimation and weighted learning

Also Published As

Publication number Publication date
CN116821695A (en) 2023-09-29

Similar Documents

Publication Publication Date Title
US11164095B2 (en) Fuzzy curve analysis based soft sensor modeling method using time difference Gaussian process regression
CN108897286B (en) Fault detection method based on distributed nonlinear dynamic relation model
CN112101480B (en) Multivariate clustering and fused time sequence combined prediction method
US20110010318A1 (en) System and method for empirical ensemble- based virtual sensing
CN109992921B (en) On-line soft measurement method and system for thermal efficiency of boiler of coal-fired power plant
CN116821695B (en) Semi-supervised neural network soft measurement modeling method
CN110400231B (en) Failure rate estimation method for electric energy metering equipment based on weighted nonlinear Bayes
CN109635245A (en) A kind of robust width learning system
CN104899425A (en) Variable selection and forecast method of silicon content in molten iron of blast furnace
CN109389314A (en) A kind of quality hard measurement and monitoring method based on optimal neighbour&#39;s constituent analysis
Yuan et al. Virtual sensor modeling for nonlinear dynamic processes based on local weighted PSFA
Ehsan et al. Wind speed prediction and visualization using long short-term memory networks (LSTM)
CN114117919B (en) Instant learning soft measurement modeling method based on sample collaborative representation
CN109886314B (en) Kitchen waste oil detection method and device based on PNN neural network
CN114936528A (en) Extreme learning machine semi-supervised soft measurement modeling method based on variable weighting self-adaptive local composition
Yang et al. Domain adaptation network with uncertainty modeling and its application to the online energy consumption prediction of ethylene distillation processes
Li et al. Data cleaning method for the process of acid production with flue gas based on improved random forest
CN114117852A (en) Regional heat load rolling prediction method based on finite difference working domain division
CN116738866B (en) Instant learning soft measurement modeling method based on time sequence feature extraction
Li et al. Adaptive soft sensor based on a moving window just-in-time learning LS-SVM for distillation processes
CN116432856A (en) Pipeline dynamic early warning method and device based on CNN-GLSTM model
CN115271186B (en) Reservoir water level prediction and early warning method based on delay factor and PSO RNN Attention model
CN115631804A (en) Method for predicting outlet concentration of sodium aluminate solution in evaporation process based on data coordination
CN114861759A (en) Distributed training method of linear dynamic system model
CN113849479A (en) Comprehensive energy supply station oil tank leakage detection method based on instant learning and self-adaptive threshold

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant