CN114692507A - Counting data soft measurement modeling method based on stacking Poisson self-encoder network - Google Patents

Counting data soft measurement modeling method based on stacking Poisson self-encoder network

Info

Publication number
CN114692507A
Authority
CN
China
Prior art keywords
poisson, self-encoder, network, layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210403851.5A
Other languages
Chinese (zh)
Inventor
张新民
刘颖
宋执环
朱哲人
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210403851.5A priority Critical patent/CN114692507A/en
Publication of CN114692507A publication Critical patent/CN114692507A/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 - Computer-aided design [CAD]
    • G06F 30/20 - Design optimisation, verification or simulation
    • G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/18 - Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Optimization (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Geometry (AREA)
  • Evolutionary Biology (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a counting data soft measurement modeling method based on a stacked Poisson self-encoder network and provides the corresponding network structure. In the pre-training stage, counting-type quality variables are introduced to guide feature extraction; to account for the discreteness of counting data, the quality variables are integrated into the deep stacked self-encoder framework in the form of a Poisson regression network layer, so that the feature representation learned by the model is highly correlated with the counting-type quality variables. The method improves both the feature extraction capability of the counting data soft measurement model and the prediction of counting-type quality variables.

Description

Counting data soft measurement modeling method based on stacking Poisson self-encoder network
Technical Field
The invention belongs to the field of industrial process prediction and soft measurement, and relates to a counting data soft measurement modeling method based on a stacked Poisson self-encoder network.
Background
Counting data is an important data type characterized by discreteness, non-negative integer values and highly skewed distributions. Modeling it requires establishing a discrete count-data model, i.e. a relation between the number of occurrences of a certain event (called the dependent variable, output variable or response variable) and the factors causing the event (called the independent variables, input variables or process variables), so that the number of occurrences of the event can be forecast.
In the process industry, soft sensors are used as a tool to predict product quality or other important variables, which can be treated as a count-data modeling problem. Common data-driven soft measurement modeling methods are Multiple Linear Regression (MLR) and Partial Least Squares (PLS) regression. They assume that the response variables follow a normal, homoscedastic distribution, which contradicts the highly over-dispersed distribution of observed count data. Furthermore, count data are non-negative integers, but MLR and PLS may produce negative predictions for the dependent variable. Nonlinear modeling methods such as Support Vector Regression (SVR) and Artificial Neural Networks (ANN) suffer from poor interpretability, and the non-negativity of their predictions likewise cannot be guaranteed.
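The negative-prediction issue with least-squares methods on count data can be seen in a few lines. The sketch below (Python with NumPy; the data are made up for illustration and are not from the patent) fits ordinary least squares to a small, skewed set of counts and obtains a negative prediction:

```python
import numpy as np

# Made-up, deliberately skewed count data (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 5.0])  # non-negative integer counts

# Ordinary least squares fit y ~ a*x + b.
A = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

pred_at_1 = a * 1.0 + b  # linear model prediction at x = 1
print(a, b, pred_at_1)   # slope 1.1, intercept -2.1, prediction -1.0 (negative)
```

A Poisson model with an exponential link, as used later in this document, cannot produce such negative values.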
For count data, the Poisson regression model is the typical modeling choice. However, industrial process data are high-dimensional and nonlinear, and Poisson regression alone mines the data features insufficiently when applied to industrial processes. Therefore, extracting deep features of the process data is a crucial step in count-data soft measurement modeling.
As a representative deep feature extraction structure, the self-encoder has been widely applied in complex industrial processes. However, the pre-training of the conventional self-encoder is unsupervised: it learns feature representations by reconstructing the input and minimizing the reconstruction error, so the features extracted by the deep network may be unrelated to the output that the counting data soft measurement is meant to predict, which makes the process inefficient.
When predicting counting data in an industrial process, the process variables may be numerous and the data nonlinear and high-dimensional, so when a counting data soft measurement model is established it is necessary to extract features highly correlated with the counting-type quality variable. If a reasonable way can be designed to introduce the quality variable to guide feature extraction from the input data, while taking the characteristics of counting data into account, the problems of the feature extraction stage can be solved.
Disclosure of Invention
Aiming at the problem that the conventional self-encoder cannot extract features relevant to the quality variable, and considering the discreteness, non-negativity and high skewness of counting data, the invention provides a counting data soft measurement modeling method based on a stacked Poisson self-encoder network. The method introduces the counting-type quality variable to guide feature extraction in the pre-training decoding stage and integrates it into the deep stacked encoder structure through a Poisson network layer, so that the feature representation learned by the model is highly correlated with the counting-type quality variable, which improves the feature extraction efficiency and the prediction of the counting-type quality variable.
The specific technical scheme of the invention is as follows:
a counting data soft measurement modeling method based on a stacked Poisson self-encoder network comprises the following steps:
S1: collecting input and output training data sets for modeling:
{(xi, yi), i = 1, 2, …, N}
wherein x represents an input variable, y represents an output variable of the discrete counting data type, and N represents the number of data samples;
s2: constructing a stacked Poisson self-encoder network, wherein the stacked Poisson self-encoder network is formed by stacking a plurality of supervision Poisson self-encoders in a layered mode, and the output of a hidden layer of a previous supervision Poisson self-encoder is used as the input of an input layer of a next supervision Poisson self-encoder; the supervising Poisson self-encoder comprises an input layer, a hidden layer and an output layer, wherein the hidden layer and the output layer comprise an input reconstruction network layer and a Poisson network layer, the input reconstruction network layer is used for reconstructing an input vector, and the Poisson network layer is used for predicting counting type quality data;
randomly initializing the Poisson network weight, the neural network connection weight and the bias parameter of the stacked Poisson self-encoder network.
S3: inputting the training data into the stacked Poisson self-encoder network, training the first supervised Poisson self-encoder according to the minimum loss function, and obtaining its weight and bias parameters {We^1, be^1, Wr^1, br^1, Wp^1, bp^1} and the hidden-layer output h1; taking h1 as the input of the input layer of the second supervised Poisson self-encoder and training it according to the minimum loss function to obtain the corresponding weight and bias parameters; proceeding layer by layer in this way, using hk-1 to train the k-th supervised Poisson self-encoder SPAEk and obtain the parameters {We^k, be^k, Wr^k, br^k, Wp^k, bp^k} and hk, until the last supervised Poisson self-encoder is trained; k ≤ L, where L is the number of supervised Poisson self-encoders;
S4: after the layer-by-layer training of S3 is finished, establishing a Poisson network between the hidden-layer output hL of the L-th supervised Poisson self-encoder and the output variable y for regression, and adjusting and updating the parameters of the regression network according to the prediction error; after the regression network training is finished, saving the stacked Poisson self-encoder network;
S5: inputting the input data to be predicted into the saved stacked Poisson self-encoder network and obtaining the predicted value of the counting-type quality variable through forward propagation of the network.
Further, in S3, the encoder in the supervised Poisson self-encoder is represented as:
h = σ(We·x + be)
where σ represents the sigmoid activation function, x is the input vector of the input layer, h is the output vector of the hidden layer, and We and be respectively represent the weight and bias of the encoder;
the decoder in the supervised Poisson self-encoder is represented as:
x̂ = σ(Wr·h + br)
ŷ = exp(Wp·h + bp)
where exp represents the exponential function, Wr and br respectively represent the weight and bias for reconstructing the input vector in the decoder, and Wp and bp respectively represent the weight and bias parameters of the Poisson network layer; x̂ represents the reconstructed input vector and ŷ the predicted output vector;
the loss function Lrec is expressed as:
Lrec = Σi ( ‖xi - x̂i‖² + λ·1ᵀ(ŷi - yi ⊙ ln ŷi) ),  i = 1, …, N
where λ represents the weight ratio between the reconstruction error of the input vector and the prediction error of the output vector, ‖·‖ denotes the two-norm, ⊙ denotes the Hadamard product, and 1 denotes an all-ones vector.
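As a rough numerical sketch of the supervised Poisson self-encoder defined above, the following Python/NumPy code runs one forward pass and evaluates the loss on random toy data; the dimensions, the random initialisation and the summation over output components are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N samples, input dim, hidden dim, one count-type quality variable.
N, d_in, d_hid, d_out = 8, 5, 3, 1
X = rng.random((N, d_in))
Y = rng.poisson(3.0, (N, d_out)).astype(float)  # count-type quality data

# Randomly initialised SPAE parameters: encoder {We, be},
# input reconstruction layer {Wr, br}, Poisson network layer {Wp, bp}.
We, be = rng.normal(0.0, 0.1, (d_hid, d_in)), np.zeros(d_hid)
Wr, br = rng.normal(0.0, 0.1, (d_in, d_hid)), np.zeros(d_in)
Wp, bp = rng.normal(0.0, 0.1, (d_out, d_hid)), np.zeros(d_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = sigmoid(X @ We.T + be)        # h = sigma(We*x + be)
X_hat = sigmoid(H @ Wr.T + br)    # reconstructed input
Y_hat = np.exp(H @ Wp.T + bp)     # Poisson layer output: strictly positive

lam = 1.5                          # reconstruction/prediction trade-off
rec_err = np.sum((X - X_hat) ** 2, axis=1)
# Poisson negative log-likelihood term: y_hat - y (Hadamard) ln(y_hat), summed over outputs.
poi_err = np.sum(Y_hat - Y * np.log(Y_hat), axis=1)
L_rec = np.sum(rec_err + lam * poi_err)
print(L_rec)
```

Because the Poisson layer uses the exponential link, `np.log(Y_hat)` is always well defined here.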
Further, in S3, the training process of the k-th supervised Poisson self-encoder is represented as follows:
hi^k = σ(We^k·hi^(k-1) + be^k)
ĥi^(k-1) = σ(Wr^k·hi^k + br^k)
ŷi^k = exp(Wp^k·hi^k + bp^k)
hi^0 = xi
where k = 1, 2, …, L; hi^(k-1) and ĥi^(k-1) are respectively the input data and the reconstructed data of the i-th sample at the k-th supervised Poisson self-encoder; {We^k, be^k} and {Wr^k, br^k, Wp^k, bp^k} are the weight matrices and bias terms of the k-th layer encoder and decoder, respectively;
the loss function for the k-th supervised Poisson self-encoder training is as follows:
Lk = Σi ( ‖hi^(k-1) - ĥi^(k-1)‖² + λ·1ᵀ(ŷi^k - yi ⊙ ln ŷi^k) ),  i = 1, …, N
where yi and ŷi^k respectively represent the actual observed value of the counting-type quality variable for the i-th sample and its predicted value at the k-th supervised Poisson self-encoder.
Further, in step S4, the predicted output variable ŷ is calculated as follows:
ŷ = exp(Wy·hL + by)
where Wy and by respectively represent the weight and bias of the Poisson network;
the loss function is as follows:
Ly = Σi 1ᵀ(ŷi - yi ⊙ ln ŷi),  i = 1, …, N
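The top-layer Poisson regression of S4 can be fitted by gradient descent on a loss of this form. The sketch below uses a random stand-in for the hidden features hL, and the learning rate, iteration count and data sizes are illustrative assumptions rather than values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the top hidden features h_L (in the method these would come from
# the stacked encoders; here they are random, for illustration only).
N, d = 200, 4
H = rng.random((N, d))
w_true = np.array([0.5, -0.3, 0.8, 0.1])
Y = rng.poisson(np.exp(H @ w_true)).astype(float)

# Poisson regression y_hat = exp(Wy.h + by), trained by gradient descent on the
# mean Poisson negative log-likelihood mean(y_hat - y * ln(y_hat)).
Wy, by = np.zeros(d), 0.0
lr = 0.05
for _ in range(2000):
    Y_hat = np.exp(H @ Wy + by)
    Wy -= lr * (H.T @ (Y_hat - Y)) / N   # gradient of the NLL w.r.t. Wy
    by -= lr * np.mean(Y_hat - Y)        # gradient of the NLL w.r.t. by

Y_hat = np.exp(H @ Wy + by)
nll = np.mean(Y_hat - Y * np.log(np.maximum(Y_hat, 1e-12)))
print(nll)  # below 1.0, the loss value at the zero initialisation
```

The gradient `H.T @ (Y_hat - Y) / N` follows from differentiating the Poisson negative log-likelihood under the exponential link.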
the invention has the following beneficial effects:
the counting data soft measurement modeling method based on the stacking Poisson self-encoder network is used for counting data quality prediction, and solves the problems that the conventional self-encoder has low feature extraction efficiency and is not suitable for counting data modeling. By adding the counting type quality variable to an output layer of a decoding stage and considering the discreteness and nonnegativity of counting data, the counting data is integrated into a deep self-encoder frame in a Poisson regression network layer mode, a loss function is improved, so that a model can learn characteristics highly related to the counting data quality variable, and the prediction effect of the model on the counting data is improved.
Drawings
FIG. 1 is a diagram of the deep stacked Poisson self-encoder (SSPAE) architecture;
FIG. 2 is a flow chart of SSPAE-based counting data soft measurement modeling;
FIG. 3 is a flow chart of a defect system of the steel casting and rolling process;
FIG. 4 is a graph of the prediction results of the SSPAE, STAE and SAE methods, corresponding to sub-graphs (c), (b) and (a), respectively, where the abscissa represents the measurement sample and the ordinate represents the value of the quality data; model predictions and true values are plotted with different markers.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and preferred embodiments, so that its objects and effects become more apparent. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The method is based on a stacked Poisson self-encoder network (SSPAE) structure. The original self-encoder is improved: a quality variable is introduced in the pre-training decoding stage to guide feature extraction and, in view of the discreteness of counting data, the quality variable is integrated into the deep stacked self-encoder framework in the form of a Poisson regression network layer, so that the feature representation learned by the model is highly correlated with the counting-type quality variable, which improves the feature extraction efficiency and the soft measurement accuracy for counting data.
As shown in fig. 1, the method of the present invention comprises the following steps:
S1: collecting device data to form an input and output training data set for modeling:
{(xi, yi), i = 1, 2, …, N}
where x represents an input variable, y represents an output variable of the discrete counting data type, and N represents the number of data samples; the data set is divided into a training set, a verification set and a test set, and the data are preprocessed according to the different working conditions;
s2: constructing a stacked poisson self-encoder network SSPAE, as shown in fig. 2, the SSPAE is formed by stacking a plurality of supervised poisson self-encoders SPAE in layers, and the output of the hidden layer of the previous supervised poisson self-encoder is used as the input of the input layer of the next supervised poisson self-encoder; the supervising Poisson self-encoder comprises an input layer, a hidden layer and an output layer, wherein the hidden layer and the output layer comprise an input reconstruction network layer and a Poisson network layer, the input reconstruction network layer is used for reconstructing an input vector, and the Poisson network layer is used for predicting counting type quality data;
let x be the input vector, h the hidden vector, and y the quality variable. SPAE simultaneously reconstructs its input data and pre-encodes in the decoderMeasuring count data, the two data being respectively expressed as
Figure BDA0003601028050000052
And
Figure BDA0003601028050000053
the SPAE prediction quality data employs a Poisson network for the count data.
{We, be} and {Wd, bd} are used to represent the parameter sets of the encoder and decoder, respectively. The encoder is represented as:
h=σ(We·x+be)
wherein σ represents a sigmoid activation function.
Since the decoder consists of two parts, the decoding weight matrix and the decoding bias vector can be decomposed into an input-data part and a counting-data quality-variable part. That is, its parameters can be decomposed into:
Wd = [Wr, Wp],  bd = [br, bp]
At the output layer of the SPAE, the reconstructed input data are obtained by mapping the hidden data through the input reconstruction network layer:
x̂ = σ(Wr·h + br)
In particular, for discrete counting data modeling, the mapping from the hidden features to the output data is a Poisson network layer, as follows:
ŷ = exp(Wp·h + bp)
where exp represents the exponential function and {Wp, bp} denote the weight and bias parameters of the Poisson network layer. On the one hand, the Poisson error structure of the Poisson network layer allows the data to have nonlinear characteristics and a non-constant variance structure; on the other hand, the Poisson network layer guarantees the non-negativity of the prediction.
Therefore, the decoder output of SPAE can be expressed as:
x̂ = σ(Wr·h + br)
ŷ = exp(Wp·h + bp)
(2) Given input training data X = {x1, x2, …, xN} and corresponding counting quality data Y = {y1, y2, …, yN}, the SPAE network learns its parameters by minimizing the loss function of the output layer, as follows:
Lrec = Σi ( ‖xi - x̂i‖² + λ·1ᵀ(ŷi - yi ⊙ ln ŷi) ),  i = 1, …, N
where λ represents the weight ratio between the reconstruction error of the input vector and the prediction error of the output vector, ‖·‖ denotes the two-norm, ⊙ denotes the Hadamard product, and 1 denotes an all-ones vector.
(3) The first supervised Poisson self-encoder is trained according to the above minimum loss function to obtain the parameters {We^1, be^1, Wr^1, br^1, Wp^1, bp^1} and the first latent feature representation h1. The Poisson network weights, neural-network connection weights and bias parameters of the SSPAE model are randomly initialized.
S3: inputting the training data into the stacked Poisson self-encoder network, training the first supervised Poisson self-encoder according to the minimum loss function, and obtaining its weight and bias parameters {We^1, be^1, Wr^1, br^1, Wp^1, bp^1} and the hidden-layer output h1; taking h1 as the input of the input layer of the second supervised Poisson self-encoder and training it according to the minimum loss function to obtain the corresponding weight and bias parameters; proceeding layer by layer in this way, using hk-1 to train the k-th supervised Poisson self-encoder SPAEk and obtain the parameters {We^k, be^k, Wr^k, br^k, Wp^k, bp^k} and hk, until the last supervised Poisson self-encoder is trained; k ≤ L, where L is the number of supervised Poisson self-encoders;
S3 specifically includes the following substeps:
(1) A deep SPAE network is constructed by stacking a plurality of SPAEs. The input data are passed to the input layer of the SSPAE and, through the network with parameter set {We^1, be^1, Wr^1, br^1, Wp^1, bp^1}, the corresponding first-level feature data h1 are generated. At its output layer, SPAE1 reconstructs the original input data with the input reconstruction network layer and predicts the quality data with the Poisson network layer. Its pre-training is performed by minimizing the reconstruction and prediction errors on the training data, as follows:
L1 = Σi ( ‖xi - x̂i‖² + λ·1ᵀ(ŷi^1 - yi ⊙ ln ŷi^1) ),  i = 1, …, N
(2) After SPAE1 is pre-trained, its encoder part is kept in the whole SPAE network and computes hi^1 = σ(We^1·xi + be^1). The first-layer hidden feature vector is then used as the input of the second SPAE, and the second-layer feature vector is obtained by mapping. In SPAE2, the first-layer feature vector and the count-data quality vector are likewise reconstructed and predicted from the second-layer feature vector, yielding the second-level feature data h2. The features of the remaining levels are obtained step by step in a similar manner.
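The greedy layer-by-layer feature propagation described in the substeps above can be sketched as follows; the layer sizes are illustrative, and the randomly initialised parameters stand in for encoders that would in practice come from pre-training each SPAE:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative layer sizes: input dim followed by three hidden-layer widths.
layer_dims = [6, 4, 3, 2]
encoders = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(m))
            for n, m in zip(layer_dims[:-1], layer_dims[1:])]

def propagate(X, encoders):
    """h_k = sigmoid(We_k . h_{k-1} + be_k), with h_0 = x; returns every level."""
    feats, h = [], X
    for We, be in encoders:
        h = sigmoid(h @ We.T + be)
        feats.append(h)
    return feats

X = rng.random((5, layer_dims[0]))
features = propagate(X, encoders)
print([f.shape for f in features])  # [(5, 4), (5, 3), (5, 2)]
```

Only the kept encoder parts participate here; the reconstruction and Poisson layers are used during pre-training of each SPAE and then discarded for feature propagation.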
Suppose the hidden features hk-1 of the (k-1)-th layer have been learned. The k-th layer hidden features hk are obtained through the nonlinear function with parameter set {We^k, be^k}; then hk-1 is reconstructed at the output layer through the input reconstruction network layer mapping and, in particular, the output count data are predicted by the Poisson network layer.
The process is as follows:
hi^k = σ(We^k·hi^(k-1) + be^k)
ĥi^(k-1) = σ(Wr^k·hi^k + br^k)
ŷi^k = exp(Wp^k·hi^k + bp^k)
hi^0 = xi
where k = 1, 2, …, L; hi^(k-1) and ĥi^(k-1) are respectively the input data and the reconstructed data of the i-th sample at the k-th self-encoder; {We^k, be^k} and {Wr^k, br^k, Wp^k, bp^k} are the weight matrices and bias terms of the k-th layer encoder and decoder, respectively.
S4: after the layer-by-layer training of S3 is finished, a Poisson network is established between the hidden-layer output hL of the L-th supervised Poisson self-encoder and the output variable y for regression, and the parameters of the regression network are adjusted and updated according to the prediction error; after the regression network training is finished, the stacked Poisson self-encoder network is saved.
After the forward layer-by-layer training is finished, the output mapping network is added at the top layer, and the last hidden-layer feature vector hL is used to predict the output data:
ŷ = exp(Wy·hL + by)
The loss function of the prediction error is minimized to update the relevant parameters of the SSPAE network:
Ly = Σi 1ᵀ(ŷi - yi ⊙ ln ŷi),  i = 1, …, N
where Wy and by respectively represent the weight and bias of the Poisson network.
S5: inputting the input data to be predicted into the saved SSPAE model and obtaining the predicted value of the counting-type quality variable through forward propagation of the SSPAE network.
The usefulness of the present invention is illustrated below with reference to a specific industrial example. In order to improve the product quality and save the production cost, it is important to predict the number of defects of the steel plate in real time. For example, based on online predictions of the number of defects, an operator may change operating conditions to control the occurrence of defects; in addition, the defect prediction model provides an early measure of the number of defects, which helps the operator to prevent further degradation of operating conditions; in addition, based on the defect prediction model, key factors influencing the defect occurrence rate can be further explored.
The data used are steel defect data of a certain type collected from a steel plant during secondary refining, continuous casting, rolling, cooling and related processes, as shown in fig. 3. The data include process variable data and quality variable data. The process variable data comprise 146 continuous process operation variables, such as heating temperature, with strong nonlinearity in the data. The quality variable represents the number of defects and is of the discrete counting data type. In this experiment, the 2500 collected samples are randomly divided into three data sets: 1500 samples form the training set for model training, 500 samples form the verification set for model parameter selection, and 500 samples form the test set for model testing.
table 1: network structure parameter
Network layer Input layer Hidden layer 1 Hidden layer 2 Hidden layer 3 Hidden layer 4
Number of nodes 146 83 43 10 5
For comparative analysis, several methods, including the stacked supervised Poisson self-encoder (SSPAE), the stacked target-dependent autoencoder (STAE), the stacked autoencoder (SAE) and Poisson regression (PS), are used for soft measurement modeling of the count data of this industrial process. The network structure parameters of SSPAE, STAE and SAE are shown in Table 1. The key hyperparameter λ in the SSPAE model is set to 1.5.
Table 2: comparison of predicted performance of each comparison method on test set
Table 2 compares the prediction accuracy of the methods on the test set under two evaluation indexes: the root mean square error (RMSE) and the correlation coefficient R². A smaller RMSE indicates a smaller prediction error, and a larger R² indicates higher prediction accuracy. On both indexes, the proposed SSPAE performs best, with the smallest prediction error and the highest prediction accuracy.
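The two evaluation indexes can be computed as follows (here R² is computed as the coefficient of determination; the sample counts and predictions below are made up for illustration and are not the results reported in Table 2):

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Made-up counts and predictions, illustrative only.
y_true = [3, 0, 2, 5, 1, 4]
y_pred = [2.8, 0.4, 2.1, 4.6, 1.2, 3.9]
print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
```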
Fig. 4 shows partial prediction results of the SSPAE, STAE and SAE models, where the abscissa represents the measurement sample and the ordinate represents the value of the quality data. Fig. 4(c) shows that the predicted values of the SSPAE method fit the true numbers of steel defects most closely, so its prediction results are the most accurate.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and although the invention has been described in detail with reference to the foregoing examples, it will be apparent to those skilled in the art that various changes in the form and details of the embodiments may be made and equivalents may be substituted for elements thereof. All modifications, equivalents and the like which come within the spirit and principle of the invention are intended to be included within the scope of the invention.

Claims (4)

1. A counting data soft measurement modeling method based on a stacked Poisson self-encoder network is characterized by comprising the following steps:
S1: collecting an input and output training data set for modeling:
{(xi, yi), i = 1, 2, …, N}
wherein x represents an input variable, y represents an output variable of the discrete counting data type, and N represents the number of data samples;
s2: constructing a stacked Poisson self-encoder network, wherein the stacked Poisson self-encoder network is formed by stacking a plurality of supervision Poisson self-encoders in a layered mode, and the output of a hidden layer of a previous supervision Poisson self-encoder is used as the input of an input layer of a next supervision Poisson self-encoder; the supervising Poisson self-encoder comprises an input layer, a hidden layer and an output layer, wherein the hidden layer and the output layer comprise an input reconstruction network layer and a Poisson network layer, the input reconstruction network layer is used for reconstructing an input vector, and the Poisson network layer is used for predicting counting type quality data;
randomly initializing the Poisson network weight, the neural network connection weight and the bias parameter of the stacked Poisson self-encoder network.
S3: inputting the training data into the stacked Poisson self-encoder network, training the first supervised Poisson self-encoder according to the minimum loss function, and obtaining its weight and bias parameters {We^1, be^1, Wr^1, br^1, Wp^1, bp^1} and the hidden-layer output h1; taking h1 as the input of the input layer of the second supervised Poisson self-encoder and training it according to the minimum loss function to obtain the corresponding weight and bias parameters; proceeding layer by layer in this way, using hk-1 to train the k-th supervised Poisson self-encoder SPAEk and obtain the parameters {We^k, be^k, Wr^k, br^k, Wp^k, bp^k} and hk, until the last supervised Poisson self-encoder is trained; k ≤ L, where L is the number of supervised Poisson self-encoders;
S4: after the layer-by-layer training of S3 is finished, establishing a Poisson network between the hidden-layer output hL of the L-th supervised Poisson self-encoder and the output variable y for regression, and adjusting and updating the parameters of the regression network according to the prediction error; after the regression network training is finished, saving the stacked Poisson self-encoder network;
S5: inputting the input data to be predicted into the saved stacked Poisson self-encoder network and obtaining the predicted value of the counting-type quality variable through forward propagation of the network.
2. The method for modeling counting data soft measurement based on the stacked Poisson self-encoder network as claimed in claim 1, wherein in S3, the encoder in the supervised Poisson self-encoder is expressed as:
h=σ(We·x+be)
where σ represents the sigmoid activation function, x is the input vector of the input layer, h is the output vector of the hidden layer, and We and be respectively represent the weight and bias of the encoder;
the decoder in the supervised Poisson self-encoder is expressed as:

$\hat{x} = \sigma(W_r h + b_r)$

$\hat{y} = \exp(W_p h + b_p)$

where $\exp$ denotes the exponential function, $W_r$ and $b_r$ respectively denote the weight and bias for reconstructing the input vector in the decoder, $W_p$ and $b_p$ respectively denote the weight and bias parameters of the Poisson network layer, $\hat{x}$ denotes the reconstructed input vector, and $\hat{y}$ denotes the predicted output vector;
the loss function $L_{rec}$ is expressed as:

$L_{rec} = \left\| x - \hat{x} \right\|_2^2 + \lambda\, \mathbf{1}^{\top}\!\left( \hat{y} - y \odot \ln \hat{y} \right)$

where $\lambda$ is the weight balancing the reconstruction error of the input vector against the prediction error of the output vector, $\left\| \cdot \right\|_2$ denotes the two-norm, $\odot$ denotes the Hadamard product, and $\mathbf{1}^{\top}(\cdot)$ sums the elements of its argument.
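A minimal sketch of the loss in claim 2, assuming a squared two-norm reconstruction term plus the Poisson negative log-likelihood as the prediction term; all parameter shapes and values are hypothetical and chosen only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spae_loss(x, y, W_e, b_e, W_r, b_r, W_p, b_p, lam=1.0):
    """Loss of one supervised Poisson self-encoder: reconstruction + Poisson NLL."""
    h = sigmoid(W_e @ x + b_e)        # encoder
    x_hat = sigmoid(W_r @ h + b_r)    # input-reconstruction branch of the decoder
    y_hat = np.exp(W_p @ h + b_p)     # Poisson network branch (exp link)
    rec_err = np.sum((x - x_hat) ** 2)             # squared two-norm
    poiss_nll = np.sum(y_hat - y * np.log(y_hat))  # y * log(y_hat) is the Hadamard product
    return rec_err + lam * poiss_nll

# Toy usage: inputs scaled to [0, 1) to match the sigmoid reconstruction range
rng = np.random.default_rng(0)
x = rng.random(5)
y = np.array([4.0])                  # counting-type quality variable
W_e, b_e = rng.standard_normal((3, 5)) * 0.1, np.zeros(3)
W_r, b_r = rng.standard_normal((5, 3)) * 0.1, np.zeros(5)
W_p, b_p = rng.standard_normal((1, 3)) * 0.1, np.zeros(1)
loss = spae_loss(x, y, W_e, b_e, W_r, b_r, W_p, b_p, lam=0.5)
```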
3. The method according to claim 1, wherein in S3 the training process of the $k$-th supervised Poisson self-encoder is expressed as follows:

$h_i^{(0)} = x_i$

$h_i^{(k)} = \sigma\!\left(W_e^{(k)} h_i^{(k-1)} + b_e^{(k)}\right)$

$\hat{h}_i^{(k-1)} = \sigma\!\left(W_r^{(k)} h_i^{(k)} + b_r^{(k)}\right)$

$\hat{y}_i^{(k)} = \exp\!\left(W_p^{(k)} h_i^{(k)} + b_p^{(k)}\right)$

where $k = 1, 2, \ldots, L$; $h_i^{(k-1)}$ and $\hat{h}_i^{(k-1)}$ are respectively the input data and the reconstructed data of the $i$-th sample at the $k$-th supervised Poisson self-encoder, and $W_e^{(k)}, b_e^{(k)}$ and $W_r^{(k)}, b_r^{(k)}, W_p^{(k)}, b_p^{(k)}$ are the weight matrices and bias terms of the $k$-th layer encoder and decoder, respectively;

the loss function for the $k$-th supervised Poisson self-encoder training is as follows:

$L^{(k)} = \frac{1}{n}\sum_{i=1}^{n}\left[\left\|h_i^{(k-1)} - \hat{h}_i^{(k-1)}\right\|_2^2 + \lambda\, \mathbf{1}^{\top}\!\left(\hat{y}_i^{(k)} - y_i \odot \ln \hat{y}_i^{(k)}\right)\right]$

where $y_i$ and $\hat{y}_i^{(k)}$ respectively denote the actual observed value of the counting-type quality variable for the $i$-th sample and its predicted value at the $k$-th supervised Poisson self-encoder.
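One convenient property of the Poisson term in claim 3's loss: with pre-activation $z = W_p h + b_p$ and $\hat{y} = \exp(z)$, the negative log-likelihood $\sum(\hat{y} - y \ln \hat{y}) = \sum(\hat{y} - y z)$ has gradient $\hat{y} - y$ with respect to $z$, so a gradient-descent update of the Poisson network layer is a plain delta rule. The sketch below is an illustrative assumption about how such an update could look, not the patent's prescribed optimizer.

```python
import numpy as np

def poisson_head_step(h, y, W_p, b_p, lr=0.05):
    """One gradient-descent update of the Poisson network layer on one sample."""
    y_hat = np.exp(W_p @ h + b_p)
    g = y_hat - y                   # gradient of the Poisson NLL w.r.t. z = W_p h + b_p
    W_p = W_p - lr * np.outer(g, h)
    b_p = b_p - lr * g
    return W_p, b_p

# A few steps on a fixed hidden vector should reduce the Poisson NLL
rng = np.random.default_rng(1)
h = rng.random(4)
y = np.array([3.0])
W_p, b_p = np.zeros((1, 4)), np.zeros(1)
nll = lambda W, b: float(np.sum(np.exp(W @ h + b) - y * (W @ h + b)))
before = nll(W_p, b_p)
for _ in range(50):
    W_p, b_p = poisson_head_step(h, y, W_p, b_p)
after = nll(W_p, b_p)
```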
4. The method according to claim 1, wherein in S4 the predicted output variable $\hat{y}$ is calculated as follows:

$\hat{y} = \exp\!\left(W_y h_L + b_y\right)$

where $W_y$ and $b_y$ respectively denote the weight and bias of the Poisson network;

the loss function is as follows:

$L_y = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}^{\top}\!\left(\hat{y}_i - y_i \odot \ln \hat{y}_i\right)$
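As a worked check of the claim-4 loss (an illustrative assumption, reduced to the intercept-only case): minimizing $\frac{1}{n}\sum_i(\hat{y}_i - y_i \ln \hat{y}_i)$ with $\hat{y} = \exp(b_y)$ drives $b_y$ to $\ln(\bar{y})$, the Poisson maximum-likelihood rate.

```python
import numpy as np

# Intercept-only Poisson regression: gradient of the mean NLL w.r.t. b
# is exp(b) - mean(y), so gradient descent converges to b = log(mean(y)).
y = np.array([2.0, 4.0, 3.0, 5.0, 1.0])   # mean(y) = 3.0
b = 0.0
for _ in range(200):
    b -= 0.1 * (np.exp(b) - y.mean())
rate = np.exp(b)                           # fitted Poisson mean
```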
CN202210403851.5A 2022-04-18 2022-04-18 Counting data soft measurement modeling method based on stacking Poisson self-encoder network Pending CN114692507A (en)

Publications (1)

Publication Number Publication Date
CN114692507A true CN114692507A (en) 2022-07-01

Family

ID=82143067

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116644377A * 2023-05-16 2023-08-25 East China University of Science and Technology Soft measurement model construction method based on one-dimensional convolution-stacking self-encoder
CN116389706A * 2023-06-06 2023-07-04 Beijing DP Technology Co., Ltd. Combined training method and device for encoder and decoder of electronic microscope projection chart
CN116389706B * 2023-06-06 2023-09-29 Beijing DP Technology Co., Ltd. Combined training method and device for encoder and decoder of electronic microscope projection chart

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination