CN108062566A - Intelligent integrated soft measurement method based on multi-kernel latent feature extraction - Google Patents

Intelligent integrated soft measurement method based on multi-kernel latent feature extraction (Download PDF)

Info

Publication number
CN108062566A
CN108062566A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711327861.0A
Other languages
Chinese (zh)
Inventor
汤健
刘卓
余刚
赵建军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201711327861.0A priority Critical patent/CN108062566A/en
Publication of CN108062566A publication Critical patent/CN108062566A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G06N 7/02 Computing arrangements based on specific mathematical models using fuzzy logic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Fuzzy Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Algebra (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The present invention discloses an intelligent integrated soft measurement method based on multi-kernel latent feature extraction. Ensemble construction is carried out by extracting features with multiple candidate kernel parameters, obtaining latent feature subsets oriented to different kernel parameters. Candidate fuzzy inference submodels are built with these latent feature subsets as training subsets, and a selective integrated fuzzy inference main model is built with an optimization algorithm and an adaptive weighted fusion algorithm. The main-model prediction error is calculated; a kernel parameter is selected and KPLS is used to extract from the input data the latent feature set related to the main-model prediction error. Based on this latent feature set, ensemble construction is carried out with the Bootstrap algorithm, obtaining training subsets oriented to training-sample sampling. Candidate submodels based on the kernel random weight neural network (KRWNN) are constructed from these training subsets, and a selective integrated KRWNN compensation model is built with the genetic algorithm optimization toolbox and the adaptive weighted fusion algorithm. The outputs of the selective integrated fuzzy inference main model and the selective integrated KRWNN compensation model are merged to obtain the prediction result.

Description

Intelligent integrated soft measurement method based on multi-kernel latent feature extraction
Technical Field
The invention belongs to the technical field of industrial processes, and particularly relates to an intelligent integrated soft measurement method based on multi-kernel latent feature extraction.
Background
In the field of complex industrial processes, the mechanism complexity of the production process and the strong coupling among many factors make some key process parameters related to product quality and safety difficult to detect directly with instruments. At present, these parameters are obtained mainly through empirical estimation by experienced field experts or through manual periodic sampling and laboratory analysis; such approaches suffer from high dependence on experts, low accuracy, detection lag, and time consumption, and can hardly provide effective support for operation optimization and control of an industrial process. Building soft measurement models from offline historical data is an alternative solution to this problem [1]. Industrial process data exhibit strong nonlinearity and collinearity, and modeling with all variables not only increases the complexity of the model but also degrades its modeling accuracy and speed. In general, the number of input variables (features) is far more than needed to build an effective compact model, especially for models based on spectra, images, and text. Meanwhile, small-sample modeling data contain valuable information that is uncertain and inaccurate.
Feature extraction and feature selection techniques can effectively handle the data collinearity problem in industrial processes. Both have advantages and disadvantages: feature selection keeps only the most important related features, and the discarded features may reduce the generalization performance of the estimation model [2]; feature extraction determines suitable low-dimensional latent features to replace the original high-dimensional features in a linear or nonlinear manner. The commonly used feature extraction method based on principal component analysis (PCA) does not take into account the correlation between input and output data [3]. Feature extraction based on partial least squares, or latent variable projection (PLS), effectively overcomes this defect [4], and its kernel version, kernel PLS (KPLS), is a simple and efficient method for nonlinear feature extraction obtained by nonlinearly mapping the input data [5,6]. However, the kernel type and kernel parameters are usually related to the modeling data and are therefore difficult to select reasonably, and the KPLS method extracts different latent features with different kernel parameters.
Fuzzy inference is an efficient method for nonlinear modeling problems containing uncertain and imprecise information. Document [7] proposes a method for extracting effective fuzzy rules from modeling data, thereby reducing the difficulty of constructing an inference model. In general, the process of extracting fuzzy rules is called structure identification. Many offline and online clustering strategies, such as fuzzy C-means, hill-climbing clustering [8], subtractive clustering [9], and recursive online clustering [10], are used for fuzzy rule extraction, but these strategies do not take into account the interrelationship between the input and output data spaces. Document [11] effectively solves this problem by introducing newly designed parameters to weight the input space. Current fuzzy inference models mostly adopt a traditional single-model structure. Ensemble modeling can improve the generalization, effectiveness, and reliability of a model: by integrating submodels with diversity, ensemble learning achieves better prediction performance and stability than a single model. Studies have shown that selective ensembles (SEN), which integrate only the preferred available submodels, can achieve better generalization performance than integrating all submodels or using a single model [12]. Therefore, selectively optimizing and integrating several fuzzy inference submodels can also yield better prediction performance while simplifying the inference rules. Clearly, fuzzy inference models based on knowledge rules have strong inference capability but weak learning and pattern recognition capability.
With small-sample data, it is difficult for the error back-propagation neural network (BPNN) to establish a prediction model with high stability. The support vector machine (SVM) modeling method based on structural risk minimization is suitable for small-sample data modeling but needs considerable time to find the optimal solution. Although the random weight neural network (RWNN) is fast to solve [13,14,15], its prediction performance is also unstable for small-sample data modeling, and it is difficult to apply directly to high-dimensional data. Introducing kernel techniques into the RWNN to construct the kernel RWNN (KRWNN) model effectively overcomes the above problems [16]. Clearly, these data-driven modeling methods without explicit inference rules can effectively fit the modeling data.
Industrial practice shows that experts need to accumulate experience over a certain period before they can effectively estimate certain process parameters. From another perspective, when the accumulated experience is insufficient, the experts' judgments contain errors, and the fuzzy inference rules need to be compensated. In addition, in the process of accumulating experience, experts retain valuable information and discard and forget useless experience. To some extent, these experiences correspond to the training samples representing different working conditions used in the modeling data.
Disclosure of Invention
The invention provides an intelligent integrated soft measurement method based on multi-kernel latent feature extraction, built on the view that it simulates the human brain's strong fuzzy reasoning mechanism and compensatory cognition mechanism for the uncertain factors it faces. Within an ensemble learning framework, the method first constructs a fuzzy inference SEN main model based on multi-kernel latent features, then constructs a random weight neural network SEN compensation model based on latent features, and finally fuses the two models from a master-slave perspective. The soft measurement method essentially fuses multi-source features and multi-working-condition samples sequentially from a master-slave perspective, matching the cognitive process in which human experts reason from main knowledge and gradually compensate and perfect it in practice. The validity of the method is verified on synthetic data.
In order to achieve the purpose, the invention adopts the following technical scheme:
an intelligent integrated soft measurement method based on multi-kernel latent feature extraction comprises the following steps:
step 1, extracting features with the kernel partial least squares (KPLS) algorithm based on a plurality of candidate kernel parameters to carry out ensemble construction, obtaining latent feature subsets oriented to different kernel parameters;
step 2, constructing candidate fuzzy inference submodels with these latent feature subsets as training subsets, and constructing the selective integrated fuzzy inference main model with an optimization algorithm and the adaptive weighted fusion (AWF) algorithm;
step 3, calculating the main-model prediction error, selecting a kernel parameter, and extracting with KPLS the latent feature set in the input data related to the main-model prediction error;
step 4, carrying out ensemble construction with the Bootstrap algorithm based on this latent feature set, obtaining training subsets oriented to training-sample sampling;
step 5, constructing candidate submodels based on the kernel random weight neural network (KRWNN) from these training subsets, and constructing the selective integrated KRWNN compensation model with the genetic algorithm optimization toolbox (GAOT) and AWF;
step 6, combining the output of the selective integrated fuzzy inference main model and the output of the selective integrated KRWNN compensation model to obtain the prediction result of the intelligent integrated soft measurement model.
The soft measurement method of the invention essentially fuses multi-source features and multi-working-condition samples sequentially from a master-slave perspective, matching the cognitive process in which human experts reason from main knowledge and gradually compensate and perfect it in practice. The validity of the method is verified on synthetic data.
Drawings
FIG. 1 is a flow chart of the intelligent integrated soft measurement method based on multi-kernel latent feature extraction;
fig. 2(a) clustering threshold vs. main model prediction performance (KLV = 2); left: clustering threshold ranging over 0.001-0.01; right: clustering threshold ranging over 0.01-0.1;
fig. 2(b) clustering threshold vs. main model prediction performance (KLV = 3); left: clustering threshold ranging over 0.001-0.01; right: clustering threshold ranging over 0.01-0.1;
fig. 2(c) clustering threshold vs. main model prediction performance (KLV = 4); left: clustering threshold ranging over 0.001-0.01; right: clustering threshold ranging over 0.01-0.1;
fig. 2(d) clustering threshold vs. main model prediction performance (KLV = 5); left: clustering threshold ranging over 0.001-0.01; right: clustering threshold ranging over 0.01-0.1;
FIG. 3 is a Gaussian membership function plot for the sub-model No. 3;
FIG. 4 clustering groups of the latent variables and output variable of the No. 3 sub-model;
FIG. 5 is a membership function plot of the 3rd sub-model's cluster centers;
FIG. 6 shows the output errors of the fuzzy inference main model on the training data;
FIG. 7(a) kernel radius for latent feature extraction vs. prediction performance of the intelligent integrated soft measurement model; left: kernel radius between 0.01 and 0.1; right: kernel radius between 0.1 and 1;
FIG. 7(b) penalty parameter of KRWNN vs. prediction performance of the intelligent integrated soft measurement model; left: penalty parameter between 1 and 100; middle: penalty parameter between 100 and 1000; right: penalty parameter between 1000 and 10000;
FIG. 7(c) kernel radius of KRWNN vs. prediction performance of the intelligent integrated soft measurement model; left: kernel radius between 0.001 and 0.01; middle: kernel radius between 0.01 and 0.1; right: kernel radius between 0.1 and 1;
FIG. 8 prediction curves of the fuzzy main model and the intelligent integrated soft measurement model; left: on the training data; right: on the testing data;
FIG. 9 prediction errors of the fuzzy main model and the intelligent integrated soft measurement model; left: on the training data; right: on the testing data.
Detailed Description
As shown in fig. 1, the present invention provides an intelligent integrated soft measurement method based on multi-kernel latent feature extraction, which includes:
step 1, extracting features with the kernel partial least squares (KPLS) algorithm based on a plurality of candidate kernel parameters to carry out ensemble construction, obtaining latent feature subsets oriented to different kernel parameters;
step 2, constructing candidate fuzzy inference submodels with these latent feature subsets as training subsets, and constructing the selective integrated fuzzy inference main model with an optimization algorithm and the adaptive weighted fusion (AWF) algorithm;
step 3, calculating the main-model prediction error, selecting a kernel parameter, and extracting with KPLS the latent feature set in the input data related to the main-model prediction error;
step 4, carrying out ensemble construction with the Bootstrap algorithm based on this latent feature set, obtaining training subsets oriented to training-sample sampling;
step 5, constructing candidate submodels based on the kernel random weight neural network (KRWNN) from these training subsets, and constructing the selective integrated KRWNN compensation model with the genetic algorithm optimization toolbox (GAOT) and AWF;
step 6, combining the output of the selective integrated fuzzy inference main model and the output of the selective integrated KRWNN compensation model to obtain the prediction result of the intelligent integrated soft measurement model.
The intelligent integrated soft measurement method based on multi-kernel latent feature extraction mainly comprises: ensemble construction based on multi-kernel latent feature extraction; a selective integrated fuzzy inference model based on the branch-and-bound (BB) algorithm; latent feature extraction based on KPLS; ensemble construction based on Bootstrap; a selective integrated KRWNN model based on GA; and model output combination. The main model comprises the ensemble construction based on multi-kernel latent feature extraction and the selective integrated fuzzy inference model based on the BB algorithm, while the compensation model comprises the KPLS-based latent feature extraction, the Bootstrap-based ensemble construction, and the GA-based selective integrated KRWNN model, as shown in figure 1.
In fig. 1, $x = [x_1, \ldots, x_p]$ and $y$ denote the input and output of the industrial process modeling object. Assume the number of samples acquired offline is $k$, so the modeling dataset can be represented as $\{x_l, y_l\}_{l=1}^{k}$. $\{(p_{ker})_j\}_{j=1}^{J}$ denotes the set of $J$ candidate kernel parameters, with $(p_{ker})_j$ the $j$th kernel parameter. $\{z^j\}_{j=1}^{J}$ denotes the set of $J$ latent features extracted using KPLS, which also serve as the training subsets for constructing the candidate fuzzy inference submodels, with $z^j$ the $j$th latent feature extracted based on the $j$th kernel parameter. $z$ denotes the latent features extracted based on the kernel parameter $p_{ker}$. $\{z^{j'}\}_{j'=1}^{J'}$ denotes the training subsets generated using Bootstrap, which are also the training subsets used to construct the candidate KRWNN submodels, with $z^{j'}$ the $j'$th training subset. $\hat{y}_{Fuzzy}$ and $\hat{y}_{KRWNN}$ denote the outputs of the main model and the compensation model, respectively, and $\hat{y}$ denotes the output of the built intelligent integrated soft measurement model.
Fig. 1 shows that the two ensembles in the main model and the compensation model are different. The ensemble in the main model is constructed from multi-kernel latent features; the selective integrated fuzzy inference model, established with the BB algorithm and the adaptive weighted fusion (AWF) algorithm, fuses multi-source features and can simulate main knowledge. The ensemble in the compensation model is constructed with the Bootstrap algorithm from latent features oriented toward the main-model prediction error; the selective integrated KRWNN model, established with the GA and AWF algorithms, fuses multi-working-condition samples and can simulate auxiliary knowledge. To a certain extent, this strategy can simulate the fuzzy reasoning mechanism and the compensatory cognition mechanism of the human brain when facing uncertain factors.
Step 1, extracting features with the kernel partial least squares (KPLS) algorithm based on a plurality of candidate kernel parameters to carry out ensemble construction, obtaining latent feature subsets oriented to different kernel parameters, specifically:
When KPLS is used for latent feature extraction, even with the same kernel function (such as the commonly used radial basis function), the extracted latent features differ for different kernel parameters. Therefore, multi-kernel latent features can be extracted from the modeling dataset $\{x_l, y_l\}_{l=1}^{k}$ with different kernel parameters, thereby realizing the ensemble construction.
Take the $j$th kernel parameter $(p_{ker})_j$ as an example. Based on $(p_{ker})_j$, $X$ is mapped to a high-dimensional space with the selected kernel function, and the resulting kernel matrix is denoted $K^j$. It is centered according to:

$$\tilde{K}^j = \left(I - \frac{1}{k}1_k 1_k^T\right) K^j \left(I - \frac{1}{k}1_k 1_k^T\right) \qquad (1)$$

where $I$ is the $k$-dimensional identity matrix and $1_k$ is a length-$k$ vector of ones.
A total of $h$ kernel latent variables (KLVs) are extracted by the KPLS algorithm shown in Table 1.
TABLE 1 KPLS algorithm
Through the KPLS algorithm of Table 1, the low-dimensional score matrices $T^j = [t_1, t_2, \ldots, t_h]$ and $U^j = [u_1, u_2, \ldots, u_h]$ are obtained. The dimensionality of the original input matrix $X$ is reduced to $h$, and the extracted features can be written as:

$$Z^j = \tilde{K}^j U^j \left((T^j)^T \tilde{K}^j U^j\right)^{-1} = \left\{(z^j)_l\right\}_{l=1}^{k} \qquad (2)$$
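A minimal sketch of this KPLS feature extraction, assuming an RBF kernel; the function and variable names are illustrative, not taken from the patent, and the inner iteration follows the standard NIPALS-style KPLS procedure that Table 1 summarizes:

```python
import numpy as np

def rbf_kernel(X, radius):
    # K_ij = exp(-||x_i - x_j||^2 / radius); the exact RBF parameterization
    # used in the patent is not specified, so this form is an assumption.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / radius)

def kpls_features(X, y, radius, h):
    """Extract h kernel latent variables (KLVs); returns the scores Z of Eq. (2)."""
    k = X.shape[0]
    K = rbf_kernel(X, radius)
    J_c = np.eye(k) - np.ones((k, k)) / k      # centering matrix (I - 1/k 1 1^T)
    Kc = J_c @ K @ J_c                         # Eq. (1): centered kernel matrix
    Y = y.reshape(-1, 1).astype(float)
    Kd, Yd = Kc.copy(), Y.copy()               # deflated working copies
    T, U = [], []
    for _ in range(h):
        u = Yd[:, [0]]
        for _ in range(100):                   # NIPALS-style inner iteration
            t = Kd @ u
            t /= np.linalg.norm(t)
            c = Yd.T @ t
            u_new = Yd @ c
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < 1e-10:
                u = u_new
                break
            u = u_new
        T.append(t); U.append(u)
        P = np.eye(k) - t @ t.T                # deflate K and Y
        Kd = P @ Kd @ P
        Yd = Yd - t @ (t.T @ Yd)
    T, U = np.hstack(T), np.hstack(U)
    Z = Kc @ U @ np.linalg.inv(T.T @ Kc @ U)   # Eq. (2): latent features
    return Z, T, U
```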
the process of integrated construction based on multi-core latent feature extraction can be represented by the following processes:
wherein,representing a set of candidate kernel parameters; j represents the number of adopted candidate kernel parameters, namely the training subset extracted by KPLS and based on potential features, and the number of candidate fuzzy inference submodels.
The training subsets generated by this ensemble construction method keep the input feature dimension and sample number unchanged; only the latent features differ. Therefore, the proposed way of obtaining latent variables for ensemble construction by "manipulating kernel function parameters" can be regarded as a special case of "manipulating input features". Since each newly generated training subset has different input features and the same output, each training subset can be considered a new information source. Constructing the SEN model from these multi-source data is similar to a domain expert selecting valuable source information to identify or estimate process parameters.
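As a minimal sketch of this "manipulating kernel function parameters" construction, reusing the kpls_features function from the sketch above; the candidate radii are illustrative, not the patent's values:

```python
# Each candidate kernel radius yields one latent-feature training subset;
# all subsets share the same samples and output y, differing only in features.
candidate_radii = [0.01, 0.1, 1.0, 10.0, 100.0]   # illustrative values

def build_training_subsets(X, y, radii, h):
    # Reuses kpls_features from the KPLS sketch above.
    return [(kpls_features(X, y, r, h)[0], y) for r in radii]
```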
Step 2, constructing candidate fuzzy inference submodels with the latent feature subsets as training subsets, and constructing the selective integrated fuzzy inference main model with an optimization algorithm and the adaptive weighted fusion (AWF) algorithm, specifically:
A candidate submodel based on fuzzy inference is constructed for each generated training subset; the $j$th candidate fuzzy inference submodel is trained on the $j$th training subset, where $L$ denotes the clustering threshold set when constructing the fuzzy inference submodel.
The set of all $J$ candidate submodels can be represented as:

$$S_{Fuzzy}^{Can} = \left\{f_{Fuzzy}^{can}(\cdot)_j\right\}_{j=1}^{J} \qquad (5)$$

where $S_{Fuzzy}^{Can}$ denotes the set of all candidate submodels.
All selected integrated submodels are denoted $S_{Fuzzy}^{Sel}$; the relationship between the integrated submodels and the candidate submodels can be expressed as:

$$S_{Fuzzy}^{Sel} = \left\{f_{Fuzzy}^{sel}(\cdot)_{j_{sel}}\right\}_{j_{sel}=1}^{J_{sel}} \in S_{Fuzzy}^{Can}, \quad J_{sel} \le J \qquad (6)$$

where $S_{Fuzzy}^{Sel}$ denotes the set of integrated submodels; $j_{sel} = 1, 2, \ldots, J_{sel}$, and $J_{sel}$ is the ensemble size of the selective integrated fuzzy inference model, i.e., the number of selected integrated submodels.
The weighting coefficients of the integrated submodels are calculated with the AWF algorithm according to:

$$w_{j_{sel}} = 1 \Big/ \left( (\sigma_{j_{sel}})^2 \sum_{j_{sel}=1}^{J_{sel}} \frac{1}{(\sigma_{j_{sel}})^2} \right) \qquad (7)$$

where $\sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} = 1$ and $0 \le w_{j_{sel}} \le 1$; $w_{j_{sel}}$ is the weighting coefficient of the $j_{sel}$th integrated fuzzy inference submodel, and $\sigma_{j_{sel}}$ is the standard deviation of the output values of the $j_{sel}$th integrated fuzzy inference submodel.
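A minimal sketch of this AWF weighting, assuming the submodel outputs are stacked row-wise; function and variable names are illustrative:

```python
import numpy as np

def awf_weights(sub_outputs):
    """Adaptive weighted fusion, Eq. (7): weights inversely proportional to the
    variance of each submodel's output; sub_outputs has shape (n_submodels, n_samples)."""
    var = np.var(np.asarray(sub_outputs), axis=1)
    inv = 1.0 / var
    # Equivalent to w_j = 1 / (sigma_j^2 * sum_i 1/sigma_i^2); weights sum to 1.
    return inv / inv.sum()
```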
The root mean square relative error (RMSRE) of the selective integration model can be expressed as:

$$E_{rmsre} = \sqrt{\frac{1}{k}\sum_{l=1}^{k}\left(\frac{y^l - \hat{y}^l}{y^l}\right)^2} = \sqrt{\frac{1}{k}\sum_{l=1}^{k}\left(\frac{y^l - \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}}^{l}}{y^l}\right)^2} \qquad (8)$$

where $k$ is the number of samples; $y^l$ is the true value of the $l$th sample; $\hat{y}^l$ is the prediction of the selective integration model for the $l$th sample; and $\hat{y}_{j_{sel}}^{l}$ is the prediction of the $j_{sel}$th integrated fuzzy inference submodel for the $l$th sample.
Establishing the selective integrated fuzzy inference model requires determining the number of integrated fuzzy inference submodels, selecting the submodels, and determining their weighting coefficients $w_{j_{sel}}$, which can be expressed as the following optimization problem:

$$\begin{aligned} \min \quad & E_{rmsre} = \sqrt{\frac{1}{k}\sum_{l=1}^{k}\left(\frac{y^l - \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}}^{l}}{y^l}\right)^2} \\ \text{s.t.} \quad & \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} = 1, \; 0 \le w_{j_{sel}} \le 1, \; 1 < j_{sel} < J_{sel}, \; 1 < J_{sel} \le J \end{aligned} \qquad (9)$$

Converted to a maximization objective, the above optimization problem becomes:

$$\begin{aligned} \max \quad & \theta_{th} - \sqrt{\frac{1}{k}\sum_{l=1}^{k}\left(\frac{y^l - \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}}^{l}}{y^l}\right)^2} \\ \text{s.t.} \quad & \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} = 1, \; 0 \le w_{j_{sel}} \le 1, \; 1 < j_{sel} < J_{sel}, \; 1 < J_{sel} \le J \end{aligned} \qquad (10)$$

where $\theta_{th}$ is a set threshold.
Directly solving the above optimization problem requires simultaneously determining the number of integrated fuzzy inference submodels, selecting the submodels, and computing their weighting coefficients. However, how many submodels should be integrated is not known in advance, and the weighting coefficients are obtained by the weighting algorithm only after the submodels have been selected. This complex optimization problem is therefore decomposed into several sub-optimization problems: (1) first, fix the number of integrated fuzzy inference submodels; (2) then select the integrated fuzzy inference submodels and calculate their weighting coefficients; (3) after the optimal selective integrated fuzzy inference models for the different submodel counts have been obtained, select, by sorting, the one with the smallest modeling error as the final soft measurement model.
With the weighting coefficients determined by the AWF algorithm, selecting the optimal integrated fuzzy inference submodels is similar to optimal feature selection. When the number of optimal features is known, only an enumeration algorithm or the BB algorithm can guarantee optimal feature selection. The BB algorithm is a combinatorial optimization tool that obtains the optimal subset with high computational efficiency through branching and bounding. Therefore, by combining the BB-based optimization algorithm with the AWF-based weighting algorithm, selective ensemble modeling that simultaneously selects the optimal integrated fuzzy inference submodels and calculates their weighting coefficients can be realized. The optimal submodels are selected by running the BB and AWF algorithms multiple times: first, the optimal selective integrated fuzzy inference models for ensemble sizes 2, 3, ..., (J-1) are determined; these models are then sorted, and the final fuzzy inference main model is selected according to modeling accuracy.
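The decomposition above can be sketched as follows, with exhaustive enumeration standing in for the BB search (the patent uses BB; enumeration is exponential and shown only for illustration) and awf_weights reused from the sketch above:

```python
import numpy as np
from itertools import combinations

def rmsre(y, y_hat):
    # Root mean square relative error, Eq. (8)
    return np.sqrt(np.mean(((y - y_hat) / y) ** 2))

def select_sen_model(y, candidate_preds):
    """candidate_preds: (J, k) array of predictions of the J candidate submodels."""
    J = candidate_preds.shape[0]
    best = None
    for size in range(2, J):                      # step (1): fix ensemble size 2..J-1
        for idx in combinations(range(J), size):  # BB would prune this search
            preds = candidate_preds[list(idx)]
            w = awf_weights(preds)                # step (2): AWF weights, Eq. (7)
            err = rmsre(y, w @ preds)
            if best is None or err < best[0]:     # step (3): keep the smallest error
                best = (err, idx, w)
    return best                                   # (error, selected indices, weights)
```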
In summary, the selective integrated fuzzy inference model algorithm based on BB and AWF is shown in table 2.
TABLE 2 Selective integrated fuzzy inference modeling algorithm based on BB and AWF
The final output value of the fuzzy inference main model, $\hat{y}_{Fuzzy}$, is calculated as:

$$\hat{y}_{Fuzzy} = \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}}$$

where $\hat{y}_{j_{sel}}$ denotes the output of the $j_{sel}$th integrated fuzzy inference submodel.
Step 3, calculating the main-model prediction error, selecting a kernel parameter, and extracting with KPLS the latent feature set in the input data related to the main-model prediction error, specifically:
The prediction error of the main model is first calculated as $e = y - \hat{y}_{Fuzzy}$.
Next, with the kernel parameter $p_{ker}$, latent feature extraction is performed on the input/output data based on the KPLS algorithm shown in Table 1, i.e., $X$ and the prediction error $e$ are substituted for $X$ and $y$ in Table 1. The latent features extracted in this way are

$$Z = \tilde{K} U \left(T^T \tilde{K} U\right)^{-1}$$

where $\tilde{K}$ denotes the kernel matrix mapped and centered based on the kernel parameter $p_{ker}$, and $T$ and $U$ denote the corresponding low-dimensional score matrices obtained by the algorithm shown in Table 1.
The above process can be expressed as a KPLS mapping from the input data $X$ and the main-model prediction error $e$ to the latent feature set $Z$.
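As a minimal illustration of step 3, reusing kpls_features from the sketch above; p_ker_selected and y_hat_fuzzy are illustrative names, not identifiers from the patent:

```python
# Step 3 sketch: re-run KPLS with the selected kernel parameter, replacing
# the output y with the main-model prediction error.
error = y - y_hat_fuzzy                                       # main-model prediction error
Z_err, T_err, U_err = kpls_features(X, error, radius=p_ker_selected, h=5)
```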
Step 4, based on the latent feature set, carrying out ensemble construction with the Bootstrap algorithm to obtain training subsets oriented to training-sample sampling, specifically:
For these latent features, ensemble construction is performed by "manipulating the training samples"; the aim is to select representative training samples for constructing the final compensation model.
Ensemble construction is carried out with the Bootstrap algorithm, which resamples the latent-feature training data with replacement to generate $J'$ training subsets $\{z^{j'}\}_{j'=1}^{J'}$, where $J'$ denotes the number of training subsets generated using Bootstrap and is also the number of candidate KRWNN submodels and the GA population size.
The training subsets generated by this ensemble construction method keep the input feature dimension and sample number unchanged, but contain input/output sample pairs with different indices; because sampling is with replacement, repeated input/output sample pairs exist within a training subset. Valuable samples can thus be reused.
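A minimal sketch of this Bootstrap ensemble construction; names are illustrative, with Z denoting the extracted latent features and e the main-model prediction errors:

```python
import numpy as np

def bootstrap_subsets(Z, e, J_prime, rng=np.random.default_rng(0)):
    """Generate J' training subsets by sampling (Z, e) pairs with replacement."""
    k = Z.shape[0]
    subsets = []
    for _ in range(J_prime):
        idx = rng.integers(0, k, size=k)   # sampling with replacement keeps the
        subsets.append((Z[idx], e[idx]))   # size k but may repeat sample pairs
    return subsets
```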
Step 5, constructing candidate submodels based on the kernel random weight neural network (KRWNN) from these training subsets, and constructing the selective integrated KRWNN compensation model with the genetic algorithm optimization toolbox (GAOT) and AWF, specifically:
A KRWNN-based candidate submodel is constructed for each generated training subset; the $j'$th candidate KRWNN submodel is trained on the $j'$th training subset, where $K_{KRWNN}$ and $C_{KRWNN}$ denote the kernel parameter and the penalty parameter of the KRWNN model.
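A minimal sketch of one candidate KRWNN submodel, assuming a kernel-ridge style closed-form solution of the output weights as in kernelized random-weight networks; the exact KRWNN formulation of reference [16] may differ. rbf_kernel is reused from the KPLS sketch above:

```python
import numpy as np

def train_krwnn(Z, e, radius, C):
    """Train a KRWNN-style submodel on latent features Z and targets e;
    radius and C play the roles of K_KRWNN and C_KRWNN."""
    K = rbf_kernel(Z, radius)
    # Closed-form output weights with penalty parameter C (ridge regularization).
    beta = np.linalg.solve(K + np.eye(len(Z)) / C, e)
    def predict(Z_new):
        sq1 = np.sum(Z_new**2, axis=1)[:, None]
        sq2 = np.sum(Z**2, axis=1)[None, :]
        K_new = np.exp(-(sq1 + sq2 - 2.0 * Z_new @ Z.T) / radius)
        return K_new @ beta
    return predict
```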
Thus, the set of all $J'$ candidate KRWNN submodels can be represented as $S_{KRWNN}^{Can} = \{f_{KRWNN}^{can}(\cdot)_{j'}\}_{j'=1}^{J'}$, where $S_{KRWNN}^{Can}$ denotes the set of all candidate submodels.
Constructing an effective SEN model requires selecting and merging, from the candidate KRWNN submodels, integrated KRWNN submodels with diversity and prediction accuracy. All selected integrated KRWNN submodels are denoted $S_{KRWNN}^{Sel} = \{f_{KRWNN}^{sel}(\cdot)_{j'_{sel}}\}_{j'_{sel}=1}^{J'_{sel}} \in S_{KRWNN}^{Can}$, where $S_{KRWNN}^{Sel}$ denotes the set of integrated submodels and $J'_{sel}$ the ensemble size of the SEN model.
Theoretically, constructing an effective SEN model requires a validation dataset; here the validation dataset of latent features extracted with respect to the main-model prediction error is used. The prediction outputs of the candidate KRWNN submodels on this validation dataset are computed, and the corresponding prediction errors are obtained from them.
the correlation coefficient between the jth 'th and s' th candidate KRWNN submodels is obtained by using the following formula:
the correlation matrix thus obtained is represented by the following formula:
Next, a random weight vector is generated for each candidate submodel. Based on the correlation matrix, these weight vectors are evolved with the GAOT toolbox to obtain optimized weight vectors, and the submodels whose optimized weights exceed the threshold $1/J'$ are selected as the integrated KRWNN submodels, whose outputs are then fused.
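A minimal sketch of this selection step, with a simple random search standing in for the GAOT genetic evolution and a weighted-correlation objective assumed as the fitness; both substitutions are illustrative assumptions, not the patent's GA:

```python
import numpy as np

def select_krwnn_submodels(errors, n_iter=2000, rng=np.random.default_rng(0)):
    """errors: (J', k) prediction errors of the candidate submodels on validation data."""
    Jp = errors.shape[0]
    R = np.corrcoef(errors)                 # correlation matrix of submodel errors
    best_w, best_obj = None, np.inf
    for _ in range(n_iter):                 # random search in place of GA evolution
        w = rng.random(Jp)
        w /= w.sum()                        # candidate weight vector
        obj = w @ R @ w                     # lower weighted correlation -> more diversity
        if obj < best_obj:
            best_obj, best_w = obj, w
    return np.flatnonzero(best_w > 1.0 / Jp)   # keep weights above the 1/J' threshold
```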
the weights of these integrated KRWNN submodels are calculated using an Adaptive Weighted Fusion (AWF) algorithm:
wherein sigmaj′selIs the standard deviation of the prediction output of the integrated KRWNN submodel.
The selective integrated KRWNN model based on KPLS and GA is used as the compensation model, and its output is expressed as $\hat{y}_{KRWNN} = \sum_{j'_{sel}=1}^{J'_{sel}} w_{j'_{sel}} \hat{y}_{j'_{sel}}$.
Step 6, combining the output of the selective integrated fuzzy inference main model and the output of the selective integrated KRWNN compensation model to obtain the prediction result of the intelligent integrated soft measurement model, specifically:
The outputs of the main model and the compensation model are added to obtain the output of the intelligent integrated soft measurement model: $\hat{y} = \hat{y}_{Fuzzy} + \hat{y}_{KRWNN}$.
simulation verification
The test function for generating the simulation verification data is as follows:
where $t \in [-1, 1]$ and $\xi$ is noise, for the outputs $y_i$, $i = 1, 2, 3, 4, 5, 6$. The data are distributed over 4 different regions C1, C2, C3 and C4, as detailed in Table 3.
TABLE 3 Different regions of the simulation data
The numbers of modeling and testing samples for this simulation experiment are 240 and 120, respectively: the training samples consist of 60 samples in each region and the testing samples of 30 samples in each region. Kernel latent feature extraction is performed on the modeling data. The commonly used RBF kernel is chosen, and the latent variable contribution rates for different kernel radii and different numbers of KLVs are shown in Table 4.
TABLE 4(a) Latent variable contributions (kernel radius 0.1)
TABLE 4(b) Latent variable contributions (kernel radius 1)
TABLE 4(c) Latent variable contributions (kernel radius 10)
TABLE 4(d) Latent variable contributions (kernel radius 100)
Table 4 shows that different kernel radii have a large effect on the latent variable contribution rates of the input and output data. The kernel radius is taken here from the 26 candidate values "0.01, 0.03, 0.05, 0.07, 0.09, 0.1, 0.3, 0.5, 0.7, 0.9, 1, 3, 5, 7, 9, 10, 30, 50, 70, 100, 300, 500, 700, 900, 1000". Thus, 26 training subsets are generated in total, and 26 candidate fuzzy inference submodels can be constructed.
The relations between the clustering threshold and the main-model prediction performance for KLV = 2, 3, 4, 5 are shown in Fig. 2.
As can be seen from Fig. 2, for KLV = 2, 4, 5 there is an optimum (around 0.01) in the prediction error on the training and testing data, and the magnitude of this value is clearly data-dependent. For KLV = 3, the variation of the prediction error with the clustering threshold is not obvious, and the generalization performance on the training data is weaker than on the test data; the reason remains to be studied further. Theoretically, the smaller the clustering threshold, the greater the number of clusters, so a smaller threshold increases the number of rules and the complexity of the model. A balanced choice is therefore required according to the modeling object.
In this example, a clustering threshold of 0.01 and KLV = 4 are selected. With the candidate kernel parameters chosen above, 26 candidate submodels are generated; excluding the single best submodel and the full ensemble of all submodels, 24 SEN models are generated in total. The candidate submodels and the top 9 SEN fuzzy inference models by prediction performance are shown in Tables 5 and 6.
TABLE 5 Statistics of the candidate fuzzy inference submodels
TABLE 6 Statistics of the SEN fuzzy inference models
From Table 5, the fuzzy inference submodels ordered by test error are: 5, 6, 4, 19, 20, 18, 21, 3, 22, 7, 23, 24, 25, 8, 2, 9, 11. From Table 6, the integrated fuzzy inference submodels at ensemble size 2 are submodels 7 and 3, selected on the basis of training accuracy. Therefore, SEN fuzzy inference can select the submodels that yield the best fusion. The kernel parameter of the 3rd submodel is 0.05, and the contribution rates of its extracted latent variables are shown in Table 7.
TABLE 7 Input latent variable contribution ratios of sub-model No. 3 on the synthetic data (kernel radius 0.03)
Figs. 3 to 5 show the Gaussian membership function curves, the clustering groups of the latent variables and output variable, and the membership functions of the No. 3 submodel.
As can be seen from Fig. 3, different latent variables have different membership function curves according to their data characteristics; from Fig. 4, the raw data are divided into 58 groups; Fig. 5 shows the membership function for each data group of the 120 training samples.
And calculating the training data output error of the fuzzy inference main model, as shown in figure 6.
The compensation model is trained with the output errors shown in Fig. 6 as the true output values. Here, the number of KLVs used for the compensation model's latent feature extraction is set to 5, and the number of candidate submodels for the compensation model is set to 40. The relations between the main modeling parameters of the compensation model (the kernel radius for latent feature extraction, the penalty parameter of KRWNN, and the kernel radius of KRWNN) and the prediction performance of the intelligent integrated soft measurement model are shown in Fig. 7.
According to Fig. 7, the model parameters are selected as: kernel radius for latent feature extraction 0.01, KRWNN penalty parameter 4000, and KRWNN kernel radius 0.009. The prediction curves and prediction errors of the final main model and the compensated intelligent integrated soft measurement model are shown in Figs. 8 and 9.
As shown in Figs. 8 and 9, the prediction performance of the intelligent integrated soft measurement model is significantly better than that of the fuzzy main model, especially on the training data. Because the Bootstrap-based ensemble construction in the compensation model and the optimization of the integrated submodels with the GAOT toolbox introduce random factors, the statistics of 20 runs of the intelligent integrated soft measurement model are compared with the fuzzy main model in Table 8.
TABLE 8 Statistical results of the main model and the intelligent integrated soft measurement model
From Table 8, comparing the training errors of the main model and the intelligent integrated model, the maximum error of the intelligent integrated model over 20 runs is reduced by 40%, the difference between the maximum and minimum values is 0.0032, and the variance is only 0.0007522. Comparing the test errors, the 20-run average improves the prediction accuracy by 7.3%, less than the improvement in training accuracy (about 6%); this indicates that the generalization performance of the model is limited and that over-fitting to the training data exists, so further optimization and selection of the model parameters are required. The main reasons are that the model has several learning parameters to select, with strong coupling among them, and that the compensation model is trained with only the difference between the true values and the main-model predictions as the teacher signal, which leads to over-fitting in training.
The invention provides an intelligent integrated soft measurement method based on multi-kernel latent feature extraction. Its main innovations are: the main model and the auxiliary (compensation) model are both selective ensemble models but adopt different ensemble construction strategies; the main model uses an ensemble construction strategy based on multiple latent-variable feature subsets, while the auxiliary model uses an ensemble construction strategy based on manipulating the training samples. The soft measurement method essentially fuses multi-source features and multi-working-condition samples sequentially from a master-slave perspective, matching the cognitive process in which human experts reason from main knowledge and gradually compensate and perfect it in practice. The validity of the method is verified on synthetic data.
References
[1] Kadlec P., Gabrys B., Strand S. Data-driven soft-sensors in the process industry [J]. Computers and Chemical Engineering, 2009, 33(4): 795-814.
[2] Lázaro J.M.B.D., Moreno A.P., Santiago O.L., and Neto A.J.D.S. Optimizing kernel methods to reduce dimensionality in fault diagnosis of industrial systems [J]. Computers & Industrial Engineering, 2015, 87(C): 140-149.
[3] Tang J., Chai T.Y., Zhao L.J., Yu W., and Yue H. Soft sensor for parameters of mill load based on multi-spectral segments PLS sub-models and on-line adaptive weighted fusion algorithm [J]. Neurocomputing, 2012, 78(1): 38-47.
[4] Charanpal D., Gunn S.R., and John S.T. Efficient sparse kernel feature extraction based on partial least squares [J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2009, 31(8): 1347-1361.
[5] Qin S.J. Survey on data-driven industrial process monitoring and diagnosis [J]. Annual Reviews in Control, 2012, 36(2): 220-234.
[6] Motai Y. Kernel association for classification and prediction: A survey [J]. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(2): 208-223.
[7] Wang L.X., and Mendel J.M. Generating fuzzy rules by learning from examples [J]. IEEE Transactions on Systems, Man, and Cybernetics, 2002, 22(6): 1414-1427.
[8] Mitra S., and Hayashi Y. Neuro-fuzzy rule generation: survey in soft computing framework [J]. IEEE Transactions on Neural Networks, 2000, 11(3): 748-768.
[9] Chiu S.L. Fuzzy model identification based on cluster estimation [J]. Journal of Intelligent and Fuzzy Systems, 1994, 2: 267-278.
[10] Angelov P. An approach for fuzzy rule-base adaptation using on-line clustering [J]. International Journal of Approximate Reasoning, 2004, 35(3): 275-289.
[11] Yu W., and Li X.O. On-line fuzzy modeling via clustering and support vector machines [J]. Information Sciences, 2008, 178(22): 4264-4279.
[12] Zhou Z.H., Wu J., and Tang W. Ensembling neural networks: many could be better than all [J]. Artificial Intelligence, 2002, 137(1-2): 239-263.
[13] Pao Y.H., Takefuji Y. Functional-link net computing: theory, system architecture, and functionalities [J]. IEEE Computer, 1992, 25(5): 76-79.
[14] Igelnik B., Pao Y.H. Stochastic choice of basis functions in adaptive function approximation and the functional-link net [J]. IEEE Transactions on Neural Networks, 1995, 6(6): 1320-1329.
[15] Comminiello D., Scarpiniti M., Azpicueta-Ruiz L.A., Arenas-Garcia J., Uncini A. Functional link adaptive filters for nonlinear acoustic echo cancellation [J]. IEEE Transactions on Audio, Speech and Language Processing, 2013, 21(7): 1502-1512.
[16] Tang J., Jia M.Y., Li D. Selective ensemble simulate metamodeling approach based on latent features extraction and kernel learning [C]. In: the 27th Chinese Control and Decision Conference (2015 CCDC), Qingdao, China, May 23-25, 2015.

Claims (7)

1. An intelligent integrated soft measurement method based on multi-kernel latent feature extraction, characterized by comprising the following steps:
step 1, extracting features with the kernel partial least squares (KPLS) algorithm based on a plurality of candidate kernel parameters to carry out ensemble construction, obtaining latent feature subsets oriented to different kernel parameters;
step 2, constructing candidate fuzzy inference submodels with these latent feature subsets as training subsets, and constructing the selective integrated fuzzy inference main model with an optimization algorithm and the adaptive weighted fusion (AWF) algorithm;
step 3, calculating the main-model prediction error, selecting a kernel parameter, and extracting with KPLS the latent feature set in the input data related to the main-model prediction error;
step 4, carrying out ensemble construction with the Bootstrap algorithm based on this latent feature set, obtaining training subsets oriented to training-sample sampling;
step 5, constructing candidate submodels based on the kernel random weight neural network (KRWNN) from these training subsets, and constructing the selective integrated KRWNN compensation model with the genetic algorithm optimization toolbox (GAOT) and AWF;
step 6, combining the output of the selective integrated fuzzy inference main model and the output of the selective integrated KRWNN compensation model to obtain the prediction result of the intelligent integrated soft measurement model.
2. The intelligent integrated soft measurement method based on multi-kernel latent feature extraction according to claim 1, wherein $x = [x_1, \ldots, x_p]$ and $y$ denote the input and output of the industrial process modeling object; assuming the number of samples acquired offline is $k$, the modeling dataset can be represented as $\{x_l, y_l\}_{l=1}^{k}$; $\{(p_{ker})_j\}_{j=1}^{J}$ represents the set of $J$ candidate kernel parameters, and $(p_{ker})_j$ represents the $j$th kernel parameter;
the step 1 specifically comprises the following steps:
when KPLS is adopted to extract latent features, multi-kernel latent features are extracted from the modeling dataset $\{x_l, y_l\}_{l=1}^{k}$ with different kernel parameters, thereby realizing the ensemble construction;
taking the $j$th kernel parameter $(p_{ker})_j$ as an example: based on $(p_{ker})_j$, $X$ is mapped to a high-dimensional space with the selected kernel function, the resulting kernel matrix is denoted $K^j$, and it is centered according to the following formula:
$$\tilde{K}^j = \left(I - \frac{1}{k} 1_k 1_k^T\right) K^j \left(I - \frac{1}{k} 1_k 1_k^T\right) \qquad (1)$$
wherein $I$ is the $k$-dimensional identity matrix and $1_k$ is a length-$k$ vector of ones;
through the KPLS algorithm, the low-dimensional score matrices $T^j = [t_1, t_2, \ldots, t_h]$ and $U^j = [u_1, u_2, \ldots, u_h]$ are obtained, the dimension of the original input matrix $X$ is reduced to $h$, and the extracted features can be written as:
$$Z^j = \tilde{K}^j U^j \left((T^j)^T \tilde{K}^j U^j\right)^{-1} = \left\{(z^j)_l\right\}_{l=1}^{k} \qquad (2)$$
the process of ensemble construction based on multi-kernel latent feature extraction maps the candidate kernel parameter set to the latent feature subsets, wherein $\{(p_{ker})_j\}_{j=1}^{J}$ represents the set of candidate kernel parameters and $J$ represents the number of candidate kernel parameters adopted, i.e., the number of latent-feature training subsets extracted by KPLS and the number of candidate fuzzy inference submodels.
3. The intelligent integrated soft measurement method based on multi-kernel latent feature extraction according to claim 1, wherein the step 2 specifically comprises:
a candidate submodel based on fuzzy inference is constructed for each generated training subset, the $j$th candidate fuzzy inference submodel being trained on the $j$th training subset with a clustering threshold $L$ set during construction,
the set of all J candidate submodels may be represented as:
$$S_{Fuzzy}^{Can} = \left\{f_{Fuzzy}^{can}(\cdot)_j\right\}_{j=1}^{J} \qquad (5)$$
wherein $S_{Fuzzy}^{Can}$ represents the set of all candidate submodels,
all selected integrated submodels being represented as $S_{Fuzzy}^{Sel}$, the relationship between the integrated submodels and the candidate submodels may be expressed as:
$$S_{Fuzzy}^{Sel} = \left\{f_{Fuzzy}^{sel}(\cdot)_{j_{sel}}\right\}_{j_{sel}=1}^{J_{sel}} \in S_{Fuzzy}^{Can}, \quad J_{sel} \le J \qquad (6)$$
wherein $S_{Fuzzy}^{Sel}$ represents the set of integrated submodels; $j_{sel} = 1, 2, \ldots, J_{sel}$, and $J_{sel}$ is the ensemble size, i.e., the number of selected integrated submodels,
calculating the weighting coefficient of the integrated sub-model by using an AWF algorithm according to the following formula:
$$w_{j_{sel}} = 1 \Big/ \left( (\sigma_{j_{sel}})^2 \sum_{j_{sel}=1}^{J_{sel}} \frac{1}{(\sigma_{j_{sel}})^2} \right) \qquad (7)$$
wherein $w_{j_{sel}}$ is the weighting coefficient corresponding to the $j_{sel}$th integrated fuzzy inference submodel, and $\sigma_{j_{sel}}$ is the standard deviation of the output values of the $j_{sel}$th integrated fuzzy inference submodel,
the Root Mean Square Relative Error (RMSRE) of the selective integration model may be expressed as:
$$E_{rmsre} = \sqrt{\frac{1}{k}\sum_{l=1}^{k}\left(\frac{y^l - \hat{y}^l}{y^l}\right)^2} = \sqrt{\frac{1}{k}\sum_{l=1}^{k}\left(\frac{y^l - \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}}^{l}}{y^l}\right)^2} \qquad (8)$$
wherein $k$ is the number of samples; $y^l$ is the true value of the $l$th sample; $\hat{y}^l$ is the prediction of the selective integration model for the $l$th sample; and $\hat{y}_{j_{sel}}^{l}$ is the prediction of the $j_{sel}$th integrated fuzzy inference submodel for the $l$th sample,
the process of establishing the selective integrated fuzzy inference model requires determining the number of integrated fuzzy inference submodels, selecting the integrated fuzzy inference submodels, and determining their weighting coefficients $w_{j_{sel}}$, which can be expressed as the following optimization problem:
$$\begin{aligned} \min \quad & E_{rmsre} = \sqrt{\frac{1}{k}\sum_{l=1}^{k}\left(\frac{y^{l}-\sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}}\hat{y}_{j_{sel}}^{l}}{y^{l}}\right)^{2}} \\ \mathrm{s.t.} \quad & \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} = 1,\; 0 \le w_{j_{sel}} \le 1,\; 1 < j_{sel} < J_{sel},\; 1 < J_{sel} \le J \end{aligned} \quad (9)$$
Converting the objective to a maximization, the above optimization problem becomes:
$$\begin{aligned} \max \quad & E_{rmsre} = \theta_{th} - \sqrt{\frac{1}{k}\sum_{l=1}^{k}\left(\frac{y^{l}-\sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}}\hat{y}_{j_{sel}}^{l}}{y^{l}}\right)^{2}} \\ \mathrm{s.t.} \quad & \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} = 1,\; 0 \le w_{j_{sel}} \le 1,\; 1 < j_{sel} < J_{sel},\; 1 < J_{sel} \le J \end{aligned} \quad (10)$$
wherein $\theta_{th}$ is a preset threshold.
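How formulas (9)-(10) might be posed as a GA fitness function is sketched below; the soft penalty on the sum-to-one constraint is our own assumption, since the patent does not specify the constraint handling:

```python
import numpy as np

def fitness(weights, y_true, preds, theta_th=1.0, penalty=10.0):
    """Maximization objective of (10): theta_th minus the ensemble RMSRE,
    with a soft penalty when the weights drift from summing to one."""
    w = np.clip(weights, 0.0, 1.0)                # enforce 0 <= w <= 1
    y_hat = w @ preds
    e = np.sqrt(np.mean(((y_true - y_hat) / y_true) ** 2))
    return theta_th - e - penalty * abs(w.sum() - 1.0)
```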
The final output value of the fuzzy inference main model, $\hat{y}_{Fuzzy}$, is calculated from the following formula:
$$\hat{y}_{Fuzzy} = \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} f_{Fuzzy}^{sel}(\cdot)_{j_{sel}} = \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}} \quad (11)$$
wherein $\hat{y}_{j_{sel}}$ denotes the output of the $j_{sel}$-th integrated fuzzy inference submodel.
4. The intelligent integrated soft measurement method based on multi-core potential feature extraction as claimed in claim 3, wherein step 3 specifically comprises:
The prediction error of the main model is first calculated as follows:
$$y' = y - \hat{y}_{Fuzzy} = y - \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}} \quad (12)$$
Next, with the kernel parameter $p_{ker}$, latent features are extracted from the input-output data $\{x, y'\}$ by the KPLS algorithm; that is, $x$ and $y'$ are substituted for the $x$ and $y$ used in claim 1, and the latent features extracted therefrom are:
$$Z = \tilde{K} U \left( T^{T} \tilde{K} U \right)^{-1} = \left\{ z_{l} \right\}_{l=1}^{k} \quad (13)$$
wherein $\tilde{K}$ denotes the kernel matrix mapped and centered with the kernel parameter $p_{ker}$.
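For concreteness, a hedged sketch of this KPLS latent feature extraction; the RBF kernel standing in for the kernel parameterized by $p_{ker}$, the iteration cap, and the tolerance are all our assumptions:

```python
import numpy as np

def center_kernel(K):
    """Center a kernel matrix in feature space: K~ = (I - 1/k) K (I - 1/k)."""
    k = K.shape[0]
    J = np.eye(k) - np.ones((k, k)) / k
    return J @ K @ J

def kpls_latent(X, y, n_comp, p_ker=1.0):
    """Sketch of KPLS latent feature extraction as in formula (13)."""
    k = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * p_ker ** 2))          # RBF kernel, width p_ker (assumed)
    Kt = center_kernel(K)
    Yd = y.reshape(-1, 1).astype(float)
    Kd = Kt.copy()
    T, U = [], []
    for _ in range(n_comp):
        u = Yd[:, [0]].copy()
        for _ in range(100):                    # power iteration until convergence
            t = Kd @ u
            t /= np.linalg.norm(t)
            c = Yd.T @ t
            u_new = Yd @ c
            u_new /= np.linalg.norm(u_new)
            if np.linalg.norm(u_new - u) < 1e-10:
                u = u_new
                break
            u = u_new
        T.append(t.ravel()); U.append(u.ravel())
        D = np.eye(k) - t @ t.T                 # deflate kernel and targets
        Kd = D @ Kd @ D
        Yd = Yd - t @ (t.T @ Yd)
    T, U = np.array(T).T, np.array(U).T
    Z = Kt @ U @ np.linalg.inv(T.T @ Kt @ U)    # latent features of formula (13)
    return Z, T, U
```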
The above latent feature extraction process can be expressed as:
5. The intelligent integrated soft measurement method based on multi-core potential feature extraction as claimed in claim 4, wherein the step 4 specifically comprises:
Aiming at the latent features, the ensemble is constructed by manipulating the training samples, with the aim of selecting training samples representative enough to build the final compensation model.
The Bootstrap algorithm is adopted for the ensemble construction, as follows:
where $J'$ represents the number of training subsets generated using Bootstrap, and is also the number of candidate KRWNN submodels and the GA population size.
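A minimal sketch of this Bootstrap step, assuming the latent features $Z$ and the main-model errors $y'$ from formulas (12)-(13) (function and variable names are ours):

```python
import numpy as np

def bootstrap_subsets(Z, y_err, J_prime, rng=None):
    """Generate J' bootstrap training subsets of (latent features, error targets)."""
    rng = np.random.default_rng(rng)
    k = Z.shape[0]
    subsets = []
    for _ in range(J_prime):
        idx = rng.integers(0, k, size=k)   # sample k indices with replacement
        subsets.append((Z[idx], y_err[idx]))
    return subsets
```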
6. The intelligent integrated soft measurement method based on multi-core potential feature extraction as claimed in claim 1, wherein the step 5 specifically comprises:
A KRWNN-based candidate sub-model is constructed for each generated training subset, wherein the construction process of the $j'$-th candidate KRWNN sub-model is as follows:
wherein $K_{KRWNN}$ and $C_{KRWNN}$ represent the kernel parameter and penalty parameter of the KRWNN model.
Thus, the set of all $J'$ candidate KRWNN submodels can be represented as:
$$S_{KRWNN}^{Can} = \left\{ f_{KRWNN}^{can}(\cdot)_{j'} \right\}_{j'=1}^{J'} \quad (17)$$
wherein $S_{KRWNN}^{Can}$ represents the set of all candidate submodels.
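To show the shape of this step, a sketch that fits one candidate model per bootstrap subset; scikit-learn's KernelRidge is used here only as a stand-in for the KRWNN, whose formulation the patent does not restate, so the mapping of $K_{KRWNN}$ and $C_{KRWNN}$ onto `gamma` and `alpha` is our assumption:

```python
from sklearn.kernel_ridge import KernelRidge

def build_candidates(subsets, K_krwnn=1.0, C_krwnn=1.0):
    """One candidate per subset; gamma ~ kernel parameter, alpha ~ 1/penalty."""
    models = []
    for Z_j, y_j in subsets:
        m = KernelRidge(kernel="rbf", gamma=K_krwnn, alpha=1.0 / C_krwnn)
        models.append(m.fit(Z_j, y_j))
    return models
```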
Constructing an effective SEN model requires selecting, from the candidate KRWNN submodels, integrated KRWNN submodels with both diversity and prediction accuracy, and combining them; all selected integrated KRWNN submodels are denoted $S_{KRWNN}^{Sel}$. Thus the relationship between the integrated KRWNN submodels and the candidate KRWNN submodels may be expressed as:
$$S_{KRWNN}^{Sel} = \left\{ f_{KRWNN}^{sel}(\cdot)_{j'_{sel}} \right\}_{j'_{sel}=1}^{J'_{sel}} \in S_{KRWNN}^{Can}, \quad J'_{sel} \le J' \quad (18)$$
wherein $S_{KRWNN}^{Sel}$ represents the set of integrated sub-models and $J'_{sel}$ represents the ensemble size of the SEN model.
The effective SEN model is constructed using a validation dataset. The validation dataset of latent features paired with the main-model prediction errors is denoted $\{z^{valid}, y'^{\,valid}\}$. The predicted outputs of the candidate KRWNN submodels on the validation dataset are represented as:
$$\left\{ \left( \hat{y}_{KRWNN}^{valid} \right)_{j'} \right\}_{j'=1}^{J'} = \left\{ f_{KRWNN}^{can}\left( z^{valid} \right)_{j'} \right\}_{j'=1}^{J'} \quad (19)$$
The prediction errors are calculated as follows:
$$\left( e_{KRWNN}^{valid} \right)_{j'} = \left( \hat{y}_{KRWNN}^{\prime\, valid} \right)_{j'} - y'^{\,valid} \quad (20)$$
wherein $(\hat{y}_{KRWNN}^{\prime\, valid})_{j'}$ is the prediction of the $j'$-th candidate KRWNN submodel on the validation dataset and $y'^{\,valid}$ is the corresponding main-model prediction error.
The correlation coefficient between the $j'$-th and $s'$-th candidate KRWNN submodels is obtained using the following formula:
$$c_{j's'}^{valid} = \frac{1}{k^{valid}} \sum_{l=1}^{k^{valid}} e_{KRWNN}^{valid}(j', l) \cdot e_{KRWNN}^{valid}(s', l) \quad (21)$$
The correlation matrix thus obtained is represented by the following formula:
$$C_{J'}^{valid} = \begin{bmatrix} c_{11}^{valid} & c_{12}^{valid} & \cdots & c_{1J'}^{valid} \\ c_{21}^{valid} & c_{22}^{valid} & \cdots & c_{2J'}^{valid} \\ \cdots & \cdots & c_{j's'}^{valid} & \cdots \\ c_{J'1}^{valid} & c_{J'2}^{valid} & \cdots & c_{J'J'}^{valid} \end{bmatrix}_{J' \times J'} \quad (22)$$
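In code, formulas (21)-(22) collapse to a single matrix product over the per-submodel validation errors; a sketch with our own naming:

```python
import numpy as np

def error_correlation(errors: np.ndarray) -> np.ndarray:
    """Formulas (21)-(22): errors has shape (J', k_valid); returns (J', J')."""
    k_valid = errors.shape[1]
    return errors @ errors.T / k_valid
```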
Next, a random weight vector is generated for each candidate sub-model. These weight vectors are then evolved with the GAOT toolbox, based on the correlation matrix $C_{J'}^{valid}$, to obtain optimized weight vectors. The candidate submodels whose optimized weights are greater than the threshold $1/J'$ are selected as integrated KRWNN submodels, whose outputs can be represented as:
$$\left\{ \hat{y}'_{j'_{sel}} \right\}_{j'_{sel}=1}^{J'_{sel}} = \left\{ f_{KRWNN}^{sel}\left( x^{valid} \right)_{j'_{sel}} \right\}_{j'_{sel}=1}^{J'_{sel}} \quad (23)$$
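The patent performs this evolution with the MATLAB GAOT toolbox; as a hedged Python stand-in, a toy GA that evolves weight vectors against the correlation matrix and applies the $1/J'$ selection threshold (the fitness $-w^{T}Cw$ is our simplification of the diversity criterion):

```python
import numpy as np

def ga_select(C, J_prime, pop=40, gens=200, rng=None):
    """Toy GA over weight vectors: minimize w^T C w (favoring decorrelated,
    low-error submodels), then keep submodels with weight > 1/J'."""
    rng = np.random.default_rng(rng)
    P = rng.random((pop, J_prime))
    P /= P.sum(axis=1, keepdims=True)                  # weights sum to one
    for _ in range(gens):
        fit = -np.einsum("ij,jk,ik->i", P, C, P)       # fitness = -w^T C w
        parents = P[np.argsort(fit)[::-1][: pop // 2]] # keep the fitter half
        kids = (parents + parents[rng.permutation(len(parents))]) / 2
        kids += rng.normal(0, 0.01, kids.shape)        # mutation
        kids = np.clip(kids, 0, None)
        kids /= kids.sum(axis=1, keepdims=True)
        P = np.vstack([parents, kids])
    best = P[np.argmax(-np.einsum("ij,jk,ik->i", P, C, P))]
    return np.flatnonzero(best > 1.0 / J_prime)        # indices of selected submodels
```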
The weights of these integrated KRWNN submodels are calculated using the adaptive weighted fusion (AWF) algorithm:
$$w_{j'_{sel}} = 1 \Big/ \left( (\sigma_{j'_{sel}})^{2} \sum_{j'_{sel}=1}^{J'_{sel}} \frac{1}{(\sigma_{j'_{sel}})^{2}} \right) \quad (24)$$
wherein $\sigma_{j'_{sel}}$ is the standard deviation of the predicted output of the $j'_{sel}$-th integrated KRWNN submodel.
The KPLS- and GA-based selective ensemble KRWNN model is used as the compensation model, and its output is expressed as:
$$\hat{y}_{KRWNN} = \sum_{j'_{sel}=1}^{J'_{sel}} w_{j'_{sel}} \hat{y}'_{j'_{sel}} \quad (25)$$
7. The intelligent integrated soft measurement method based on multi-core potential feature extraction as claimed in claim 1, wherein the step 6 specifically comprises:
The outputs of the main model and the compensation model are added to obtain the output of the intelligent integrated soft measurement model:
$$\hat{y} = \hat{y}_{Fuzzy} + \hat{y}_{KRWNN} = \sum_{j_{sel}=1}^{J_{sel}} w_{j_{sel}} \hat{y}_{j_{sel}} + \sum_{j'_{sel}=1}^{J'_{sel}} w_{j'_{sel}} \hat{y}'_{j'_{sel}} \quad (26)$$
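A last sketch of formula (26), combining the main-model and compensation-model outputs (names ours):

```python
def soft_sensor_output(w_fuzzy, y_fuzzy, w_krwnn, y_krwnn):
    """Formula (26): main-model estimate plus error-compensation estimate."""
    y_main = sum(w * y for w, y in zip(w_fuzzy, y_fuzzy))    # fuzzy main model
    y_comp = sum(w * y for w, y in zip(w_krwnn, y_krwnn))    # KRWNN compensation
    return y_main + y_comp
```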
CN201711327861.0A 2017-12-13 2017-12-13 A kind of intelligent integrated flexible measurement method based on the potential feature extraction of multinuclear Pending CN108062566A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711327861.0A CN108062566A (en) 2017-12-13 2017-12-13 A kind of intelligent integrated flexible measurement method based on the potential feature extraction of multinuclear

Publications (1)

Publication Number Publication Date
CN108062566A true CN108062566A (en) 2018-05-22

Family

ID=62138478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711327861.0A Pending CN108062566A (en) 2017-12-13 2017-12-13 A kind of intelligent integrated flexible measurement method based on the potential feature extraction of multinuclear

Country Status (1)

Country Link
CN (1) CN108062566A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144035A (en) * 2018-09-27 2019-01-04 杭州电子科技大学 A kind of Monitoring of Chemical method based on supporting vector
CN109960873A (en) * 2019-03-24 2019-07-02 北京工业大学 A kind of city solid waste burning process dioxin concentration flexible measurement method
CN109960873B (en) * 2019-03-24 2021-09-10 北京工业大学 Soft measurement method for dioxin emission concentration in urban solid waste incineration process
CN111860934A (en) * 2019-04-26 2020-10-30 开利公司 Method for predicting power consumption
CN110135057A (en) * 2019-05-14 2019-08-16 北京工业大学 Solid waste burning process dioxin concentration flexible measurement method based on multilayer feature selection
CN110135057B (en) * 2019-05-14 2021-03-02 北京工业大学 Soft measurement method for dioxin emission concentration in solid waste incineration process based on multilayer characteristic selection
US11976817B2 (en) 2019-05-14 2024-05-07 Beijing University Of Technology Method for detecting a dioxin emission concentration of a municipal solid waste incineration process based on multi-level feature selection
CN112365048A (en) * 2020-11-09 2021-02-12 大连理工大学 Unmanned vehicle reconnaissance method based on opponent behavior prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180522
