CN108732931A - Multi-modal batch process modeling method based on JIT-RVM - Google Patents

Multi-modal batch process modeling method based on JIT-RVM

Info

Publication number
CN108732931A
CN108732931A (application CN201810471890.2A)
Authority
CN
China
Prior art keywords
data
formula
batch process
mode
similarity
Prior art date
Legal status
Granted
Application number
CN201810471890.2A
Other languages
Chinese (zh)
Other versions
CN108732931B (en)
Inventor
王建林
张维佳
韩锐
邱科鹏
赵利强
Current Assignee
Beijing University of Chemical Technology
Original Assignee
Beijing University of Chemical Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Chemical Technology
Priority to CN201810471890.2A
Publication of CN108732931A
Application granted
Publication of CN108732931B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion: electric
    • G05B13/04 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion: electric, involving the use of models or simulators
    • G05B13/042 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion: electric, involving the use of models or simulators, in which a parameter or coefficient is automatically adjusted to optimise the performance

Abstract

The invention discloses a multi-modal batch process modeling method based on JIT-RVM, belonging to the technical field of batch process monitoring. First, the three-dimensional batch process historical data are unfolded along the time direction and standardized. Then the standardized data are divided into modes with the SCFCM algorithm to obtain multiple modal data subsets; just-in-time learning is introduced into the RVM, together with a data-fusion similarity calculation factor that evaluates data similarity from both mode membership and structural distance and can therefore compute the data-fusion similarity between real-time data and historical data. Finally, with maximization of the data-fusion similarity as the optimization objective, the optimal just-in-time-learning training dataset is established, and the multi-modal batch process model is then built with the RVM algorithm. The method fully considers the dynamic characteristics of batch process data, uses just-in-time learning and the data-fusion similarity to build the optimal training dataset, and improves the modeling accuracy.

Description

Multi-modal batch process modeling method based on JIT-RVM
Technical field
The present invention relates to a multi-modal batch process modeling method, belongs to the technical field of batch process monitoring, and in particular relates to a multi-modal batch process modeling method based on a just-in-time learning relevance vector machine (Just-in-time Learning Relevance Vector Machine, JIT-RVM).
Background technology
Batch production is an important mode of production in industrial fields such as fine chemicals, biochemistry, pharmaceuticals and food. Establishing a high-quality batch process model that accurately describes the dynamic behaviour of the batch process provides the basis for batch process monitoring and optimal control.
Traditional data-driven modeling methods such as principal component regression, partial least squares regression, artificial neural networks and support vector regression lack an expression of model uncertainty; the resulting models have no statistical interpretation and their modeling accuracy is relatively low. Modeling methods based on the relevance vector machine (Relevance Vector Machine, RVM) treat the model parameters as random variables with prior distributions, revise the distribution parameters from the observed data, and provide the model uncertainty through Bayesian inference, so the model has a statistical interpretation; such methods have been applied to batch process modeling with higher model accuracy. However, a real batch process contains multiple production modes, each with different process characteristics and complex dynamics, and existing RVM-based multi-modal batch process modeling methods train the model only with process data of the same mode, so the training dataset contains invalid training data whose characteristics are dissimilar to those of the current process, which constrains the improvement of model accuracy.
Therefore, the historical data must be screened before modeling, and the historical data most similar to the current process data must be extracted to build an effective training dataset. The present invention fully considers the dynamic characteristics of batch process data, introduces just-in-time learning and the data-fusion similarity calculation factor into the RVM, establishes the optimal just-in-time-learning training dataset, and improves the accuracy of the model.
Summary of the invention
To improve the accuracy of multi-modal batch process modeling, the present invention first unfolds the three-dimensional batch process historical data along the time direction and standardizes them; then the sequence-constrained fuzzy C-means clustering algorithm (Sequence-Constrained Fuzzy C-Means, SCFCM) divides the standardized data into modes, yielding multiple modal data subsets; just-in-time learning is introduced into the RVM, together with a data-fusion similarity calculation factor that evaluates data similarity from both mode membership and structural distance, and this factor is used to compute the data-fusion similarity between real-time data and historical data; finally, with maximization of the data-fusion similarity as the optimization objective, the optimal just-in-time-learning training dataset is obtained, and the multi-modal batch process model is built with the RVM algorithm.
The technical solution adopted by the present invention is a multi-modal batch process modeling method based on JIT-RVM, which specifically comprises the following steps:
Step 1: Unfold the three-dimensional historical data of the batch process state variables and measured variables into two-dimensional data along the variable direction; then unfold the state variable data of the three-dimensional historical data into two-dimensional data along the time direction and standardize them, thereby obtaining the pre-processed batch process data;
Step 2: Divide the pre-processed batch process data obtained in step 1 into modes using the SCFCM algorithm, obtaining the multi-modal division result;
Step 3: Perform state estimation with the mechanism model of the batch process state variables to obtain the real-time state variable estimates and, according to the multi-modal division result of step 2, determine the mode to which the batch process data belong and its number;
Step 4: Introduce the data-fusion similarity calculation factor, compute the similarity between the real-time state variable estimates and the historical data, extract the input part of all similar historical data and the corresponding output part, construct the optimal just-in-time-learning similar training set, establish the JIT-RVM model, and finally obtain the multi-modal batch process model.
Specifically, step 1 includes the following steps:
Unfold the three-dimensional historical state variable data of the batch process into a two-dimensional matrix Xt(IJ1×K) along the time direction, where i is the batch number, Xi is the state data matrix of the i-th batch, I is the total number of batches, J1 is the number of state variables and K is the total number of sampling instants, and standardize the unfolded data Xt along the time direction according to formula (1), i.e. the time-direction z-score (Xt − mean(Xt)) / std(Xt).
In formula (1), mean(·) is the mean of the data set along the time direction, std(·) is its standard deviation along the time direction, and the result is the standardized data set.
Unfold the batch process state variable and measured variable historical data into two-dimensional matrices Xv(IK×J1) and Yv(IK×J2) along the variable direction, where Yi is the measured variable data matrix of the i-th batch and J2 is the number of measured variables; this provides the data basis for the subsequent steps.
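As an illustration of step 1, the sketch below (Python, not part of the patent) unfolds a three-dimensional batch data array along the time and variable directions and applies a formula (1)-style z-score standardization along the time direction; the array name X3d and its (I, J1, K) layout are assumptions of the sketch.

```python
import numpy as np

def unfold_and_standardize(X3d):
    """X3d: array of shape (I, J1, K), i.e. batches x state variables x time samples."""
    I, J1, K = X3d.shape
    # Time-direction unfolding: Xt has I*J1 rows, one time profile per batch/variable pair.
    Xt = X3d.reshape(I * J1, K)
    # Standardize each row along the time direction (formula (1)-style z-score).
    Xt_std = (Xt - Xt.mean(axis=1, keepdims=True)) / (Xt.std(axis=1, keepdims=True) + 1e-12)
    # Variable-direction unfolding: Xv has I*K rows and J1 columns, used for the mode division.
    Xv = X3d.transpose(0, 2, 1).reshape(I * K, J1)
    return Xt_std, Xv

# Example with the dimensions of the embodiment (I = 20 batches, J1 = 5 variables, K = 800 samples).
Xt_std, Xv = unfold_and_standardize(np.random.rand(20, 5, 800))
```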
Step 2 includes the following steps:
Use the trajectory partitioning method to coarsely divide the time-unfolded data set into modes, obtaining C coarse cluster centres (1 < C ≪ N), their left boundaries, and the membership matrix U = [uc,n](C×N), where N is the number of training data, n is the data index, c is the cluster-centre index, vc and bc are the c-th cluster centre and its left boundary, and uc,n is the membership of the n-th training datum to the c-th cluster centre.
Carry out the SCFCM fine mode division according to the minimum-sum-of-squared-errors principle with the objective function of formula (2).
In the formula, m is the fuzzy clustering index, 1 ≤ m < ∞, and ||·|| is the L2 norm.
To solve the optimization problem of formula (2), a Lagrange multiplier is introduced, and formulas (3) and (4) are used to compute the k-th cluster centre and the membership of each datum to the k-th cluster centre, respectively, where k is the index of the current cluster centre.
Considering the temporal ordering of the training data set, an iterative optimization strategy is introduced: the training data and the coarse division result are read in one by one in temporal order. If a training datum lies in the front half-interval [bk, bk+zk/2] of the k-th class, with zk = bk+1 − bk, zC = N − bC and k = 2, 3, ..., C, its membership is updated according to formula (5); if it lies in the rear half-interval [bk+zk/2, bk+1], with zk = bk+1 − bk and k = 1, 2, ..., C−1, its membership is updated according to formula (6).
In the formulas, h is the cluster-centre index and zk is the interval length of the k-th class.
The k-th cluster centre vk is updated with formula (7), and the updated value is recorded.
The sum of squared errors Lk of the k-th cluster centre is updated with formula (8), and the updated value is recorded.
It is then checked whether the target sum of squared errors is below the set threshold, or whether the difference between the updated membership and the previous membership uc,n is within a given range; if neither condition is satisfied, the calculations of formulas (5) to (8) are re-executed and the modes are updated; if either condition is satisfied, the update terminates.
The maximum value of the membership vector of each datum and its index are extracted, and the mode is identified with formula (9): if the maximum membership satisfies the master-mode condition, the datum is assigned to the corresponding master mode; otherwise, if the maximum and second-largest memberships satisfy the transition condition, the current sample is assigned to the transition mode between the two corresponding adjacent master modes.
In the formula, κn is the index of the mode to which the current datum belongs.
It is then checked whether all training data have been traversed; if so, the mode division ends; if not, the next training sample is read and divided with formulas (6) to (9). The final multi-modal division result splits Xv(IK×J1) and Yv(IK×J2) into C master-mode data subsets and D transition-mode data subsets, with D = C − 1.
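For orientation, the following sketch shows a plain fuzzy C-means iteration of the kind that step 2 refines; the coarse trajectory partitioning, the left boundaries bk, and the sequence-constrained half-interval updates of formulas (5) and (6) are deliberately omitted, so it is only an assumed simplification of the fine-division core, not the SCFCM algorithm itself.

```python
import numpy as np

def fuzzy_cmeans(X, C, m=2.0, tol=1e-5, max_iter=200, seed=0):
    """X: (N, d) time-unfolded data; C: number of modes; m: fuzzy clustering index."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((C, N))
    U /= U.sum(axis=0, keepdims=True)                 # memberships of each sample sum to 1
    for _ in range(max_iter):
        Um = U ** m
        V = Um @ X / Um.sum(axis=1, keepdims=True)    # cluster-centre update (formula (7)-style)
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
        if np.abs(U_new - U).max() < tol:             # membership-change stopping rule
            U = U_new
            break
        U = U_new
    labels = U.argmax(axis=0)                         # mode identification by maximum membership
    return V, U, labels
```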
Step 3 includes the following steps:
Real-time state estimation is performed with the batch process mechanism model shown in formula (10) to obtain the new real-time state variable estimate xtest, and the mode division result of step 2 is used to determine the mode to which xtest belongs and its number κtest. If the current datum belongs to a master mode, the κtest-th master-mode historical data subset is extracted; if it lies in a transition mode, the κtest-th transition mode and its two adjacent master-mode historical data subsets are extracted, forming the mode historical data subset {Xtest, Ytest} associated with the real-time state estimate xtest.
In the formula, υ is the control variable and η is the system noise.
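A minimal sketch of the subset selection in step 3, under the assumption that the master-mode and transition-mode subsets from step 2 are stored as lists of (X, Y) array pairs and that transition mode k lies between master modes k and k+1; the state estimation of formula (10) is process specific and is not reproduced here.

```python
import numpy as np

def mode_history_subset(kappa_test, in_transition, master_subsets, transition_subsets):
    """Return the historical subset {X_test, Y_test} for the estimated mode kappa_test."""
    if not in_transition:
        return master_subsets[kappa_test]
    # Transition mode: merge the transition subset with its two neighbouring master modes
    # (assumed here to be master modes kappa_test and kappa_test + 1).
    parts = [transition_subsets[kappa_test],
             master_subsets[kappa_test], master_subsets[kappa_test + 1]]
    X = np.vstack([p[0] for p in parts])
    Y = np.vstack([p[1] for p in parts])
    return X, Y
```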
Step 4 includes the following steps:
To find historical data whose characteristics are close to the real-time state variable estimate xtest in terms of both the process characteristics and the spatial characteristics, the data mode similarity Sm,n and the data algebraic-space similarity So,n are computed according to formulas (11) and (12), respectively.
Sm,n = exp(−||utest − un||²)   (11)
So,n = exp(−||xtest − xn||²)   (12)
In the formulas, n is the data index; un and utest are the membership vectors of the n-th historical datum and the new state variable datum, respectively; xn is the n-th historical state datum.
To fuse the two similarities, the computed data mode similarity Sm,n and data algebraic-space similarity So,n are standardized with formulas (13) and (14), respectively.
In the formulas, the two sets collect the data mode similarities and the data algebraic-space similarities, the standardized values are the standardized data mode similarity and the standardized data algebraic-space similarity, and the remaining quantity is the number of historical data.
The average of the standardized data algebraic-space similarity and the standardized data mode similarity is taken as the data-fusion similarity calculation factor.
Formula (15) is used to compute the data-fusion similarity Sn between the real-time state variable estimate xtest and each historical state datum xn, giving the data-fusion similarity set. To select the historical data with higher data-fusion similarity as training data, a similarity discrimination threshold ζ is set and the data-fusion similarity Sn of each historical datum is compared with ζ; if Sn satisfies formula (16), the datum xn is considered similar to the real-time state variable estimate xtest.
Sn ≥ ζ   (16)
All historical state data xn satisfying formula (16), together with their corresponding measured variable values yn, are extracted to construct the optimal just-in-time-learning training data set.
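The data-fusion similarity of formulas (11) to (16) can be sketched as follows; since the exact standardization of formulas (13) and (14) is not reproduced above, min-max scaling is assumed here purely for illustration.

```python
import numpy as np

def fused_similarity(x_test, u_test, X_hist, U_hist, zeta=0.9):
    """x_test: (d,) state estimate; u_test: (C,) membership vector;
    X_hist: (N, d) historical states; U_hist: (N, C) historical memberships."""
    S_m = np.exp(-np.sum((U_hist - u_test) ** 2, axis=1))   # mode similarity, formula (11)
    S_o = np.exp(-np.sum((X_hist - x_test) ** 2, axis=1))   # algebraic-space similarity, formula (12)

    def scale(s):                                           # assumed min-max standardization
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

    S = 0.5 * (scale(S_m) + scale(S_o))                     # fused similarity, formula (15)-style
    mask = S >= zeta                                        # discrimination threshold, formula (16)
    return S, mask

# X_hist[mask] and the matching outputs Y_hist[mask] would form the just-in-time training set.
```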
The JIT-RVM model of formula (17) is built from the optimal just-in-time-learning training data set combined with the RVM algorithm.
In the formula, ε is mutually independent Gaussian white noise with zero mean and variance β⁻¹; w is the weight vector; the remaining quantities are the number of similar data and the kernel function.
The kernel function is computed with formula (18), where xn is the n-th historical datum similar to the current state.
In the formula, σ is the Gaussian kernel function width parameter.
Each model weight wn is given an independent zero-mean Gaussian prior whose variance is controlled by a hyperparameter, and, as shown in formula (19), its posterior distribution is updated with Bayes' theorem.
Σ = (βΦᵀΦ + A)⁻¹   (21)
In the formula, μ is the posterior mean of the weights, Σ is the posterior covariance of the weights, and A is the diagonal matrix formed from the model hyperparameters.
Formula (19) is then converted into formula (22).
Formula (22) is solved with the type-II maximum-likelihood method to compute the optimal estimates of the model hyperparameters, noise variance, and weight parameters, together with the relevance vector set, and formula (23) then gives the JIT-RVM model between the measured variables and the state variables.
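To make the RVM training step concrete, here is a minimal relevance vector machine regression sketch that follows the standard type-II maximum-likelihood updates referred to in formulas (19) to (23), with a Gaussian kernel in the spirit of formula (18) and noise variance β⁻¹; it is an illustrative re-implementation, not the patent's code, and the bias column and the pruning threshold alpha_prune are assumed details.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))          # formula (18)-style Gaussian kernel

def rvm_fit(X, y, sigma, n_iter=100, alpha_prune=1e9):
    """X: (N, d) similar training inputs; y: (N,) measured-variable targets."""
    N = X.shape[0]
    Phi_full = np.hstack([np.ones((N, 1)), gaussian_kernel(X, X, sigma)])  # design matrix with bias
    keep = np.arange(Phi_full.shape[1])              # indices of retained basis functions
    alpha = np.ones(keep.size)                       # hyperparameters, A = diag(alpha)
    beta = 1.0 / (np.var(y) + 1e-12)                 # noise precision (noise variance = beta^-1)
    mu = np.zeros(keep.size)
    for _ in range(n_iter):
        Phi = Phi_full[:, keep]
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))  # weight posterior covariance
        mu = beta * Sigma @ Phi.T @ y                               # weight posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)                        # how well each weight is determined
        alpha = gamma / (mu ** 2 + 1e-12)                           # type-II ML hyperparameter update
        beta = max(N - gamma.sum(), 1e-6) / (np.sum((y - Phi @ mu) ** 2) + 1e-12)
        retain = alpha < alpha_prune                                # prune weights pinned to zero
        keep, alpha, mu = keep[retain], alpha[retain], mu[retain]
    return keep, mu, beta

def rvm_predict(X_new, X_train, keep, mu, sigma):
    Phi = np.hstack([np.ones((X_new.shape[0], 1)), gaussian_kernel(X_new, X_train, sigma)])
    return Phi[:, keep] @ mu
```

The indices left in keep play the role of the relevance vector set mentioned above.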
Finally, the mechanism model g of the state variables is combined to obtain the multi-modal batch process model shown in formula (24).
Advantages of the present invention: the dynamic characteristics of batch process data are fully considered; just-in-time learning and the data-fusion similarity calculation factor are introduced; the factor is used to compute the data-fusion similarity between the batch process historical data and the real-time data; the optimal just-in-time-learning training dataset is established by maximizing the data-fusion similarity; and the multi-modal batch process model is then built with the RVM, improving the modeling accuracy.
Description of the drawings
Fig. 1 is a flow chart of the JIT-RVM-based multi-modal batch process modeling method of the present invention.
Fig. 2 to Fig. 4 show the prediction root-mean-square errors of the JIT-RVM models for heat generation, carbon dioxide concentration and pH in the specific embodiment under different Gaussian kernel width parameters σ.
Fig. 5 to Fig. 7 compare, for test batch 5 in the specific embodiment, the predictions of the JIT-RVM model under its optimal parameters with those of the RVM model built by the traditional RVM modeling method.
Specific implementation mode
The invention is further described below with reference to the example and the accompanying drawings; it should be noted that the embodiment does not limit the protection scope of the present invention.
Embodiment
The penicillin fermentation process is a typical batch process, and the Pensim simulation platform provides standard process simulation data for the fed-batch penicillin fermentation process. Different initial conditions are set and the experimental data are generated with the Pensim simulation platform. The number of batches is I = 20, random perturbations are added to every batch, each batch lasts 400 hours with a sampling interval of 0.5 hours, giving K = 800 samples. J1 = 5 state variables and J2 = 3 measured variables are selected from the available variables, as shown in Table 1. Fifteen of the 20 batches are randomly selected as historical training samples, and the remaining 5 batches are used as test data.
Table 1: Penicillin fermentation process variable description
The invention is applied to the above penicillin fermentation process modeling as follows:
Step 1: Unfold the three-dimensional state variable data of the penicillin fermentation process into a two-dimensional matrix Xt(75×800) along the time direction, where i is the batch number and Xi is the state data matrix of the i-th batch, and standardize the unfolded data Xt along the time direction according to formula (1) to obtain the standardized data set;
Unfold the batch process state variable and measured variable historical data into two-dimensional matrices Xv(12000×5) and Yv(12000×3) along the variable direction, where Yi is the measured variable data matrix of the i-th batch, providing the data basis for the subsequent steps;
Step 2: With formula (2) as the objective function, first use the trajectory partitioning method to coarsely divide the time-unfolded data set into modes, obtaining C = 4 coarse cluster centres, their left boundaries, and the membership matrix U = [uc,n](4×12000), where n is the data index, c is the cluster-centre index, vc and bc are the c-th cluster centre and its left boundary, and uc,n is the membership of the n-th training datum to the c-th cluster centre;
Compute each cluster centre and the membership of each datum to each cluster centre with formulas (3) and (4), respectively;
Read in the training data and the coarse division result one by one in temporal order and judge the position of each training datum within its class interval: if it lies in the front half-interval [bk, bk+zk/2] (zk = bk+1 − bk, z4 = 12000 − b4, k = 2, 3, 4), update its membership according to formula (5); if it lies in the rear half-interval [bk+zk/2, bk+1] (zk = bk+1 − bk, k = 1, 2, 3), update its membership according to formula (6), where zk is the interval length of the k-th class;
Update the k-th cluster centre vk with formula (7);
Update the sum of squared errors Lk of the k-th cluster centre with formula (8);
Check whether the target sum of squared errors is below the set threshold, or whether the change in membership relative to the previous membership uc,n is within a given range; if neither condition is satisfied, re-execute the calculations of formulas (5) to (8) and update the modes; if either condition is satisfied, the update terminates;
Extract the maximum value of the membership vector of each datum and its index, and identify the mode with formula (9);
Check whether all training data have been traversed; if so, the mode division ends; if not, read the next training sample and divide it with formulas (6) to (9). The final multi-modal division result splits Xv(12000×5) and Yv(12000×3) into C = 4 master-mode data subsets and D = 3 transition-mode data subsets.
Step 3: Since the Pensim platform is based on the Birol mechanism model and the simulated state variable data are generated by that model, the mode division result of step 2 is used directly to assign the state variable xtest of a test batch to a mode, giving the mode to which the datum belongs and its number κtest. If the current datum belongs to a master mode, the κtest-th master-mode historical data subset is extracted; if it is in a transition mode, the κtest-th transition mode and its two adjacent master-mode historical data subsets are extracted, forming the mode historical data subset {Xtest, Ytest};
Step 4: The experiment uses the fusion similarity calculation factor S to measure the degree of similarity between data. Formulas (11) to (15) are used to compute the data-fusion similarity Sn between the test-batch state datum xtest and each historical state datum xn, giving the data-fusion similarity set over the mode historical data; the similarity discrimination threshold is set to ζ = 0.9, and formula (16) is used to judge whether each historical state datum xn is similar to the current state variable datum.
The similar state data xn and their corresponding measured variable values yn are extracted to build the optimal just-in-time-learning training data set.
The JIT-RVM model is then built from the optimal just-in-time-learning similar training data set combined with the RVM algorithm, using formulas (17) to (23).
To select the optimal kernel width parameter for the JIT-RVM models, one batch of data is randomly chosen from the 15 training batches. The range of the Gaussian kernel width parameter σ is set to {δ/8, δ/4, δ/2, δ, 2δ, 4δ, 8δ}, where δ is the average of the standard deviations of the state variables of the training data. Each kernel parameter is substituted into the model in turn for real-time prediction of the measured variables, and the root-mean-square error of the model predictions is taken as the evaluation index: the smaller the root-mean-square error, the higher the model accuracy, so the kernel parameter with the lowest root-mean-square error is selected as the optimal model parameter. Fig. 2 to Fig. 4 show the prediction root-mean-square error curves of heat generation, carbon dioxide concentration and pH over the kernel parameters, and Table 2 lists the optimal parameter of each output-variable JIT-RVM model and its corresponding root-mean-square error.
Table 2: Parameter optimization results
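The kernel-width sweep described above can be written as a small helper; fit_predict stands in for a JIT-RVM fit-and-predict routine (for example the rvm_fit/rvm_predict sketch given earlier) and, like the use of the randomly chosen batch as a validation split, is an assumption of this sketch rather than part of the patent.

```python
import numpy as np

def select_kernel_width(X_train, y_train, X_val, y_val, fit_predict):
    """fit_predict(X_train, y_train, X_val, sigma) -> predictions on X_val."""
    delta = X_train.std(axis=0).mean()                 # mean standard deviation of the state variables
    candidates = delta * np.array([1/8, 1/4, 1/2, 1, 2, 4, 8])
    rmse = [np.sqrt(np.mean((y_val - fit_predict(X_train, y_train, X_val, s)) ** 2))
            for s in candidates]                       # RMSE is the evaluation index for each sigma
    best = int(np.argmin(rmse))
    return candidates[best], rmse[best]
```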
Finally, the mechanism model g of the state variables is combined to obtain the multi-modal batch process model shown in formula (24).
The above steps are the concrete application of the present invention to building the process model on the penicillin fermentation simulation platform. To verify the effectiveness of the invention, an RVM-based multi-modal batch process modeling method is set up as the comparison method. The RVM model and the JIT-RVM model, each under its optimal kernel parameter, are used to predict the measured-variable process data of the test batches, with the root-mean-square error of the predicted values as the model evaluation index. Table 3 gives the prediction results of the 5 test batches, and Fig. 5 to Fig. 7 show the prediction results of each measured variable for test batch 5.
Table 3: Prediction root-mean-square errors of the measured variables for the 5 test batches
From the prediction results of the 5 batches it can be seen that, compared with the traditional RVM-based multi-modal batch process modeling method, the JIT-RVM-based multi-modal batch process modeling method accurately extracts highly similar historical data to build the optimal just-in-time-learning training dataset and then establishes a multi-modal batch process model that describes the dynamic behaviour of the process. The prediction curves of the resulting model show smaller errors against the Pensim simulation curves, indicating that the proposed method has a lower prediction error for multi-modal batch process data and that the multi-modal batch process model it builds has higher model accuracy.

Claims (5)

1. A multi-modal batch process modeling method based on JIT-RVM, characterized in that the method specifically comprises the following steps:
Step 1: unfold the three-dimensional historical data of the batch process state variables and measured variables into two-dimensional data along the variable direction, then unfold the state variable data of the three-dimensional historical data into two-dimensional data along the time direction and standardize them, thereby obtaining the pre-processed batch process data;
Step 2: divide the pre-processed batch process data obtained in step 1 into modes using the SCFCM algorithm, obtaining the multi-modal division result;
Step 3: perform state estimation with the mechanism model of the batch process state variables to obtain the real-time state variable estimates and, according to the multi-modal division result of step 2, determine the mode to which the batch process data belong and its number;
Step 4: introduce the data-fusion similarity calculation factor, compute the similarity between the real-time state variable estimates and the historical data, extract the input part of all similar historical data and the corresponding output part, construct the optimal just-in-time-learning similar training set, establish the JIT-RVM model, and finally obtain the multi-modal batch process model.
2. The multi-modal batch process modeling method based on JIT-RVM according to claim 1, characterized in that step 1 comprises the following steps:
unfold the three-dimensional historical state variable data of the batch process into a two-dimensional matrix Xt(IJ1×K) along the time direction, where i is the batch number, Xi is the state data matrix of the i-th batch, I is the total number of batches, J1 is the number of state variables and K is the total number of sampling instants, and standardize the unfolded data Xt along the time direction according to formula (1);
in formula (1), mean(·) is the mean of the data set along the time direction, std(·) is its standard deviation along the time direction, and the result is the standardized data set;
unfold the batch process state variable and measured variable historical data into two-dimensional matrices Xv(IK×J1) and Yv(IK×J2) along the variable direction, where Yi is the measured variable data matrix of the i-th batch and J2 is the number of measured variables, providing the data basis for the subsequent steps.
3. The multi-modal batch process modeling method based on JIT-RVM according to claim 1, characterized in that step 2 comprises the following steps:
use the trajectory partitioning method to coarsely divide the time-unfolded data set into modes, obtaining C coarse cluster centres (1 < C ≪ N), their left boundaries, and the membership matrix U = [uc,n](C×N), where N is the number of training data, n is the data index, c is the cluster-centre index, vc and bc are the c-th cluster centre and its left boundary, and uc,n is the membership of the n-th training datum to the c-th cluster centre;
carry out the SCFCM fine mode division according to the minimum-sum-of-squared-errors principle with the objective function of formula (2);
in the formula, m is the fuzzy clustering index, 1 ≤ m < ∞, and ||·|| is the L2 norm;
to solve the optimization problem of formula (2), introduce a Lagrange multiplier and compute the k-th cluster centre and the membership of each datum to the k-th cluster centre with formulas (3) and (4), respectively, where k is the index of the current cluster centre;
considering the temporal ordering of the training data set, introduce an iterative optimization strategy and read in the training data and the coarse division result one by one in temporal order: if a training datum lies in the front half-interval [bk, bk+zk/2] of the k-th class, with zk = bk+1 − bk, zC = N − bC and k = 2, 3, ..., C, update its membership according to formula (5); if it lies in the rear half-interval [bk+zk/2, bk+1], with zk = bk+1 − bk and k = 1, 2, ..., C−1, update its membership according to formula (6);
in the formulas, h is the cluster-centre index and zk is the interval length of the k-th class;
update the k-th cluster centre vk with formula (7) and record the updated value;
update the sum of squared errors Lk of the k-th cluster centre with formula (8) and record the updated value;
check whether the target sum of squared errors is below the set threshold, or whether the difference between the updated membership and the previous membership uc,n is within a given range; if neither condition is satisfied, re-execute the calculations of formulas (5) to (8) and update the modes; if either condition is satisfied, the update terminates;
extract the maximum value of the membership vector of each datum and its index, and identify the mode with formula (9): if the maximum membership satisfies the master-mode condition, the datum is assigned to the corresponding master mode; otherwise, if the maximum and second-largest memberships satisfy the transition condition, the current sample is assigned to the transition mode between the two corresponding adjacent master modes;
in the formula, κn is the index of the mode to which the current datum belongs;
check whether all training data have been traversed; if so, the mode division ends; if not, read the next training sample and divide it with formulas (6) to (9); the final multi-modal division result splits Xv(IK×J1) and Yv(IK×J2) into C master-mode data subsets and D transition-mode data subsets, with D = C − 1.
4. The multi-modal batch process modeling method based on JIT-RVM according to claim 1, characterized in that step 3 specifically comprises:
perform real-time state estimation with the batch process mechanism model shown in formula (10) to obtain the new real-time state variable estimate xtest, and use the mode division result of step 2 to determine the mode to which xtest belongs and its number κtest; if the current datum belongs to a master mode, extract the κtest-th master-mode historical data subset; if it is in a transition mode, extract the κtest-th transition mode and its two adjacent master-mode historical data subsets, forming the mode historical data subset {Xtest, Ytest} associated with the real-time state estimate xtest;
in the formula, υ is the control variable and η is the system noise.
5. The multi-modal batch process modeling method based on JIT-RVM according to claim 1, characterized in that step 4 specifically comprises:
to find historical data whose characteristics are close to the real-time state variable estimate xtest in terms of both the process characteristics and the spatial characteristics, compute the data mode similarity Sm,n and the data algebraic-space similarity So,n according to formulas (11) and (12), respectively;
Sm,n = exp(−||utest − un||²)   (11)
So,n = exp(−||xtest − xn||²)   (12)
in the formulas, n is the data index, un and utest are the membership vectors of the n-th historical datum and the new state variable datum, respectively, and xn is the n-th historical state datum;
to fuse the two similarities, standardize the computed data mode similarity Sm,n and data algebraic-space similarity So,n with formulas (13) and (14), respectively;
in the formulas, the two sets collect the data mode similarities and the data algebraic-space similarities, the standardized values are the standardized data mode similarity and the standardized data algebraic-space similarity, and the remaining quantity is the number of historical data;
take the average of the standardized data algebraic-space similarity and the standardized data mode similarity as the data-fusion similarity calculation factor;
use formula (15) to compute the data-fusion similarity Sn between the real-time state variable estimate xtest and each historical state datum xn, obtaining the data-fusion similarity set; to select the historical data with higher data-fusion similarity as training data, set a similarity discrimination threshold ζ and compare the data-fusion similarity Sn of each historical datum with ζ; if Sn satisfies formula (16), the datum xn is considered similar to the real-time state variable estimate xtest;
extract all historical state data xn satisfying formula (16), together with their corresponding measured variable values yn, to construct the optimal just-in-time-learning training data set;
build the JIT-RVM model of formula (17) from the optimal just-in-time-learning training data set combined with the RVM algorithm;
in the formula, ε is mutually independent Gaussian white noise with zero mean and variance β⁻¹, w is the weight vector, and the remaining quantities are the number of similar data and the kernel function;
the kernel function is computed with formula (18), where xn is the n-th historical datum similar to the current state;
in the formula, σ is the Gaussian kernel function width parameter;
each model weight wn is given an independent zero-mean Gaussian prior whose variance is controlled by a hyperparameter, and, as shown in formula (19), its posterior distribution is updated with Bayes' theorem;
Σ = (βΦᵀΦ + A)⁻¹   (21)
in the formula, μ is the posterior mean of the weights, Σ is the posterior covariance of the weights, and A is the diagonal matrix formed from the model hyperparameters;
formula (19) is then converted into formula (22);
solve formula (22) with the type-II maximum-likelihood method to compute the optimal estimates of the model hyperparameters, noise variance and weight parameters, together with the relevance vector set, and obtain the JIT-RVM model between the measured variables and the state variables with formula (23);
finally, combine the mechanism model g of the state variables to obtain the multi-modal batch process model shown in formula (24).
CN201810471890.2A 2018-05-17 2018-05-17 JIT-RVM-based multi-modal intermittent process modeling method Active CN108732931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810471890.2A CN108732931B (en) 2018-05-17 2018-05-17 JIT-RVM-based multi-modal intermittent process modeling method


Publications (2)

Publication Number Publication Date
CN108732931A (en) 2018-11-02
CN108732931B (en) 2021-03-26

Family

ID=63938383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810471890.2A Active CN108732931B (en) 2018-05-17 2018-05-17 JIT-RVM-based multi-modal intermittent process modeling method

Country Status (1)

Country Link
CN (1) CN108732931B (en)



Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08310271A (en) * 1995-05-17 1996-11-26 Aisin Seiki Co Ltd Vehicles speed control device
WO2005089434A2 (en) * 2004-03-17 2005-09-29 Seadragon Software, Inc. Method for encoding and serving geospatial or other vector data as images
US8761909B2 (en) * 2007-11-30 2014-06-24 Honeywell International Inc. Batch process monitoring using local multivariate trajectories
CN103336507A (en) * 2013-06-24 2013-10-02 浙江大学 Statistical modeling and on-line monitoring method based on multimodality collaboration time frame automatic division
KR101579732B1 (en) * 2015-01-14 2015-12-30 순천향대학교 산학협력단 A method for novel health monitoring scheme for smart concrete structures
CN104699894A (en) * 2015-01-26 2015-06-10 江南大学 JITL (just-in-time learning) based multi-model fusion modeling method adopting GPR (Gaussian process regression)
CN105511445A (en) * 2015-12-01 2016-04-20 沈阳化工大学 Multi-modal process fault detection method based on local neighbor standardization matrix
CN106919162A (en) * 2015-12-24 2017-07-04 发那科株式会社 The control device of the learning functionality with detection noise producing cause
CN105930860A (en) * 2016-04-13 2016-09-07 闽江学院 Simulated analysis method of classification optimizing model for temperature-sensing big data of intelligent building
CN107403196A (en) * 2017-07-28 2017-11-28 江南大学 Instant learning modeling method based on spectral clustering analysis
CN107766968A (en) * 2017-09-26 2018-03-06 河海大学 Short-term wind speed forecasting method based on CAPSO RVM built-up patterns
CN107798353A (en) * 2017-11-16 2018-03-13 中国民航大学 A kind of batch process monitoring data processing method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YIQI LIU: "Adaptive just-in-time and relevant vector machine based soft-sensors with adaptive differential evolution algorithms for parameter optimization", 《CHEMICAL ENGINEERING SCIENCE》 *
付钊 et al.: "Composite model for batch processes based on just-in-time learning", Journal of Shanghai Jiao Tong University (上海交通大学学报) *
刘伟旻 et al.: "DHSC-based anomaly detection method for measurement data of multi-modal batch processes", CIESC Journal (化工学报) *
刘俊顺: "Research on fast classification algorithms for relevance vector machines based on clustering", China Master's Theses Full-text Database, Basic Sciences (中国优秀硕士学位论文全文数据库 基础科学辑) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918692A (en) * 2018-11-08 2019-06-21 北京华风超越科技有限公司 A kind of statistical model method for building up and device based on numerical simulation
CN109754010A (en) * 2018-12-29 2019-05-14 北京化工大学 A kind of multi-modal division methods of batch process of temporal constraint fuzzy clustering
CN111079856A (en) * 2019-12-28 2020-04-28 北京化工大学 CSJITL-RVM-based multi-period intermittent process soft measurement modeling method
CN111079856B (en) * 2019-12-28 2023-09-01 北京化工大学 Multi-period intermittent process soft measurement modeling method based on CSJITL-RVM
CN111813064A (en) * 2020-07-03 2020-10-23 浙江大学 Industrial process running state online evaluation method based on instant learning thought
CN111813064B (en) * 2020-07-03 2021-06-25 浙江大学 Industrial process running state online evaluation method based on instant learning thought
CN111881993A (en) * 2020-08-03 2020-11-03 长沙有色冶金设计研究院有限公司 Operation mode multilayer grading matching optimization method for copper matte converting process
CN111881993B (en) * 2020-08-03 2024-04-12 长沙有色冶金设计研究院有限公司 Operation mode multilayer hierarchical matching optimization method for copper matte converting process
CN112800253A (en) * 2021-04-09 2021-05-14 腾讯科技(深圳)有限公司 Data clustering method, related device and storage medium
CN113570070A (en) * 2021-09-23 2021-10-29 深圳市信润富联数字科技有限公司 Streaming data sampling and model updating method, device, system and storage medium

Also Published As

Publication number Publication date
CN108732931B (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN108732931A (en) Multi-modal batch process modeling method based on JIT-RVM
Mirahadi et al. Simulation-based construction productivity forecast using neural-network-driven fuzzy reasoning
CN104408518B (en) Based on the neural network learning optimization method of particle swarm optimization algorithm
CN113722985B (en) Method and system for evaluating health state and predicting residual life of aero-engine
CN109472397B (en) Polymerization process parameter adjusting method based on viscosity change
Yang et al. Neural network and GA approaches for dwelling fire occurrence prediction
CN112364560B (en) Intelligent prediction method for working hours of mine rock drilling equipment
CN103676649A (en) Local self-adaptive WNN (Wavelet Neural Network) training system, device and method
CN106296434B (en) Grain yield prediction method based on PSO-LSSVM algorithm
CN114944203A (en) Wastewater treatment monitoring method and system based on automatic optimization algorithm and deep learning
CN108460462A (en) A kind of Interval neural networks learning method based on interval parameter optimization
CN116187835A (en) Data-driven-based method and system for estimating theoretical line loss interval of transformer area
CN112163671A (en) New energy scene generation method and system
CN115982141A (en) Characteristic optimization method for time series data prediction
CN111191823A (en) Production logistics prediction method based on deep learning
CN113762370A (en) Depth network set generation method combined with Gaussian random field
CN112149896A (en) Attention mechanism-based mechanical equipment multi-working-condition fault prediction method
CN112766548A (en) Order completion time prediction method based on GASA-BP neural network
CN113156473A (en) Self-adaptive discrimination method for satellite signal environment of information fusion positioning system
Awadalla et al. Spiking neural network-based control chart pattern recognition
CN108073979A (en) A kind of ultra-deep study of importing artificial intelligence knows method for distinguishing for image
CN113539517B (en) Method for predicting time sequence intervention effect
CN115242428A (en) Network security situation prediction method based on optimized CW-RNN
CN114492988A (en) Method and device for predicting product yield in catalytic cracking process
CN114463994A (en) Chaos and reinforcement learning based traffic flow prediction parallel method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant