US20050187643A1: Parametric universal nonlinear dynamics approximator and use
Classifications
 G05B13/048—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators using a predictor
 G05B13/042—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
 G05B17/02—Systems involving the use of models or simulators of said systems electric
Description
 [0001]This application claims benefit of priority to U.S. Provisional Application 60/545,766 titled “Parametric Universal Nonlinear Dynamics Approximator and Use”, filed Feb. 19, 2004, whose inventors were Bijan Sayyar-Rodsari, Edward Plumer, Eric Hartman, Kadir Liano, and Celso Axelrud.
 [0002]1. Field of the Invention
 [0003]The present invention generally relates to the field of predictive modeling and control, and more particularly to a combined modeling architecture for building numerically efficient dynamic models for systems of arbitrary complexity.
 [0004]2. Description of the Related Art
 [0005]Many systems or processes in science, engineering, and business are characterized by the fact that many different interrelated parameters contribute to the behavior of the system or process. It is often desirable to determine values or ranges of values for some or all of these parameters which correspond to beneficial behavior patterns of the system or process, such as productivity, profitability, efficiency, etc. However, the complexity of most real world systems generally precludes the possibility of arriving at such solutions analytically, i.e., in closed form. Therefore, many analysts have turned to predictive models and optimization techniques to characterize and derive solutions for these complex systems or processes.
 [0006]Predictive models generally refer to any representation of a system or process which receives input data or parameters related to system or model attributes and/or external circumstances/environment and generates output indicating the behavior of the system or process under those parameters. In other words, the model or models may be used to predict behavior or trends based upon previously acquired data. There are many types of predictive models, including linear, nonlinear, analytic, and empirical (e.g., statistical) models, among others, several types of which are described in more detail below.
 [0007]Optimization generally refers to a process whereby past (or synthesized) data related to a system or process are analyzed or used to select or determine optimal parameter sets for operation of the system or process. For example, the predictive models mentioned above may be used in an optimization process to test or characterize the behavior of the system or process under a wide variety of parameter values. The results of each test may be compared, and the parameter set or sets corresponding to the most beneficial outcomes or results may be selected for implementation in the actual system or process.
 [0008]
FIG. 1 illustrates a general optimization process as applied to an industrial system or process 104, such as a manufacturing plant, according to the prior art. It may be noted that the optimization techniques described with respect to the manufacturing plant are generally applicable to all manner of systems and processes. More specifically, FIG. 1 illustrates an optimization system where a computer-based optimization system 102 operates in conjunction with a process (or system) 104 to optimize the process, according to the prior art. In other words, the computer system 102 executes software programs (including computer-based predictive models) that receive process data 106 from the process 104 and generate optimized decisions and/or actions, which may then be applied to the process 104 to improve operations based on specified goals and objectives.
 [0009]Thus, many predictive systems may be characterized by the use of an internal model (e.g., a mathematical model) that represents a process or system 104 for which predictions are made. As mentioned above, predictive model types may be linear, nonlinear, stochastic, or analytical, among others.
 [0010]Generally, mathematical models are developed using one of two approaches (or a combination of both). One approach is to conceptually partition the system into subsystems whose properties are well understood, e.g., from previous experience or use. Each subsystem is then modeled using physical or natural laws and other well-established relationships that have their roots in earlier empirical work. These subsystems are then joined mathematically and a model of the whole system is obtained. The other approach to developing mathematical models is based directly on experimentation. For example, input and output signals from the system being modeled are recorded and subjected to data analysis in order to infer a model. Note that as used herein, static nonlinearity in the input/output mapping of a system is viewed as a special case of the general nonlinear dynamic input/output mapping, and hence the techniques described are also applicable when only a static input/output mapping is to be modeled.
 [0011]The first approach is generally referred to as first-principles (FP) modeling, while the second approach is commonly referred to as empirical modeling (although it should be noted that empirical data are often used in building FP models). Each of these two approaches has substantial strengths and weaknesses when applied to real-world complex systems.
 [0012]For example, regarding firstprinciples models:
 [0013]1. FP models are built based on the science underlying the process being modeled, and hence are better suited for representing the general process behavior over the entire operational regime of the process.
 [0014]However:
 [0015]2. First-principles information is often incomplete and/or inaccurate, and so the model and thus its outputs may lack the accuracy required.
 [0016]3. Tuning of the parameters in the model is needed before the model can be used for optimization and control.
 [0017]4. FP models may be computationally expensive and hence useful for real-time optimization and control only in slower processes. This is particularly apparent when the outputs in FP models are not explicit. For example, consider a model of the form G(y_k, u_k, x_k) = 0, where the output vector y_k is an implicit function of the input vector u_k and the state vector x_k. In this case, an internal solver is needed to solve for y_k at each interval.
 [0018]5. When the process changes, modification of the first principles model is generally expensive. For example, designed experiments may be necessary to obtain or generate the data needed to update the model.
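The implicit-output case in item 4 above can be sketched in a few lines. The cubic form of G and its coefficients below are purely hypothetical stand-ins for a real first-principles relation, and Newton's method stands in for whatever internal solver the model would use at each interval:

```python
# Hypothetical scalar implicit model: G(y, u, x) = y + a*y**3 - (u + x) = 0.
# The output y is not explicit in u and x, so each sample interval needs a root solve.

def solve_output(u, x, a=0.5, y0=0.0, tol=1e-10, max_iter=50):
    """Newton iteration for G(y, u, x) = y + a*y**3 - (u + x) = 0."""
    y = y0
    for _ in range(max_iter):
        g = y + a * y**3 - (u + x)   # residual G(y, u, x)
        dg = 1.0 + 3.0 * a * y**2    # dG/dy, always >= 1 here, so Newton is safe
        step = g / dg
        y -= step
        if abs(step) < tol:
            break
    return y

y = solve_output(u=1.0, x=0.2, a=0.5)
# Check that the implicit equation is satisfied at the solution.
residual = y + 0.5 * y**3 - 1.2
```

The cost of running such a solver at every sample interval is exactly the computational burden the paragraph above describes.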
 [0019]Regarding empirical models:
 [0020]1. Since data capture the non-idealities of the actual process, where data are available, an empirical model can often be more accurate than a first-principles model.
 [0021]However:
 [0022]2. The available data are often highly correlated, and process data alone are not sufficient to unambiguously break the correlation. This is particularly apparent when process operation is recipe-dominated. For example, in a linear system with 2 inputs and 1 output, a recipe may require the two inputs to move simultaneously, one to increase by one unit and the other to decrease by one unit. If the output increases by one unit, the sign and value of the gain from the two inputs to the output cannot be uniquely determined based on these data alone.
 [0023]3. Additional designed experiments are often needed in order to produce the necessary data for system identification; however, designed experiments disrupt the normal operation of the plant and are thus highly undesirable.
 [0024]4. Certain regions or regimes of operation are typically avoided during plant operation, and hence the representative data for that region may not be available.
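The identifiability problem in item 2 above can be reproduced numerically. The toy data below follow the one-up/one-down recipe described there (all numbers are illustrative):

```python
# Recipe-dominated data: u1 increases by one unit whenever u2 decreases by one.
# For a linear model y = g1*u1 + g2*u2, only the difference g1 - g2 is observable.
data = [(t, -t, t) for t in range(-5, 6)]   # (u1, u2, y) samples with u1 = -u2

def sse(g1, g2):
    """Sum of squared errors of a candidate gain pair on the recipe data."""
    return sum((y - (g1 * u1 + g2 * u2)) ** 2 for u1, u2, y in data)

# Two very different gain pairs explain the data equally well (both give SSE = 0),
# so the individual gains cannot be identified from these data alone.
fit_a = sse(1.0, 0.0)   # gains (+1, 0)
fit_b = sse(5.0, 4.0)   # gains (+5, +4); same difference g1 - g2 = 1
```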
 [0025]The complementary strengths and weaknesses of these two modeling routes are widely recognized, and efforts that combine the two are reported in the literature, some examples of which are described below.
 [0026]One approach for using both FP information/models and empirical data is to develop combined models. For example, in “Modeling Chemical Processes Using Prior Knowledge and Neural Networks,” AIChE Journal, vol. 40, p. 1328, 1994, by M. Thompson and M. Kramer, (Thompson (1994)), a proposal is made to combine firstprinciples models with empirical nonparametric models, such as neural network models, in a hybrid architecture to model complex chemical processes, illustrated in
FIG. 2. As FIG. 2 shows, inputs 201 are provided to a default parametric model 202 and a nonparametric model 204 (e.g., a neural network), whose combined (and optionally processed) outputs Z 205 are provided as input to a static nonlinear model 404, which then generates outputs 207. In Thompson's proposed hybrid architecture the neural network (nonparametric model) 204 is responsible for learning the difference between the default FP model 202 and the target data. Although the neural network is a nonparametric estimator capable of approximating this difference, it is also required to provide a negligible contribution to the model output for inputs far from the training data. In other words, the nonparametric model is required to contribute substantially in the operational range of the system, but not outside of this range. The training of the neural network in Thompson is therefore formulated as a semi-infinite programming (SIP) problem (reducible to a constrained nonlinear programming (NLP) problem if all inequalities are finite or infinite inequalities can be transformed into finite constraints) for which SIP solvers (constrained NLP algorithms in the case of an NLP problem) may be used for training.
 [0027]Another example of a combined model is described in “Identification and Optimizing Control of a Rougher Flotation Circuit using an Adaptable Hybrid Neural Model,” Minerals Eng., vol. 10, p. 707, 1997, by F. Cubillos and E. Lima (Cubillos (1997)), where a neural network model is used to model reaction rates for an ideal Continuous Stirred Tank Reactor (CSTR) as a function of temperature and output concentration. In this example, the input and output data for the training of the neural network model are generated synthetically using the ideal CSTR model. Therefore, the neural network model is trained with explicit data for the inputs/outputs of the neural network block in the combined model.
In other words, the neural network block is detached from the combined model structure for training purposes, and is included in the combined model structure for optimization and control after training. Cubillos shows that the combined model has superior generalization capability compared to the neural network models alone, and that the modeling process was easier than synthesizing a FP model based on physical considerations.
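The Thompson-style idea of learning the mismatch between a default model and the data can be sketched as follows. The "process", the default model, and the one-term least-squares correction below are all illustrative assumptions; a real implementation would use a neural network (or another nonparametric estimator) for the correction:

```python
import math

def default_model(u):
    """Incomplete first-principles guess (hypothetical)."""
    return 2.0 * u

def process(u):
    """Stand-in for the true plant, with behavior the FP model misses."""
    return 2.0 * u + 0.5 * math.sin(u)

# Fit the residual (process minus default model) with a crude stand-in for a
# neural network: a one-term least-squares fit of c * sin(u) on training data.
train_u = [0.1 * k for k in range(1, 30)]
num = sum((process(u) - default_model(u)) * math.sin(u) for u in train_u)
den = sum(math.sin(u) ** 2 for u in train_u)
c = num / den   # learned correction coefficient

def hybrid_model(u):
    """Default parametric model plus the empirically learned correction."""
    return default_model(u) + c * math.sin(u)
```

The combined model matches the plant on and near the training range, while far from the data its behavior degrades gracefully toward the default model, which is the motivation for Thompson's negligible-extrapolation requirement.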
 [0028]In “Hybrid First-Principles/Neural Networks Model for Column Flotation,” AIChE Journal, vol. 45, p. 557, 1999, by S. Gupta, P. Liu, S. Svoronos, R. Sharma, N. Abdel-Khalek, Y. Cheng, and H. El-Shall (Gupta (1999)), yet another example of a combined model is presented, where the combined model is used for phosphate column flotation. In this approach, the FP model is obtained from material balances on both phosphate particles and gangue (undesired material containing mostly silica). Neural network models relate the attachment rate constants to the operating variables. A nonlinear optimizer in the form of a combination of simulated annealing and a conjugate gradient algorithm is used for the training of the neural network models.
 [0029]An alternative approach to combining FP knowledge and empirical modeling is to use FP information to impose constraints on the training of the empirical model. An example of this approach is reported in E. Hartman, “Training feedforward neural networks with gain constraints,” Neural Computation, vol. 12, pp. 811-829, April 2000 (Hartman (2000)), where gain information is used as constraints for the training of the neural network models. Hartman develops a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains (i.e., partial derivatives of outputs with respect to inputs) of the learned mapping. Hartman argues that since accurate gains are essential for the use of neural network models for optimization and control, it is only natural to train neural network models subject to gain constraints when they are known through additional means (such as, for example, bounds extracted from FP models or operator knowledge about the sign of a particular gain).
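The flavor of gain-constrained training can be sketched with projected gradient descent. This is an assumption for illustration, not Hartman's actual algorithm: fit y = w*u + b by least squares while enforcing prior knowledge that the gain w cannot be negative (the data values are illustrative and chosen so the unconstrained fit would give a spurious negative gain):

```python
# Noisy, nearly flat data whose unconstrained least-squares slope is negative,
# even though first-principles knowledge says the gain w must satisfy w >= 0.
data = [(0.0, 0.3), (1.0, 0.2), (2.0, 0.1), (3.0, 0.0)]   # (u, y) samples

w, b, lr = -1.0, 0.0, 0.05
for _ in range(500):
    # Gradients of the mean squared error with respect to w and b.
    gw = sum(2 * (w * u + b - y) * u for u, y in data) / len(data)
    gb = sum(2 * (w * u + b - y) for u, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb
    w = max(w, 0.0)   # project onto the feasible set: gain must be non-negative
```

The constrained solution pins the gain at its bound (w = 0) and absorbs the data's spurious negative trend into the bias, which is exactly the behavior one wants when the gain sign is known with certainty.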
 [0030]A further example of including first principles knowledge in the training of an empirical model is a bounded derivative network (BDN) (i.e., the analytical integral of a neural network) as described in “Introducing the state space bounded derivative network for commercial transition control,” IEEE American Control Conference, June 2003, by P. Turner, J. Guiver, and B. Lines of Aspen Technology, Inc. (Turner (2003)), and illustrated in
FIG. 3. In this reference the BDN is proposed as a universal nonlinear approximator. As FIG. 3 shows, in this approach, a state space model 302 is coupled to the BDN 304, and inputs 301 are received by the state space model 302 and by the BDN 304. Based on the received inputs 301, the state space model then provides state information 303 to the BDN 304, as shown, and, based on the received inputs 301 and the received states 303, the BDN generates output predictions 307. As indicated by the name “bounded derivative network”, the parameters of the nonlinear approximator are trained through the application of a constrained NLP solver, where one set of potential constraints is the bounds on input/output gains in the model.
 [0031]Prior art approaches to using combined models (as described above) have used neural network models to represent the variation in a specific set of parameters in a FP model. The overall model is therefore the original FP model with some of its parameters varying depending on the input(s)/state(s) of the system. These prior art approaches are generally inadequate in the following situations:
 [0032]1. When the FP model does not fully describe the process. For example, if FP information for only a part of the process is known, a combined model of the process that is appropriate for optimization and control cannot be built based on the prior art techniques (e.g., using the system of
FIG. 2), even if representative measurements of all the relevant process variables are available.
 [0033]2. When the FP model only implicitly describes the relationship between inputs/states/parameters/outputs. The prior art approaches do not address the issue of training a neural network that models the parameters of an implicit FP model.
 [0034]3. When higher-order fidelity of the input/output mapping (such as first or second order derivatives of the outputs with respect to the inputs) is critical to the usability of the combined model for optimization and control. Prior art does not address the imposition of such constraints in the training of neural network models in the context of combined models as depicted in
FIG. 2.
 [0035]While the system described in Turner (2003) does address the issue of gain constraints in the proposed bounded derivative network (BDN), the training of the BDN is performed with explicit access to the inputs and outputs of the trained model (similar to conventional training of a standalone neural network by an NLP solver), and the issue of bounded derivatives when a FP block appears in series with the output of the BDN is not addressed. More specifically, the bounded derivative network of Turner is used in a Wiener model architecture or structure (i.e., in a series connection with a linear state space model) to construct a nonlinear model for a physical process. The Wiener model architecture is illustrated in
FIG. 4A, where a static nonlinear model follows a linear dynamic model 402. Thus, the BDN of FIG. 3 may be considered a special case of the Wiener model of FIG. 4A.
 [0036]According to the Wiener model structure, the modification of the BDN will only affect the effective gain(s) between the inputs and outputs of the model. The identification of the dynamic behavior of the physical process occurs prior to the training of the BDN, and so changes in the state space model may require retraining of the BDN model. Indeed, the entire theory behind the training of the BDN in Turner (2003) is developed to ensure accurate representation of the process gains in the model. In an alternative but similar approach,
FIG. 4B illustrates a Hammerstein model, where the nonlinear static model 404 precedes the linear dynamic model 402. Similar to the Wiener model structure, the nonlinear static model 404 and the linear dynamic model 402 are developed or trained in isolation from each other, and so modifications in the dynamic model 402 generally require retraining of the nonlinear static model 404. Further information regarding Wiener and Hammerstein models may be found in Adaptive Control, 2nd Edition, 1994, by K. Astrom and B. Wittenmark.
 [0037]Thus, improved systems and methods for combined models and their use are desired.
 [0038]The present invention comprises various embodiments of a system and method for modeling nonlinear processes or systems. More specifically, a parametric universal nonlinear dynamics approximator (PUNDA), also referred to as a PUNDA model, and its use are described.
 [0039]In one embodiment, the PUNDA model includes a nonlinear approximator coupled to a dynamic parameterized model in series. The nonlinear approximator may be a neural network, although any type of nonlinear approximator may be used, including, for example, support vector machines, statistical models, parametric descriptions, Fourier series models, or any other type of empirical or data-based model, among others. In a preferred embodiment, the nonlinear approximator is a universal nonlinear approximator, such that any type of nonlinear mapping may be implemented. The nonlinear approximator operates to provide parameters to the dynamic parameterized model. In some embodiments, the nonlinear approximator (e.g., neural network) may also include a feedback loop, whereby the output of the approximator is provided as further input to itself, thus supporting dependencies of the output upon prior output of the approximator. In some embodiments, the dynamics approximator may reduce to a static function.
 [0040]In a preferred embodiment, the dynamic parameterized model may be a multi-input, multi-output (MIMO) dynamic model implemented with a set of difference equations, i.e., a set of discrete time polynomials. Thus, the dynamic parameterized model may receive its parameters from the nonlinear approximator, and operate accordingly.
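A minimal single-input, single-output sketch of this series structure: a fixed tanh mapping stands in for the trained nonlinear approximator, and a first-order difference equation stands in for the MIMO block. All coefficients below are illustrative assumptions, not values from the patent:

```python
import math

def approximator(u):
    """Stand-in for the trained nonlinear approximator: maps the current input
    to the parameters (a, b) of the difference equation. A real PUNDA model
    would use a neural network here; the tanh mapping is only an assumption."""
    a = 0.5 + 0.4 * math.tanh(u)   # pole location varies with operating point
    b = 1.0 - a                    # chosen so the steady-state gain is unity
    return a, b

def punda_step(y_prev, u):
    """One step of the parameterized difference equation y_k = a*y_{k-1} + b*u_k,
    with (a, b) supplied by the nonlinear approximator at each step."""
    a, b = approximator(u)
    return a * y_prev + b * u

# Simulate a unit step response: the dynamics adapt to the input level,
# yet the output settles at the steady-state value implied by the unity gain.
y = 0.0
for _ in range(200):
    y = punda_step(y, 1.0)
```

The key structural point mirrored here is that the approximator's outputs are never compared to data directly; only the difference-equation output y is observable, which is why the patent trains the approximator through the series connection.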
 [0041]The PUNDA model may be coupled to a physical process or a representation of the process. Process inputs may be provided to the process and to the PUNDA model as input. The process produces process outputs, which may be combined or used with PUNDA model outputs to determine model errors, which may then be provided back to the PUNDA model and used, e.g., with an optimizer, to train the PUNDA model.
 [0042]Although the PUNDA model is described below in terms of a series combination of a neural network model and a set of MIMO difference equations that can be used to model any complex nonlinear dynamic system with any desired degree of accuracy, as noted above, other nonlinear approximators and dynamic parameterized models are also contemplated. For example, in various embodiments, the physical process may be described or represented by the process itself, a first principles model, empirical data, or any combination of the three. For simplicity, in this training configuration of the system, the representation of the process may be referred to as the process.
 [0043]The PUNDA model disclosed herein allows the empirical information and/or the firstprinciples knowledge available about the process to be systematically used in building a computationally efficient model of the physical process that is suitable for online optimization and control of the process, i.e., substantially in real time. Additionally, such a model may be capable of approximating the nonlinear physical process with any desired degree of accuracy.
 [0044]It is noted that partial FP models that by themselves are not sufficient to fully describe a physical process (and hence are currently ignored in practice) could be used to build a representative model of the physical process with the proposed PUNDA structure. The neural network block in the PUNDA model may be trained while it is serially connected to the MIMO difference equation block, and hence, in general, the output of the neural network model may not be directly available. It is expected that the complexities of real-world physical processes may dictate the need for the training of the neural network model under such a combined architecture in a majority of the applications, and indeed, such integrated training is a primary feature and benefit of the present invention.
 [0045]A preferred methodology for the training of the neural network model within the PUNDA architecture is to formulate the training of the neural network parameters as a constrained nonlinear programming problem, which may then be solved with any appropriate NLP solver technology (e.g., Sequential Quadratic Programming (SQP)). The parameters of the neural network model may include: (a) parameters that determine the topology of the neural network model (e.g. number of layers, connectivity of the network), (b) parameters that determine the type/shape of the activation function used at each node, and/or (c) weights/biases in the network, among others.
 [0046]It is generally accepted that a successful model for optimization and control must accurately capture both process gains and dynamics. To ensure the high fidelity of the combined PUNDA model for optimization and control, the constrained NLP problem for the training of the neural network model may include constraints on the derivatives (of any desired order) of the process outputs with respect to the process inputs. Other constraints, such as, for example, mass and energy balances, may also be included.
 [0047]In addition to the derivative constraints (the first order of which are commonly referred to as gain constraints in the literature), the training of the neural network block in the PUNDA model can be constrained to ensure desired dynamic behavior for the PUNDA model. For example, a time constant in the system may be bounded to a certain range based on prior knowledge about the physics of the process. This is a key attribute that distinguishes the PUNDA model from prior art approaches.
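The idea of constraining dynamic behavior during training can be sketched as follows. A brute-force grid search stands in for a real constrained NLP solver such as SQP, and the "true" process, the sampling, and the time-constant bounds are all illustrative assumptions. For a first-order difference equation y_k = a*y_{k-1} + b*u_k, bounding the time constant to between 2 and 10 sample periods amounts to bounding the coefficient a to [exp(-1/2), exp(-1/10)]:

```python
import math

def simulate(a, b, n=30):
    """Unit step response of the model y_k = a*y_{k-1} + b*u_k."""
    y, out = 0.0, []
    for _ in range(n):
        y = a * y + b * 1.0
        out.append(y)
    return out

# Step-response data from a hypothetical "true" process (a = 0.8, b = 0.2).
target = simulate(0.8, 0.2)

# Dynamic constraint from prior knowledge: time constant in [2, 10] samples.
a_lo, a_hi = math.exp(-1.0 / 2.0), math.exp(-1.0 / 10.0)

# Grid search as a stand-in for an NLP solver: minimize the fit error over
# the feasible region defined by the constraint on a.
best = None
for i in range(101):
    a = a_lo + (a_hi - a_lo) * i / 100
    for j in range(101):
        b = j / 100
        err = sum((m - t) ** 2 for m, t in zip(simulate(a, b), target))
        if best is None or err < best[0]:
            best = (err, a, b)
err, a, b = best
```

Because the true pole (a = 0.8) lies inside the admissible time-constant range, the constrained fit recovers it; had the data suggested dynamics outside the range, the constraint would have pinned the estimate at the nearest feasible boundary.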
 [0048]In one embodiment, the PUNDA model may be part of an industrial prediction/control system. For example, the PUNDA model may receive process outputs from the physical process and provide model output to a controller, which in turn provides controller output to a distributed control system (DCS). Note that the controller preferably includes an optimizer which receives, and operates according to, optimizer constraints, as is well known in the art. As is also well known, the DCS may operate to filter or otherwise provide checks or other processing regarding the controller output, e.g., for safety purposes, and to provide process inputs to the physical process, as well as to the controller and PUNDA model. Of course, other components, such as pre- or post-processors, may also be included as desired, such as, for example, between the process and the PUNDA model, for processing the process output data, etc.
 [0049]The (trained) PUNDA model may thus operate to control the process in an adaptive or dynamic manner. Further details regarding the PUNDA model and its training and use are provided below.
 [0050]One embodiment of a method for training a model of a nonlinear process is presented below for an embodiment of the PUNDA model using a neural network and a set of MIMO difference equations, although it should be noted that the method is broadly applicable to other types of PUNDA models, and to other types of nonlinear models in general.
 [0051]First, process inputs/outputs (I/O), i.e., I/O parameters, to be included in the model may be identified, e.g., material inputs and outputs, conditions, such as temperature and pressure, power, costs, and so forth, e.g., via expert knowledge, programmatically through systematic search algorithms, such as correlation analysis, or other approaches or techniques.
 [0052]Data for the process input(s)/output(s) may be collected, e.g., from historical data available from plant normal operation, from other models, assembled or averaged from multiple sources, or collected substantially in real time from an operating process, e.g., from an online source. One or more signal processing operations may optionally be performed on the data, including for example, filtering the data to reduce noise contamination in the data, removing outlier data from the data set (i.e., anomalous data points), data compression, variable transformation, and normalization, among others.
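The optional signal processing steps above might look like the following sketch; the z-score threshold and the data values are illustrative, and filtering, compression, and variable transformation would slot into the same pipeline:

```python
# Collected process data with one anomalous spike (50.0 is the outlier).
raw = [2.0, 2.1, 1.9, 2.2, 50.0, 2.0, 1.8, 2.1]

# Outlier removal by z-score: drop points more than 2 standard deviations
# from the mean (the threshold of 2.0 is an illustrative choice).
mean = sum(raw) / len(raw)
std = (sum((x - mean) ** 2 for x in raw) / len(raw)) ** 0.5
cleaned = [x for x in raw if abs(x - mean) / std < 2.0]

# Min-max normalization of the cleaned data to the range [0, 1].
lo, hi = min(cleaned), max(cleaned)
normalized = [(x - lo) / (hi - lo) for x in cleaned]
```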
 [0053]Prior knowledge about the process may optionally be assembled or gathered, e.g., operator knowledge regarding the sign of a particular gain or a residence time in the system, or a partial or complete first-principles model of the process, e.g., in the form of a set of nonlinear differential or partial differential equations, among others. Well-known methodologies exist to determine or extract constraints, such as derivatives of the outputs with respect to inputs (commonly referred to as gains), from first-principles models or information. The prior knowledge may be processed to determine or create the constraints for the training problem. For example, commercially available software may be used to derive analytical expressions for the first or higher order derivatives of the outputs with respect to the inputs, and these derivatives may then be included in the constraints.
 [0054]An order for the MIMO difference equations may be determined, i.e., the order of the equations comprised in the parameterized dynamic model may be determined. For example, in one embodiment, the order may be determined by an expert, i.e., one or more human experts, or by an expert system. In another embodiment, the order may be determined as a result of a systematic optimization problem, in which case the determination of the order of the model may be performed simultaneously or concurrently with the training of the model.
 [0055]An optimization problem may be formulated in which model parameters are or include decision variables, e.g., where an objective function operates to minimize model errors subject to a set of constraints. Optimization algorithms may be executed or performed to determine the parameters (i.e., values of the parameters) of the PUNDA model.
 [0056]Finally, satisfaction of the constraint set may be verified and the value of the objective function may be computed. If the constraints are not satisfied, or the objective value is not sufficiently small, formulating and solving the model optimization task may be repeated one or more times, e.g., via the use of heuristics or through the application of systematic analysis techniques, among others. For example, in a preferred embodiment, the data-independent gains of the model may be verified using interval arithmetic over the global input region and/or interval arithmetic with input-region partitioning.
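A minimal sketch of gain verification by interval arithmetic, using a hypothetical one-input model y = tanh(w*u + b) whose gain is dy/du = w*(1 - tanh(w*u + b)**2); the model, its parameters, and the input region are assumptions chosen for illustration:

```python
import math

def gain_bounds(w, b, u_lo, u_hi):
    """Interval bound on the gain w*(1 - tanh(w*u + b)**2) over [u_lo, u_hi],
    assuming w > 0 (so tanh(w*u + b) is monotone increasing in u)."""
    t_lo, t_hi = math.tanh(w * u_lo + b), math.tanh(w * u_hi + b)
    # Interval square of [t_lo, t_hi]:
    if t_lo <= 0.0 <= t_hi:
        sq_lo, sq_hi = 0.0, max(t_lo**2, t_hi**2)
    else:
        sq_lo, sq_hi = min(t_lo**2, t_hi**2), max(t_lo**2, t_hi**2)
    # Multiplying by w > 0 preserves the ordering of the interval ends.
    return w * (1.0 - sq_hi), w * (1.0 - sq_lo)

# Bound over the global input region: valid for every u in [-1, 1] without
# sampling, so a positive lower bound verifies the gain sign everywhere.
g_lo, g_hi = gain_bounds(w=2.0, b=0.0, u_lo=-1.0, u_hi=1.0)

# Input-region partitioning: the same verification on four sub-intervals.
parts = [gain_bounds(2.0, 0.0, -1.0 + 0.5 * k, -0.5 + 0.5 * k) for k in range(4)]
```

Because the bounds hold over the entire region rather than at sampled points, this kind of check is data-independent in the sense used above; partitioning the region generally tightens the per-interval bounds.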
 [0057]One embodiment of a method of operation of the PUNDA model in a control application for a physical process, e.g., a physical plant, is described, where the PUNDA model couples to the physical process, and also to a controller which operates to manage or control the process based on outputs from the PUNDA model. As mentioned earlier, however, the methods presented herein are also contemplated as being broadly applicable in a wide variety of application domains, including both physical and nonphysical (e.g., analytical) processes.
 [0058]The model may be initialized to a current status of the physical process to be controlled, e.g., to ensure that the PUNDA model and the physical plant are correctly aligned, and thus that the predictions produced by the PUNDA model are relevant to the physical process. In various embodiments, the initialization may be performed by a human expert, by an expert system, or via a systematic methodology of identifying the initial conditions of the model given available current and past measurements from the physical process, among others.
 [0059]Various attributes or parameters of the combined model and process may be determined or defined, such as, for example, control variable and manipulated variable (CV and MV) target profiles, CV/MV constraint profiles, disturbance variable (DV) profiles, prediction and control horizons, objective function and constraints, and tuning parameters for the controller, among others.
 [0060]A profile for the MV moves or changes, i.e., a trajectory of the MV values, over the control horizon may be generated, the model's response over the prediction horizon may be observed, and the deviation from the desired behavior determined. In one embodiment, the MV profiles may be determined by a human operator, although in a preferred embodiment, the MV profiles may be determined programmatically, e.g., by an optimization algorithm or process. The model response to the presumed MV profile may be calculated over the prediction horizon and compared to the desired behavior and constraints. The appropriateness or suitability of the MV profile may be measured or evaluated via the corresponding value or values of the objective function.
 [0061]Then, an optimal MV profile may be determined. For example, in a preferred embodiment, the generation of the MV trajectory and the determination of the deviation from the desired behavior may be performed iteratively with different MV profiles until a satisfactory predicted system response is obtained, preferably by using an optimizer to systematically search for the optimal MV profiles, e.g., by systematically seeking those MV moves or changes for which the objective function is improved (e.g., minimized when the objective function reflects the control cost) while respecting constraints. The determined optimal MV profile may be considered or referred to as a decision, and the corresponding model response may be considered or referred to as the predicted response of the process.
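The iterative search described above can be sketched as follows; the first-order plant model, the horizons, the MV bounds, and the simple coordinate-descent optimizer are all illustrative stand-ins for the optimizer and PUNDA model of the embodiment.

```python
# Illustrative MV-profile search: a first-order model (coefficients invented),
# a control horizon of 3 moves, a prediction horizon of 12 steps, the MV held
# at its last value beyond the control horizon, and coordinate descent as a
# stand-in for a real optimizer.

A_COEF, B_COEF = 0.8, 0.4        # model y[k+1] = a*y[k] + b*u[k]
TARGET, Y0 = 1.0, 0.0
M, P = 3, 12                     # control and prediction horizons
U_MIN, U_MAX = 0.0, 1.5          # MV constraint profile

def predict(mv_moves):
    """Model response over the prediction horizon for the given MV moves."""
    y, out = Y0, []
    for k in range(P):
        u = mv_moves[min(k, M - 1)]   # MV constant beyond the control horizon
        y = A_COEF * y + B_COEF * u
        out.append(y)
    return out

def objective(mv_moves):
    """Control cost: squared deviation from the target profile."""
    return sum((y - TARGET) ** 2 for y in predict(mv_moves))

def optimize_mv(iters=200, step=0.01):
    """Coordinate descent over the MV moves, respecting the MV bounds."""
    u = [0.5] * M
    for _ in range(iters):
        for j in range(M):
            for cand in (u[j] - step, u[j] + step):
                cand = min(max(cand, U_MIN), U_MAX)
                trial = u[:j] + [cand] + u[j + 1:]
                if objective(trial) < objective(u):
                    u = trial
    return u
```

The final iterate of `optimize_mv` plays the role of the decision, and `predict` of the predicted response.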
 [0062]Information related to or indicating the MV profiles and corresponding model response (e.g., MV profiles and predicted system response) may optionally be displayed and/or logged, as desired. A portion or the entirety of the decision (MV) profiles may be transmitted to a distributed control system (DCS) to be applied to the physical system. In one embodiment, final checks or additional processing may be performed by the DCS. For example, the DCS may check to make sure that a decision (e.g., a value or set of values of the manipulated variables) does not fall outside a range, e.g., for safety. If the value(s) is/are found to be outside a valid or safe range, the value(s) may be reset, and/or an alert or alarm may be triggered to call attention to the violation.
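A minimal sketch of such a final range check follows; the safety limits and the clamp-to-nearest-limit reset policy are invented for illustration.

```python
# Hedged sketch of the final range check a DCS might apply to decision values
# before they reach the plant; SAFE_MIN/SAFE_MAX are illustrative.

SAFE_MIN, SAFE_MAX = 0.0, 10.0

def dcs_check(decision):
    """Clamp each MV value into the safe range; report whether an alarm fired."""
    checked, alarm = [], False
    for v in decision:
        if v < SAFE_MIN or v > SAFE_MAX:
            alarm = True                          # call attention to the violation
            v = min(max(v, SAFE_MIN), SAFE_MAX)   # reset to the nearest safe value
        checked.append(v)
    return checked, alarm
```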
 [0063]The output of the DCS, e.g., the (possibly modified) decision profiles, may be provided as actual input to the physical process, thereby controlling the process behavior, and the input to the physical process (i.e., the output of the DCS) and the actual process response (i.e., the actual process outputs) may be measured. In a preferred embodiment, the information may be fed back to the PUNDA model, where the actual process input/output measurements may be used to improve the estimate of the current status of the process in the model, and to produce a new deviation from the desired system response. The method may then repeat, dynamically monitoring and controlling the process in an ongoing manner, attempting to satisfy the objective function subject to the determined or specified constraints.
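The feedback of measured process outputs into the model can be illustrated with a simple additive bias update; the plant/model mismatch and the damping factor of 0.5 below are fabricated for this sketch and are not part of the embodiment.

```python
# Sketch of the feedback step: the measured process output is compared with
# the model prediction and the mismatch is folded back into the model as an
# additive bias, improving the estimate of the current process status.

def model_predict(u, bias=0.0):
    return 2.0 * u + bias          # illustrative model: gain of 2

def plant(u):
    return 2.0 * u + 0.3           # "true" process has an unmodeled offset

bias = 0.0
for _ in range(5):                 # repeat: apply, measure, compare, feed back
    u = 1.0
    predicted = model_predict(u, bias)
    measured = plant(u)
    bias += 0.5 * (measured - predicted)   # damped feedback update
```

After a few iterations the bias approaches the unmodeled offset, so subsequent predictions track the measured process outputs.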
 [0064]In one embodiment, the input/output of the process may be used to continue training the PUNDA model online. Alternatively, in other embodiments, the model may be decoupled intermittently for further training, or a copy of the model may be created and trained offline while the original model continues to operate, with the newly trained version substituted for the original at a specified time or under specified conditions.
 [0065]A better understanding of the present invention can be obtained when the following detailed description of the preferred embodiment is considered in conjunction with the following drawings, in which:
 [0066]
FIG. 1 illustrates an optimization system in which a computer-based optimization system operates in conjunction with a process to optimize the process, according to the prior art;  [0067]
FIG. 2 is a block diagram of a combined model using parametric and nonparametric models, according to the prior art;  [0068]
FIG. 3 illustrates a state space bounded derivative network, according to the prior art;  [0069]
FIGS. 4A and 4B illustrate Wiener and Hammerstein model structures, according to the prior art;  [0070]
FIG. 5A illustrates a parametric universal nonlinear dynamics approximator in a training configuration, according to one embodiment of the invention;  [0071]
FIG. 5B illustrates the parametric universal nonlinear dynamics approximator of FIG. 5A in an industrial control system, according to one embodiment of the invention;  [0072]
FIG. 6 illustrates a node in a nonlinear approximator network, according to one embodiment;  [0073]
FIG. 7A illustrates an exemplary neural network, according to one embodiment;  [0074]
FIG. 7B illustrates an exemplary node in the neural network of FIG. 7A, according to one embodiment;  [0075]
FIG. 8 flowcharts one embodiment of a method for training a model, according to one embodiment of the present invention; and  [0076]
FIG. 9 flowcharts one embodiment of a method for operating a combined model, according to one embodiment of the present invention.  [0077]While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
 [0000]Incorporation by Reference
 [0078]The following references are hereby incorporated by reference in their entirety as though fully and completely set forth herein:
 [0079]U.S. patent application Ser. No. 10/350,830, titled “Parameterizing a Steady State Model Using Derivative Constraints”, filed Jan. 24, 2003, whose inventor was Gregory D. Martin.
 [0000]Terms
 [0080]The following is a glossary of terms used in the present application:
 [0081]Objective Function—a mathematical expression of a desired behavior or goal.
 [0082]Constraint—a limitation on a property or attribute used to limit the search space in an optimization process.
 [0083]Optimizer—a tool or process that operates to determine an optimal set of parameter values for a system or process by solving an objective function, optionally subject to one or more constraints.
 [0084]Control Variables—process outputs, e.g., output states of the process or system being controlled.
 [0085]Manipulated Variables—manipulable inputs to the process being controlled.
 [0086]Disturbance Variables—inputs that are not manipulable or controllable, e.g., ambient temperature/pressure, etc., but that nevertheless affect the process.
 [0087]Target Profile—a desired profile or trajectory of variable values, i.e., a desired behavior of a variable, e.g., of a control variable or manipulated variable.
 [0088]Control Horizon—the period of time extending from the present into the future in which one plans to move or change manipulated variables. Beyond this horizon the MV is assumed to stay constant at its last or most recent value in the control horizon.
 [0089]Prediction Horizon—the period of time extending from the present into the future over which the process or system response is monitored and compared to a desired behavior. The prediction horizon is usually longer than the control horizon.
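For example, the relationship between the two horizons can be illustrated with a small helper that pads a three-move MV profile out to the prediction horizon by holding the last move; the profile values here are arbitrary.

```python
# Illustrative horizon relationship: the MV trajectory has entries only over
# the control horizon; beyond it, the MV is assumed to stay at its most
# recent value for the remainder of the prediction horizon.

def extend_mv(mv_moves, prediction_horizon):
    """Pad the MV profile to the prediction horizon with its last value."""
    last = mv_moves[-1]
    return mv_moves + [last] * (prediction_horizon - len(mv_moves))
```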
 [0090]Memory Medium—Any of various types of memory devices or storage devices. The term "memory medium" is intended to include an installation medium, e.g., a CD-ROM, floppy disks, or tape device; a computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; or a nonvolatile memory such as a magnetic media, e.g., a hard drive, or optical storage. The memory medium may comprise other types of memory as well, or combinations thereof. In addition, the memory medium may be located in a first computer in which the programs are executed, or may be located in a second different computer which connects to the first computer over a network, such as the Internet. In the latter instance, the second computer may provide program instructions to the first computer for execution. The term "memory medium" may include two or more memory mediums which may reside in different locations, e.g., in different computers that are connected over a network.
 [0091]Carrier Medium—a memory medium as described above, as well as signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a bus, network and/or a wireless link.
 [0092]Medium—includes one or more of a memory medium, carrier medium, and/or programmable hardware element; encompasses various types of mediums that can either store program instructions/data structures or can be configured with a hardware configuration program. For example, a medium that is “configured to perform a function or implement a software object” may be 1) a memory medium or carrier medium that stores program instructions, such that the program instructions are executable by a processor to perform the function or implement the software object; 2) a medium carrying signals that are involved with performing the function or implementing the software object; and/or 3) a programmable hardware element configured with a hardware configuration program to perform the function or implement the software object.
 [0093]Program—the term “program” is intended to have the full breadth of its ordinary meaning. The term “program” includes 1) a software program which may be stored in a memory and is executable by a processor or 2) a hardware configuration program useable for configuring a programmable hardware element.
 [0094]Software Program—the term "software program" is intended to have the full breadth of its ordinary meaning, and includes any type of program instructions, code, script and/or data, or combinations thereof, that may be stored in a memory medium and executed by a processor. Exemplary software programs include programs written in text-based programming languages, such as C, C++, Pascal, Fortran, Cobol, Java, assembly language, etc.; graphical programs (programs written in graphical programming languages); assembly language programs; programs that have been compiled to machine language; scripts; and other types of executable software. A software program may comprise two or more software programs that interoperate in some manner.
 [0095]Computer System—any of various types of computing or processing systems, including a personal computer system (PC), mainframe computer system, workstation, network appliance, Internet appliance, personal digital assistant (PDA), television system, grid computing system, or other device or combinations of devices. In general, the term “computer system” can be broadly defined to encompass any device (or combination of devices) having at least one processor that executes instructions from a memory medium.
 [0000]FIGS. 5A and 5B—A Parametric Universal Nonlinear Dynamics Approximator
 [0096]
FIGS. 5A and 5B illustrate a parametric universal nonlinear dynamics approximator (PUNDA), according to one embodiment. It should be noted that the block diagrams of FIGS. 5A and 5B are simplified depictions and are intended to be exemplary only. In other words, components that may be required in actual systems are omitted from the figures for clarity, such as, for example, controller blocks, optimizers, input and output processors, and so forth, these items not being necessary to understand the present invention. FIG. 5A is a high-level block diagram of a PUNDA model 506, which uses a new architecture for combined models, coupled to a physical process (or system) 104 or representation thereof, for purposes of training the PUNDA model 506. FIG. 5B is a block diagram illustrating the use of the PUNDA model in an industrial system. The PUNDA model may be stored on a memory medium of a computer system, and executed by a processor to implement the operations described herein.  [0097]As
FIG. 5A shows, in this embodiment, the PUNDA model 506 includes a nonlinear approximator 502 coupled in series to a dynamic parameterized model 504. In one embodiment, the nonlinear approximator 502 may be a neural network, although any type of nonlinear approximator may be used, including, for example, support vector machines, statistical models, parametric descriptions, Fourier series models, or any other type of empirical or data-based model, among others. In a preferred embodiment, the nonlinear approximator is a universal nonlinear approximator, such that any type of nonlinear mapping may be implemented. The nonlinear approximator 502 operates to provide parameters p̄ to the dynamic parameterized model 504, as shown. As indicated, in some embodiments, the nonlinear approximator (e.g., neural network) 502 may also include a feedback loop 505, whereby the output of the approximator is provided as further input to itself, thus supporting dependencies of the output upon prior output of the approximator.  [0098]In a preferred embodiment, the dynamic parameterized model 504 may be a multi-input, multi-output (MIMO) dynamic model implemented with a set of difference equations, i.e., a set of discrete time polynomials, an example of which is provided below. Thus, the dynamic parameterized model 504 may receive its parameters p̄ from the nonlinear approximator 502, and operate accordingly.
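A minimal sketch of this series composition follows: a tiny network maps the current input (operating point) to the parameters of a first-order difference equation, which then produces the model output. The one-hidden-unit topology, all weights, and the first-order model are invented for illustration; a real PUNDA model would be trained under the combined structure as described below.

```python
import math

# Series composition sketch of FIG. 5A: nonlinear approximator -> parameters
# -> dynamic parameterized model. Everything numeric here is illustrative.

def nonlinear_approximator(u):
    """One-hidden-unit network producing the parameter vector p = (a, b)."""
    h = 1.0 / (1.0 + math.exp(-(1.5 * u - 0.5)))   # sigmoidal hidden unit
    a = 0.5 + 0.4 * h                               # pole varies with operating point
    b = 1.0 - 0.5 * h                               # gain varies with operating point
    return a, b

def punda_step(y_prev, u):
    """One step of the dynamic parameterized model y[k] = a*y[k-1] + b*u[k],
    with (a, b) supplied by the nonlinear approximator."""
    a, b = nonlinear_approximator(u)
    return a * y_prev + b * u

y = 0.0
for u in [0.2, 0.4, 0.8, 0.8, 0.8]:    # run the combined model forward
    y = punda_step(y, u)
```

Note that the parameters, and hence the model's gain and dynamics, change with the operating point, which is the essential difference from a fixed-coefficient difference equation.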
 [0099]As also shown in
FIG. 5A, the PUNDA model 506 may be coupled to the physical process 104 or a representation of the process 104. Process inputs 501 may be provided to the process 104 and to the PUNDA model 506 as input. The process 104 produces process outputs 503, which may be combined or used with PUNDA model outputs 507 to determine model errors 509, as shown. These model errors 509 may then be provided back to the PUNDA model and used, e.g., with an optimizer, to train the PUNDA model.  [0100]In the descriptions that follow, the PUNDA model 506 is described in terms of a series combination of a neural network model and a set of MIMO difference equations that can be used to model any complex nonlinear dynamic system with any desired degree of accuracy, although, as noted above, other nonlinear approximators and dynamic parameterized models are also contemplated.
 [0101]For example, in various embodiments, the physical process 104 of
FIG. 5A may be described or represented by the process itself, a first principles model, empirical data, or any combination of the three, among others. Examples of first principles models include a state space description of the process in the form x_{k+1}=F_k(x_k, u_k, p_k), y_k=G_k(x_k, u_k, p_k), or input/output difference equations in the form y_k=G_k(y_{k-1}, . . . , y_{k-N}, u_k, . . . , u_{k-M}, p_k). Here x_k is the state vector, u_k is the input vector (manipulated or disturbance variables), p_k is the parameter vector, and y_k is the output vector for the process. Examples of empirical data include test data for all process inputs/outputs, or correlated measurements from normal operation of the process, e.g., plant, for certain input/output pairs. Other representations are also contemplated, including, for example, statistical models, parametric descriptions, Fourier series models, and empirical models, among others. For simplicity, in this training configuration of the system, the representation of the process may be referred to as the process 104.  [0102]The PUNDA model disclosed herein allows the empirical information and/or the first-principles knowledge available about the process to be systematically used in building a computationally favorable (i.e., efficient) model of the physical process that is suitable for online optimization and control of the process. In other words, the computations may be made substantially in real time. Additionally, such a model may be capable of approximating the nonlinear physical process with any desired degree of accuracy, as will be described in detail below.
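For instance, a first-principles state-space description can be simulated directly once discretized; the F and G below (a toy tank-level balance with invented coefficients) merely illustrate the x_{k+1}=F_k(x_k, u_k, p_k), y_k=G_k(x_k, u_k, p_k) form.

```python
# Hedged example of simulating a discretized state-space description.
# F and G are purely illustrative stand-ins for a first principles model.

def F(x, u, p):
    """State update: level integrates inflow u minus outflow p*x."""
    return x + 0.1 * (u - p * x)

def G(x, u, p):
    """Output map: the measured level."""
    return x

def simulate(u_seq, x0=0.0, p=0.5):
    """Run the discrete-time model over an input sequence."""
    x, ys = x0, []
    for u in u_seq:
        x = F(x, u, p)
        ys.append(G(x, u, p))
    return ys
```

With a constant inflow of 1.0 and p=0.5, the simulated level settles at the steady state u/p = 2.0.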
 [0103]It is noted that partial FP models that by themselves are not sufficient to fully describe a physical process (and hence are currently ignored in practice) could be used to build a representative model of the physical process with the proposed PUNDA structure. The neural network block 502 in the proposed PUNDA model may be trained while it is serially connected to the MIMO difference equation block 504, and hence, in general, the output of the neural network model 502 may not be directly available. It is expected that the complexities of real-world physical processes may dictate the need for the training of the neural network model 502 under such combined architecture in a majority of the applications, and indeed, such integrated training is a primary feature and benefit of the present invention.
 [0104]A preferred methodology for the training of the neural network model 502 within the PUNDA architecture of
FIG. 5A is to formulate the training of the neural network parameters as a constrained nonlinear programming problem. This constrained NLP problem may then be solved with any appropriate NLP solver technology (e.g., Sequential Quadratic Programming (SQP)). The parameters of the neural network model may include: (a) parameters that determine the topology of the neural network model (e.g., number of layers, connectivity of the network), (b) parameters that determine the type/shape of the activation function used at each node, and/or (c) weights/biases in the network, among others.  [0105]It is generally accepted that a successful model for optimization and control must accurately capture both process gains and dynamics. To ensure the high fidelity of the combined PUNDA model for optimization and control, the constrained NLP problem for the training of the neural network model 502 may include constraints on the derivatives (of any desired order) of the process outputs with respect to the process inputs. Other constraints, such as, for example, mass and energy balances, may also be included. Potential sources of information for such constraints include first principle models and operator knowledge. A variety of techniques may be used to translate such information into constraints for the NLP problem. For example, one approach is to use commercially available software, such as, for example, Maple, provided by Waterloo Maple, Inc., to derive analytical expressions for the first (or higher order) derivatives of the outputs with respect to inputs in extremely sophisticated first principles models. The derived expressions may then be included in the constraint set for the NLP problem of neural network training. For further information regarding the use of derivative constraints for parameterizing models, please see U.S. patent application Ser. No. 10/350,830, titled "Parameterizing a Steady State Model Using Derivative Constraints", which was incorporated by reference above.
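The flavor of such constrained training can be sketched as follows. A real embodiment would hand the constraints directly to an SQP solver; to stay self-contained, this sketch enforces a first-order derivative (gain) constraint dy/du ≥ 0 through a quadratic penalty, uses a single sigmoidal hidden unit, a finite-difference gradient, and invented training data.

```python
import math

# Sketch: neural network training as a (penalized) constrained NLP with a
# gain constraint. All data, weights, and hyperparameters are illustrative.

DATA = [(u / 10.0, (u / 10.0) ** 2) for u in range(11)]  # monotone target

def net(w, u):
    """One sigmoidal hidden unit with linear output: w = (w1, b1, w2, b2)."""
    w1, b1, w2, b2 = w
    return w2 / (1.0 + math.exp(-(w1 * u + b1))) + b2

def gain(w, u, eps=1e-4):
    """Finite-difference estimate of the gain dy/du."""
    return (net(w, u + eps) - net(w, u - eps)) / (2.0 * eps)

def loss(w):
    fit = sum((net(w, u) - y) ** 2 for u, y in DATA)
    violation = sum(max(0.0, -gain(w, u)) ** 2 for u, _ in DATA)
    return fit + 100.0 * violation           # penalized gain constraint

def train(steps=400):
    """Gradient descent with backtracking on the penalized objective."""
    w, lr = [1.0, 0.0, 1.0, 0.0], 0.05
    for _ in range(steps):
        grad = []
        for j in range(len(w)):              # finite-difference gradient
            wp, wm = list(w), list(w)
            wp[j] += 1e-5
            wm[j] -= 1e-5
            grad.append((loss(wp) - loss(wm)) / 2e-5)
        cand = [wj - lr * g for wj, g in zip(w, grad)]
        if loss(cand) < loss(w):
            w = cand
        else:
            lr *= 0.5                        # backtrack on a bad step
    return w
```

The trained weights fit the data while keeping the estimated gain (approximately) non-negative over the training inputs, which is the essence of the gain-constrained training described above.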
 [0106]In addition to the derivative constraints (the first order of which are commonly referred to as gain constraints in the literature), the training of the neural network block in the PUNDA model can be constrained to ensure desired dynamic behavior for the PUNDA model. For example, a time constant in the system may be bounded to a certain range based on prior knowledge about the physics of the process. This is a key attribute that distinguishes the PUNDA model from prior art approaches.
 [0107]Thus, in contrast to the Wiener and Hammerstein model architectures described earlier, in the PUNDA model disclosed herein, the notion of decomposing the nonlinear dynamic behavior of a physical system into linear (or even nonlinear) dynamics and static input/output mappings is completely avoided. In the PUNDA model, the identification of the dynamic behavior of the physical process and the input/output static mappings (i.e., gain relationships) are performed simultaneously. The nonlinear approximator block 502, e.g., neural network, in the PUNDA model 506 specifies how the parameters of the dynamic parameterized model 504, e.g., the MIMO difference equation block, may vary as a function of process operating conditions, and the gain and dynamic behavior of the PUNDA model are global properties of the entire PUNDA model. Therefore, a desired gain behavior may be enforced in the training of the PUNDA model in precisely the same way that a desired dynamic behavior is enforced.
 [0108]Therefore, the PUNDA model disclosed here departs greatly from the classical notions of Wiener and Hammerstein models for describing nonlinear dynamic systems where the behavior of the nonlinear dynamic system is conceptually decomposed into a linear dynamic system in series with a static nonlinear system (see
FIGS. 4A and 4B, described above). As described above, in a PUNDA model, the nonlinear approximator block 502 determines and provides the parameters of the dynamic parameterized model 504, e.g., the MIMO difference equations block, and therefore the input/output relationship in the PUNDA model does not pre-impose the conceptual decomposition inherent in the Wiener and Hammerstein model architectures or structures (for further information related to Wiener and Hammerstein models, please see M. Henson and D. Seborg, Nonlinear Process Control, Prentice Hall, 1997). It should be noted, however, that the Wiener and Hammerstein models may be derived from the PUNDA model as special cases if certain simplifications are applied to the PUNDA model.  [0109]Turning now to
FIG. 5B, a simplified block diagram of the PUNDA model of FIG. 5A is illustrated as part of an industrial prediction/control system. As FIG. 5B shows, controller 512 receives process outputs 503 from the physical process 104 and provides controller output 515 to a distributed control system (DCS) 516. Note that the controller 512 preferably includes an optimizer 514 that receives, and operates according to, optimizer constraints 513, as is well known in the art. As FIG. 5B also shows, the controller 512 also includes or couples to PUNDA model 506. The optimizer 514 provides trial model inputs 508 (e.g., MVs) to the PUNDA model 506, and the PUNDA model 506 provides resulting PUNDA model output 507 (e.g., CVs) back to the optimizer 514. As is well known in the art of optimization, the optimizer 514 and PUNDA model 506 operate in an iterative manner to generate an optimal set of MVs as controller output 515. In other words, in a preferred embodiment, the controller output 515 is the final iterate of the trial model input 508.  [0110]The DCS 516 operates to receive the controller output 515, and provide process inputs 501 to both the physical process 104 and the controller 512, as shown. As is well known, the process inputs 501 may be used to control the operation of the physical process 104, and may also be used by the controller 512, e.g., for control optimization and/or adaptive training of the PUNDA model 506. As is also well known, the DCS 516 may operate to filter or otherwise provide checks or other processing regarding the controller output 515, e.g., for safety purposes. Of course, other components, such as pre- or post-processors, may also be included as desired, such as, for example, between the process 104 and the controller 512, e.g., for processing the process output data 503, etc.
 [0111]The (trained) PUNDA model 506 may thus operate to control the process 104 in an adaptive or dynamic manner. Further details regarding the PUNDA model and its training and use are provided below.
 [0000]MultiInput MultiOutput Parametric Difference Equations
 [0112]As is well known in the art, FP or fundamental models are generally implemented as a set of partial differential equations. Standard methods for translating a differential equation into a difference equation model are well established (see, for example, R. Middleton and G. Goodwin, Digital Control and Estimation: A Unified Approach, Prentice Hall, 1990). Therefore, the approach disclosed herein may also be applied to systems described in the continuous time domain using the following general description:
ẋ(t) = F_t(u(t), x(t), p(t))    (1)
y(t) = G_t(u(t), x(t), p(t))  [0113]Representing the system of Eq. (1) in terms of a discrete time or difference formulation gives:
x_k = F_k(u_k, x_{k-1}, p_k)    (2)
y_k = G_k(u_k, x_{k-1}, p_k)
where x_k ∈ R^{N_x×1} is the state vector, u_k ∈ R^{N_u×1} is the input vector, y_k ∈ R^{N_y×1} is the output vector, and p_k ∈ R^{N_p×1} is the parameter vector at time k. Note that for clarity of the derivation, x_k and y_k are defined as explicit functions of state/input/parameters. Assuming that the system is initially at (x^{ic}, u^{ic}, y^{ic}, p^{ic}), the state and the output of the system can be universally approximated by:

x_k = x^{ic} + Σ_{i=1}^{I_xx} α_{x,i}(δx_{k-1})^i + Σ_{i=1}^{I_xu} β_{x,i}(δu_k)^i + Σ_{i=1}^{I_xp} γ_{x,i}(δp_k)^i
      + ζ_{x,xu}(δx_{k-1})(δu_k) + ζ_{x,ux}(δu_k)(δx_{k-1}) + ζ_{x,xp}(δx_{k-1})(δp_k) + ζ_{x,px}(δp_k)(δx_{k-1})
      + ζ_{x,up}(δu_k)(δp_k) + ζ_{x,pu}(δp_k)(δu_k) + H.O.C.T.

y_k = y^{ic} + Σ_{i=1}^{I_yx} α_{y,i}(δx_{k-1})^i + Σ_{i=1}^{I_yu} β_{y,i}(δu_k)^i + Σ_{i=1}^{I_yp} γ_{y,i}(δp_k)^i
      + ζ_{y,xu}(δx_{k-1})(δu_k) + ζ_{y,ux}(δu_k)(δx_{k-1}) + ζ_{y,xp}(δx_{k-1})(δp_k) + ζ_{y,px}(δp_k)(δx_{k-1})
      + ζ_{y,up}(δu_k)(δp_k) + ζ_{y,pu}(δp_k)(δu_k) + H.O.C.T.    (3)
where the parameter matrices α_{x,i}, . . . , γ_{x,i}, ζ_{x,xu}, . . . , ζ_{y,pu} indicate or highlight the parametric nature of the difference equations describing the evolution of the state and output vectors of the nonlinear system under a transition, and where H.O.C.T. stands for "higher order coupling terms" of the Taylor series expansion. Note that the model form of Eq. (3) may be used to model or approximate phenomena, e.g., as represented by Eq. (2), of any order, and to any accuracy desired, in that the order of the difference equations may be specified, and the higher order coupling terms included as desired. The universal approximation property of the model of Eq. (3) may be proven by simply setting the coefficients in Eq. (3) to the values of the coefficients in a Taylor series expansion of Eq. (2), as is well known in the art.  [0114]A special case of importance is when the state vector in Eqs. (2) or (3) can be constructed as an explicit function of current and past inputs/outputs. In this case the MIMO difference equation block may be modeled as a function of inputs/outputs (present and past) only, which is extremely efficient for online optimization/control. This special case includes systems where the evolution of the state is linear, i.e., F_k in Eq. (2) is a linear vector function. M. Phan, R. Lim, and R. Longman, "Unifying input-output and state-space perspectives of predictive control," tech. rep., Dept. of Mech. & Aero. Eng., Princeton University, 1998, show that for a linear vector function F_k in Eq. (2), if the system is observable, an appropriate number of past inputs/outputs is enough to construct the state vector completely. Therefore, the output y at any given time in the future can be expressed solely as a function of past inputs/outputs and current and future inputs. For example, under linear state and output equations in Eq. (3), the MIMO difference equation block can be replaced with:
y_k = y^{init} + Σ_{i=1}^{Y_past} A_i δy_{k-i} + Σ_{i=1}^{U_past} B_i δu_{k-i}    (4)
where Y_past and U_past are the number of past outputs and inputs required to fully construct the state vector, and A_i and B_i are coefficient matrices of appropriate dimension. The bias term y^{init} is introduced as a parameter that encompasses both y^{ic} and the contributions from the parameter perturbation (e.g., γ_{x,1}(δp_k) in the state update and γ_{y,1}(δp_k) in the output update in Eq. (3)). Note that y^{init}, A_i, and B_i are varying parameters that are the outputs of the nonlinear approximator block (e.g., neural network) 502 in FIG. 5A. The mathematical foundation for the training of such models is described below, as is a generally applicable methodology for constructing the constraint set for the training of the nonlinear approximator model 502 in the case of a neural network.  [0115]It is contemplated that in most if not all cases, even a low order expansion in Eq. (3), i.e., I_{xx}= . . . =I_{yp}≦n with n small, and with the higher order coupling or cross terms dropped, is an appropriate parametric model for representing a complex nonlinear system if the coefficients α_{x,1}, β_{x,1}, γ_{x,1}, α_{y,1}, β_{y,1}, γ_{y,1}, ζ_{x,xu}, . . . , ζ_{y,up}, and ζ_{y,pu} are outputs of a nonlinear model, such as a neural network, trained under the combined model structure depicted in
FIG. 5A. The main advantage of a low order model is that it is computationally efficient for online optimization and control.  [0116]The parametric nature of the model facilitates easier maintenance of the models, in that deterioration of the model can be traced back to parameters, and online constrained training could be used to reduce parameter errors. It is noted that for n≦2, these parameters may be related to physically meaningful properties of the dynamic system, such as gains, damping factors, and time constants, hence further facilitating the maintenance of the model by operation personnel.
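As a concrete sketch of such a low order parametric model, the following toy example implements the input/output form of Eq. (4) with a single past output term and a single past input term, with the bias and coefficients supplied by a stand-in for the nonlinear approximator block; the parameter schedules are invented for illustration.

```python
# Low order instance of Eq. (4): y[k] = y_init + sum_i A_i*dy[k-i]
#                                        + sum_i B_i*du[k-i],
# with (y_init, A_i, B_i) varying with the operating point.

def approximator_params(u_now):
    """Stand-in nonlinear map from the current input to (y_init, A, B)."""
    y_init = 0.1 * u_now
    A = [0.6 - 0.1 * u_now]      # one past-output coefficient
    B = [0.4 + 0.2 * u_now]      # one past-input coefficient
    return y_init, A, B

def output_update(dy_past, du_past, u_now):
    """One evaluation of the parametric MIMO difference equation block."""
    y_init, A, B = approximator_params(u_now)
    return (y_init
            + sum(a * dy for a, dy in zip(A, dy_past))
            + sum(b * du for b, du in zip(B, du_past)))
```

Because the model is linear in the past input/output deviations for fixed parameters, each evaluation is a handful of multiply-adds, which is what makes the low order form attractive for online use.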
 [0000]Problem Formulation for the Training of the Nonlinear Model
 [0117]In one embodiment, the training of the nonlinear approximator block 502 in the PUNDA model (see
FIG. 5A) may be formulated as a constrained nonlinear optimization problem for a simple feedforward neural network with linear input and output layers and a single hidden layer with a sigmoidal activation function. However, as noted earlier, the derivation described below applies to any nonlinear approximator used in the systems of FIGS. 5A and 5B. Examples of alternate nonlinear approximators include, but are not limited to, a nonlinear approximator with a different activation function (e.g., an nth order integral of the sigmoid function, with n≧1), or a different topology (e.g., a different number of nodes, layers, and/or connectivity).  [0118]A node in the general nonlinear approximator block 502 may be represented by the block diagram shown in
FIG. 6. This basic building block may appear at any position in the nonlinear approximator network. Note that in this embodiment, x_{o} is an affine function of the inputs h_{i} to the block, and f(x_{o}, ρ_{o}) is a potentially parametric nonlinear mapping from x_{o} to the output of the node, h_{o}. The feedforward neural network (with linear input and output layers, and a single hidden layer) for which the expository derivations are presented herein is one of the most commonly adopted nonlinear approximators to date. For the k^{th} output unit of this neural network, the output of the node is the output of the nonlinear approximator model. For consistency of the notation with that used in FIG. 5A, the output of the k^{th} output unit is denoted as p_{k}. In this embodiment, it is also assumed that the activation function for this output unit is an identity function, i.e., f(x_{k}, ρ_{k}) = x_{k}. The k^{th} output unit may be described as:
$$p_k = x_k \qquad (5)$$

$$x_k = \sum_{j} w_{jk} h_j + b_k$$
where h_{j} is the output of the j^{th} hidden unit, w_{jk} is the weight from the j^{th} hidden unit to the k^{th} output unit, and b_{k} is the bias term for the summation at the k^{th} output unit. Utilizing the same fundamental building block of FIG. 6 for the hidden units of the single hidden layer, the output of the j^{th} hidden unit, h_{j}, may be described as:
$$h_j = f(x_j, \rho_j) \qquad (6)$$

$$x_j = \sum_{i} w_{ij} u_i + b_j$$
where x_{j} is the input to the nonlinear activation function in the j^{th} hidden unit, w_{ij} is the weight from input u_{i} to the j^{th} hidden unit, b_{j} is the bias of the j^{th} hidden unit, and f(x_{j}, ρ_{j}) is a nonlinear (potentially parametric, with parameter vector ρ_{j}) activation function. Acceptable activation functions include, but are not limited to, sigmoidal ("s-shaped") functions such as $f(x_j) = \frac{1}{1 + e^{-x_j}},$
which ranges from 0 to 1, or f(x_j) = tanh(x_j), which ranges from −1 to 1. Note that the input layer in this simplified example is assumed to be an identity unit, and hence the inputs to the hidden units are the inputs to the neural network. In general, however, the input layer may admit the structure of FIG. 6, and/or the neural network may include additional inputs that are obtained by applying various signal processing operations to the inputs of the overall PUNDA model (e.g., tap-delayed samples of an input, or linearly filtered versions of the input).  [0119]A constrained optimization problem for the training of the expository nonlinear approximator block described earlier may be stated in the following form:

$$\min_{\Phi} \sum_{d} \sum_{k} \left(t_{kd} - y_{kd}\right)^{2}$$

such that

$$L_{md} \le G_{m}\!\left(\Phi, u_{d}, y_{d}, \frac{\partial y_{kd}}{\partial u_{id}}, \frac{\partial^{2} y_{kd}}{\partial u_{ld}\,\partial u_{id}}, \dots\right) \le U_{md} \qquad (7)$$
where the decision vector Φ includes the parameter approximator network's weights and biases, as well as any potential parameter in the MIMO difference equation block that is not designated as an output of the parameter approximation block. Note that d indexes the dataset (which in some embodiments may include synthetic data points, used for example in extrapolation training), t_{kd} is the target output for the PUNDA model, and y_{kd} is the predicted output of the combined model, computed using the architecture of the PUNDA model depicted in FIG. 5A. Also note that the sum-squared-error objective is minimized while simultaneously satisfying a set of constraints, which may include constraints at each data point in the dataset or constraints over the entire input range. Other objective functions, including but not limited to the log of the absolute error, may be used as appropriate.  [0120]Constraints used during training may include, but are not limited to, functions of any or all of the following: the parameter approximator inputs, the parameter approximator outputs, the parameter approximator parameters (e.g., weights and biases), the PUNDA model inputs and/or outputs, and any number of derivatives of any order of the PUNDA model outputs with respect to the PUNDA model inputs.
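To make the training formulation concrete, the following minimal sketch (hypothetical names; plain Python, not from the patent) evaluates the expository network of Eqs. (5)-(6) — a linear input layer, one sigmoidal hidden layer, and identity output units — and computes the sum-squared-error objective of Eq. (7) for a single data point. The constraint functions G_m are omitted here; a solver would check them alongside this objective.

```python
import math

def sigmoid(x):
    # Sigmoidal ("s-shaped") activation, ranging from 0 to 1
    return 1.0 / (1.0 + math.exp(-x))

def forward(u, W_in, b_hid, W_out, b_out):
    """Eqs. (5)-(6): hidden units h_j = f(sum_i w_ij*u_i + b_j),
    then identity output units p_k = sum_j w_jk*h_j + b_k."""
    h = [sigmoid(sum(w * ui for w, ui in zip(col, u)) + b)
         for col, b in zip(W_in, b_hid)]
    return [sum(w * hj for w, hj in zip(row, h)) + b
            for row, b in zip(W_out, b_out)]

def sse(targets, predictions):
    # Sum-squared-error objective of Eq. (7), for one data point d
    return sum((t - y) ** 2 for t, y in zip(targets, predictions))
```

In a full training loop, this objective would be summed over the dataset index d and minimized over the decision vector Φ (the weights and biases) subject to the constraints of Eq. (7).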
 [0000]A Preferred Methodology for Imposing Constraints
[0121]Successful training of the nonlinear model (e.g., neural network) in the combined PUNDA model may require that certain constraints be satisfied globally (independent of the available data for the training of the model). In some cases it may be beneficial to replace the exact constraints with appropriately constructed upper/lower bounds. Interval arithmetic, described below, provides a preferred methodology for deriving such bounds under an arbitrary order of the dynamic parameterized model 504 (e.g., the MIMO difference equation block) and an arbitrary architecture of the nonlinear approximator block 502 (e.g., the neural network block).
 [0122]Given the range of the applications in which the disclosed parametric universal nonlinear dynamics approximator may be deployed, it is crucial to develop a methodology by which the constraints may be imposed and additionally verified at any node in the parameter approximator block in the PUNDA model of
FIG. 5A. Interval arithmetic may be used to develop a preferred methodology for systematically computing such constraints. The methodology disclosed herein permits the computation of guaranteed interval bounds on a composite function without having to derive these bounds explicitly for every composite model form that the function may represent. It is in general considered practically impossible to develop a generally applicable software tool for the modeling of complex nonlinear dynamical systems without a generally applicable constraining methodology, given the diversity of the applications for which such models must be developed. The approach to constraint determination disclosed herein may be applied systematically to any input-output model which can be represented as a flowgraph of other more elementary calculations, including both cyclic and acyclic graphs. Potential applications of the techniques described herein include, but are not limited to, the process industries, food, pulp and paper, power generation, biological systems, and financial systems, among others. For more detailed information regarding interval analysis and arithmetic, please see R. Moore, Interval Analysis, Prentice-Hall, 1966.  [0000]Interval Arithmetic
[0123]Interval arithmetic is an established numerical computation technique in which the evaluation of numerical values is replaced by the evaluation of equivalent numerical ranges. Interval analysis has broad application to problems for which it is necessary to understand how errors, uncertainties, or predefined ranges of parameters propagate through a set of numerical calculations; for example, see R. Hammer, M. Hocks, U. Kulisch, and D. Ratz, C++ Toolbox for Verified Computing, Springer-Verlag, 1995.
 [0124]In one embodiment of the present invention, interval arithmetic is used to compute global bounds on model properties used within a model training formulation. These properties include, but are not limited to: output values, inputoutput gains, and higherorder inputoutput derivatives.
[0125]In an interval arithmetic framework, each real-valued x is replaced by an equivalent real-valued interval $[\underline{x}, \overline{x}] = \{x \in \mathbb{R} \mid \underline{x} \le x \le \overline{x}\}$, represented by the shorthand notation [x]. The notation $\underline{x}$ refers to the minimum value of x over the interval and $\overline{x}$ refers to the maximum value of x over the interval.
[0126]Given any multidimensional function z = f(x, y, …), the interval equivalent [z] = f([x], [y], …) is sought; specifically, the minimum and maximum values that the function can assume given any tuple of values {x ∈ [x], y ∈ [y], …} within the specified domain. If the function is monotonic, these extremal values are found at the end points of the intervals. For example, if the function is monotonically increasing in each dimension, $f([x], [y], \dots) = [f(\underline{x}, \underline{y}, \dots),\ f(\overline{x}, \overline{y}, \dots)]$. In general, the extremal values may occur anywhere in the interval, and the exact interval cannot necessarily be inferred from samples of the original function.
 [0127]Consider the interval equivalent of the four basic binary arithmetic operators ∘ ε {+,−,×,÷}. The interval equivalent of each of these operators is:
$$[x] + [y] = [\underline{x} + \underline{y},\ \overline{x} + \overline{y}]$$

$$[x] - [y] = [\underline{x} - \overline{y},\ \overline{x} - \underline{y}]$$

$$[x] \times [y] = \left[\min\{\underline{x}\,\underline{y},\ \underline{x}\,\overline{y},\ \overline{x}\,\underline{y},\ \overline{x}\,\overline{y}\},\ \max\{\underline{x}\,\underline{y},\ \underline{x}\,\overline{y},\ \overline{x}\,\underline{y},\ \overline{x}\,\overline{y}\}\right]$$

$$[x] \div [y] = [x] \times \left[1/\overline{y},\ 1/\underline{y}\right], \quad 0 \notin [y] \qquad (8)$$

[0128]Scaling and biasing by a constant, as well as a change of sign, are specializations of these rules:
$$-[x] = [-\overline{x},\ -\underline{x}]$$

$$[x] + b = [\underline{x} + b,\ \overline{x} + b]$$

$$a \times [x] = \begin{cases} [a\underline{x},\ a\overline{x}], & \text{if } a \ge 0 \\ [a\overline{x},\ a\underline{x}], & \text{if } a \le 0 \end{cases} \qquad (9)$$

[0129]Similar analysis can be repeated for elementary functions such as sin( ), tan( ), exp( ), and so forth.
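A direct transcription of the interval operators of Eqs. (8)-(9) is straightforward. The sketch below (hypothetical names, not from the patent; intervals represented as (lo, hi) tuples) is one minimal way to implement them:

```python
def iadd(x, y):
    # [x] + [y] = [x_lo + y_lo, x_hi + y_hi]
    return (x[0] + y[0], x[1] + y[1])

def isub(x, y):
    # [x] - [y] = [x_lo - y_hi, x_hi - y_lo]
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    # [x] * [y]: extremes among the four endpoint products
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def idiv(x, y):
    # [x] / [y] = [x] * [1/y_hi, 1/y_lo], defined only when 0 not in [y]
    if y[0] <= 0.0 <= y[1]:
        raise ZeroDivisionError("0 in divisor interval")
    return imul(x, (1.0 / y[1], 1.0 / y[0]))

def iscale(a, x):
    # a * [x]: the Eq. (9) specialization for a constant scale factor
    return (a * x[0], a * x[1]) if a >= 0 else (a * x[1], a * x[0])
```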
 [0130]A key aspect of interval arithmetic is the computation of bounds on any function ƒ( ) which is defined by an expression of other elementary operations. This can be accomplished by replacing each elementary operation in the expression with its interval equivalent. The resulting interval function is called an interval extension of ƒ(.), and is denoted by f_{[ ]}([.]), which has the property
$$\forall\, x \in [x],\ y \in [y],\ \dots:\quad f(x, y, \dots) \in f_{[\,]}([x], [y], \dots) \qquad (10)$$

[0131]This can be written as the set inclusion
$f([x]) \subseteq f_{[\,]}([x]).$  [0132]While the bounds of the interval extension are not exact, they are guaranteed to contain the actual interval. The degree of overbounding of the interval extension is dependent on the form of the expression which defines ƒ( ).
 [0000]Interval Arithmetic for Models
 [0133]A primary benefit of using interval extension is that it provides a computational mechanism for computing “auto bounds.” In other words, it permits the computation of guaranteed interval bounds on a composite function without having to derive these bounds explicitly for every composite form. As noted above, this idea may be applied systematically to any inputoutput model that can be represented as a flowgraph of other more elementary calculations, including both cyclic and acyclic graphs.
 [0134]For example, in one embodiment, the PUNDA model 506 shown in
FIG. 5A may be composed of a neural network (the nonlinear approximator 502) and a set of MIMO difference equations (the dynamic parameterized model 504). Each of these components may in turn be composed of other elementary operations. An interval extension of the model relationships can then be defined by systematic substitution of interval operations. As a simple example, consider the simple neural network structure shown in FIG. 7A, although it should be noted that the neural network of FIG. 7A is exemplary only, and is not intended to denote an actual neural network. The equations that define the numerical calculations associated with a single node, illustrated in FIG. 7B, are:

$$x_o = \sum_{i=1}^{N} w_i h_i + b, \qquad h_o = f(x_o) \qquad (11)$$

[0135]The interval extension of the summing junction's calculations can be summarized as follows:
$$\left[\underline{x_o},\ \overline{x_o}\right] = \left[\sum_{i=1}^{N} w_i h_i + b\right]$$

$$\underline{x_o} = \sum_{i=1}^{N} \begin{cases} w_i\,\underline{h_i}, & \text{if } w_i \ge 0 \\ w_i\,\overline{h_i}, & \text{if } w_i \le 0 \end{cases} \;+\; b$$

$$\overline{x_o} = \sum_{i=1}^{N} \begin{cases} w_i\,\overline{h_i}, & \text{if } w_i \ge 0 \\ w_i\,\underline{h_i}, & \text{if } w_i \le 0 \end{cases} \;+\; b \qquad (12)$$

[0136]In this example, it is assumed that the weight w_i and bias b parameters are constant values, not intervals. Assuming that the nonlinear activation function is monotonically increasing, the interval extension may be computed as:
$$\left[\underline{h_o},\ \overline{h_o}\right] = f_{[\,]}\left(\left[\underline{x_o},\ \overline{x_o}\right]\right)$$

$$\underline{h_o} = f(\underline{x_o})$$

$$\overline{h_o} = f(\overline{x_o}) \qquad (13)$$

[0137]These interval expressions can be composed such that the output interval $[\underline{h_o}, \overline{h_o}]$ of one node can be used as the input interval $[\underline{h_i}, \overline{h_i}]$ for a subsequent node in the flowgraph. A similar derivation can be performed for a simple difference equation, as follows:
$$y_k = \sum_{i=1}^{N_y} A_i y_{k-i} + \sum_{i=0}^{N_u} B_i u_{k-i} = A_1 y_{k-1} + A_2 y_{k-2} + \dots + B_0 u_k + B_1 u_{k-1} + \dots$$

[0138]This is a simplification of equations that in some embodiments may appear in the dynamic parameterized model block 504, e.g., the MIMO block 504, of the PUNDA model 506. In describing the interval extension of this recurrent equation, it is assumed that the parametric values A_i and B_i are not constants; rather, they are also intervals. This allows correct composition of this model with the previously described neural network structure.
$$\left[\underline{y_k},\ \overline{y_k}\right] = \left[\underline{A_1}, \overline{A_1}\right]\left[\underline{y_{k-1}}, \overline{y_{k-1}}\right] + \left[\underline{A_2}, \overline{A_2}\right]\left[\underline{y_{k-2}}, \overline{y_{k-2}}\right] + \dots + \left[\underline{B_0}, \overline{B_0}\right]\left[\underline{u_k}, \overline{u_k}\right] + \left[\underline{B_1}, \overline{B_1}\right]\left[\underline{u_{k-1}}, \overline{u_{k-1}}\right] + \dots$$

$$\underline{y_k} = \min\{\underline{A_1}\,\underline{y_{k-1}},\ \underline{A_1}\,\overline{y_{k-1}},\ \overline{A_1}\,\underline{y_{k-1}},\ \overline{A_1}\,\overline{y_{k-1}}\} + \min\{\underline{A_2}\,\underline{y_{k-2}},\ \underline{A_2}\,\overline{y_{k-2}},\ \overline{A_2}\,\underline{y_{k-2}},\ \overline{A_2}\,\overline{y_{k-2}}\} + \dots + \min\{\underline{B_0}\,\underline{u_k},\ \underline{B_0}\,\overline{u_k},\ \overline{B_0}\,\underline{u_k},\ \overline{B_0}\,\overline{u_k}\} + \min\{\underline{B_1}\,\underline{u_{k-1}},\ \underline{B_1}\,\overline{u_{k-1}},\ \overline{B_1}\,\underline{u_{k-1}},\ \overline{B_1}\,\overline{u_{k-1}}\} + \dots$$

$$\overline{y_k} = \max\{\underline{A_1}\,\underline{y_{k-1}},\ \underline{A_1}\,\overline{y_{k-1}},\ \overline{A_1}\,\underline{y_{k-1}},\ \overline{A_1}\,\overline{y_{k-1}}\} + \max\{\underline{A_2}\,\underline{y_{k-2}},\ \underline{A_2}\,\overline{y_{k-2}},\ \overline{A_2}\,\underline{y_{k-2}},\ \overline{A_2}\,\overline{y_{k-2}}\} + \dots + \max\{\underline{B_0}\,\underline{u_k},\ \underline{B_0}\,\overline{u_k},\ \overline{B_0}\,\underline{u_k},\ \overline{B_0}\,\overline{u_k}\} + \max\{\underline{B_1}\,\underline{u_{k-1}},\ \underline{B_1}\,\overline{u_{k-1}},\ \overline{B_1}\,\underline{u_{k-1}},\ \overline{B_1}\,\overline{u_{k-1}}\} + \dots \qquad (14)$$
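The interval propagation of Eqs. (12)-(14) can be sketched as follows (hypothetical names, not from the patent; intervals as (lo, hi) tuples). The node uses constant weights and a monotonically increasing activation, as assumed above; the difference-equation step admits interval coefficients so that it composes correctly with interval outputs of the network:

```python
def imul(x, y):
    # Interval product: extremes of the four endpoint products (Eq. 8)
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def node_interval(h_ints, w, b, f):
    """Interval extension of one node (Eqs. 12-13): affine sum with
    constant weights/bias, then a monotonically increasing activation f."""
    lo = sum((wi * h[0] if wi >= 0 else wi * h[1]) for wi, h in zip(w, h_ints)) + b
    hi = sum((wi * h[1] if wi >= 0 else wi * h[0]) for wi, h in zip(w, h_ints)) + b
    return (f(lo), f(hi))

def step_interval(A_ints, y_ints, B_ints, u_ints):
    """One step of the difference equation (Eq. 14) with interval
    coefficients A_i, B_i and interval past outputs/inputs."""
    lo = hi = 0.0
    for c, v in list(zip(A_ints, y_ints)) + list(zip(B_ints, u_ints)):
        p = imul(c, v)
        lo += p[0]
        hi += p[1]
    return (lo, hi)
```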
Auto-Differentiation and Interval Arithmetic  [0139]In addition to computing functional bounds on a model, interval arithmetic may be used to compute bounds on input/output gains as well. This may be accomplished by combining interval arithmetic with auto-differentiation techniques (again, for more information, please see R. Hammer, M. Hocks, U. Kulisch, and D. Ratz, C++ Toolbox for Verified Computing, Springer-Verlag, 1995). Auto-differentiation is an application of the chain rule that allows the derivative of a complex function to be decomposed into a sequence of elementary derivative operations. Consider, for example, the exemplary neural network illustrated in
FIG. 7A. In order to compute the gain term dy_1/du_2, the following procedure may be performed: Let the variable θ be the input value with respect to which the output derivative is taken. Initialize the boundary condition correspondingly:

$$\frac{du_i}{d\theta} = \begin{cases} 1 & \text{if } i = 2 \\ 0 & \text{if } i \ne 2 \end{cases} \qquad (15)$$
$$\frac{dx_o}{d\theta} = \sum_{i=1}^{N} w_i \frac{dh_i}{d\theta}, \qquad \frac{dh_o}{d\theta} = \frac{df(x_o)}{dx_o}\,\frac{dx_o}{d\theta} \qquad (16)$$

[0141]Finally, note that the propagated output quantity dy_1/dθ is, by construction, the desired gain element dy_1/du_2.
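The forward-mode procedure of Eqs. (15)-(16) can be sketched for the expository network (hypothetical names, not from the patent; one sigmoidal hidden layer, identity output units). The derivative seed of Eq. (15) is propagated alongside the ordinary forward evaluation:

```python
import math

def gain(u, wrt, W_in, b_hid, W_out):
    """Forward-mode chain rule (Eqs. 15-16): propagate du_i/dtheta
    alongside the ordinary evaluation to obtain dy_k/du_wrt."""
    # Boundary condition of Eq. (15): seed the chosen input with 1
    du = [1.0 if i == wrt else 0.0 for i in range(len(u))]
    dh = []
    for col, b in zip(W_in, b_hid):
        x = sum(w * ui for w, ui in zip(col, u)) + b
        dx = sum(w * dui for w, dui in zip(col, du))  # Eq. (16), first part
        s = 1.0 / (1.0 + math.exp(-x))                # sigmoid value
        dh.append(s * (1.0 - s) * dx)                 # Eq. (16), second part
    # Identity output units: dy_k/dtheta is a weighted sum of dh_j
    return [sum(w * dhj for w, dhj in zip(row, dh)) for row in W_out]
```

For the one-hidden-unit case with zero input, the sigmoid slope is 0.25, which the sketch reproduces.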
 [0142]Thus, the computation of any inputoutput gain term may be reduced to a flowgraph operation. As such, the previously described interval extension techniques may be applied, and bounds of these gains computed for inclusion in a training problem. First, for the previous example the input boundary conditions may be augmented thus:
$$\left[\frac{du_i}{d\theta}\right] = \begin{cases} [1,\ 1] & \text{if } i = 2 \\ [0,\ 0] & \text{if } i \ne 2 \end{cases} \qquad (17)$$
and apply interval extension to the recursive gain operations, resulting in:

$$\left[\frac{dx_o}{d\theta}\right] = \left[\sum_{i=1}^{N} w_i \frac{dh_i}{d\theta}\right]$$

$$\underline{\frac{dx_o}{d\theta}} = \sum_{i=1}^{N} \begin{cases} w_i\,\underline{\dfrac{dh_i}{d\theta}}, & \text{if } w_i \ge 0 \\[6pt] w_i\,\overline{\dfrac{dh_i}{d\theta}}, & \text{if } w_i \le 0 \end{cases}$$

$$\overline{\frac{dx_o}{d\theta}} = \sum_{i=1}^{N} \begin{cases} w_i\,\overline{\dfrac{dh_i}{d\theta}}, & \text{if } w_i \ge 0 \\[6pt] w_i\,\underline{\dfrac{dh_i}{d\theta}}, & \text{if } w_i \le 0 \end{cases} \qquad (18)$$

[0143]Note that
$\left[\frac{df\left({x}_{o}\right)}{d{x}_{o}}\right]$
represents the interval of possible first derivatives of the activation function over all possible input values. This range may be computed during the forward pass using interval arithmetic techniques, starting from a global range of input values $[\underline{x_o}, \overline{x_o}]$ that is preselected to be some infinite or finite range of the input space. It follows that:

$$\underline{\frac{dh_o}{d\theta}} = \min\left\{\underline{\frac{df(x_o)}{dx_o}}\,\underline{\frac{dx_o}{d\theta}},\ \underline{\frac{df(x_o)}{dx_o}}\,\overline{\frac{dx_o}{d\theta}},\ \overline{\frac{df(x_o)}{dx_o}}\,\underline{\frac{dx_o}{d\theta}},\ \overline{\frac{df(x_o)}{dx_o}}\,\overline{\frac{dx_o}{d\theta}}\right\}$$

$$\overline{\frac{dh_o}{d\theta}} = \max\left\{\underline{\frac{df(x_o)}{dx_o}}\,\underline{\frac{dx_o}{d\theta}},\ \underline{\frac{df(x_o)}{dx_o}}\,\overline{\frac{dx_o}{d\theta}},\ \overline{\frac{df(x_o)}{dx_o}}\,\underline{\frac{dx_o}{d\theta}},\ \overline{\frac{df(x_o)}{dx_o}}\,\overline{\frac{dx_o}{d\theta}}\right\} \qquad (19)$$
Again, the output values may be interpreted as estimates of the overall gain bounds:

$$\left[\underline{\frac{dy_1}{d\theta}},\ \overline{\frac{dy_1}{d\theta}}\right]$$
which are guaranteed to contain the actual gains over the selected input space $[\underline{x_o}, \overline{x_o}]$ by virtue of the following guaranteed inequality:

$$\underline{\frac{dy_1}{d\theta}} \le \left.\frac{dy_1}{du_2}\right|_{x \in [\underline{x_o}, \overline{x_o}]} \le \overline{\frac{dy_1}{d\theta}} \qquad (20)$$

[0144]To ensure that the actual model gains comply with the operational constraint [L, U] at all required input values, we need to ensure that the following inequality:
$$L \le \left.\frac{dy_1}{du_2}\right|_{x \in [\underline{x_o}, \overline{x_o}]} \le U \qquad (21)$$
is satisfied for all required values of x. To accomplish this, it is sufficient to make sure that the gain bound estimates are within the range of the operational constraints, [L, U]:

$$L \le \underline{\frac{dy_1}{d\theta}} \le \overline{\frac{dy_1}{d\theta}} \le U \qquad (22)$$
While satisfaction of the inequality in Eq. (22) will ensure that the actual gains of the model comply with the desired operational constraints, the overestimation inherent in Eq. (20) may result in the actual gains being restricted to a narrow subset of [L,U].
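Combining the two techniques, the per-node gain-bound propagation of Eqs. (18)-(19) and the sufficiency check of Eq. (22) can be sketched as follows (hypothetical names, not from the patent; intervals as (lo, hi) tuples):

```python
def imul(x, y):
    # Interval product: extremes of the four endpoint products (Eq. 8)
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def gain_interval(dh_ints, w, df_int):
    """Interval gain through one node (Eqs. 18-19): bound dx_o/dtheta
    from the weighted input-gain intervals, then multiply by the
    interval of activation slopes [df(x_o)/dx_o]."""
    dx_lo = sum((wi * d[0] if wi >= 0 else wi * d[1]) for wi, d in zip(w, dh_ints))
    dx_hi = sum((wi * d[1] if wi >= 0 else wi * d[0]) for wi, d in zip(w, dh_ints))
    return imul(df_int, (dx_lo, dx_hi))

def satisfies_gain_constraint(g_int, L, U):
    # Eq. (22): sufficient condition that the true gains lie in [L, U]
    return L <= g_int[0] and g_int[1] <= U
```

For a sigmoidal activation, the slope interval df_int over an unbounded input range would be (0, 0.25), illustrating how the activation bound enters the gain bound.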
Interval Arithmetic and Input-Region Partitioning  [0145]Bounds on model outputs or model gains computed using interval arithmetic are, by their nature, conservative bound estimates. Tighter bounds can be computed using verified global optimization techniques. In the above description of using interval arithmetic methods to compute bounds on model outputs and gains, the entire operating region (or a suitable superset of that region) was used as the input interval. This input interval is, in general, a multidimensional hyperrectangle. Tighter bounds can be achieved by starting with a smaller input interval. In the limiting case, a point input region results in the exact computation of the output or gain at that single point input. This observation gives rise to a natural divide-and-conquer strategy for determining minimum and maximum values for outputs and gains of the model; see R. Hammer, M. Hocks, U. Kulisch, and D. Ratz, C++ Toolbox for Verified Computing, Springer-Verlag, 1995, and E. Hansen, Global Optimization Using Interval Analysis, Marcel Dekker, Inc., New York, 1992. As described later, this technique may be used during model training or, in the preferred embodiment, it can be performed as part of a post-training verification step.
[0146]We describe specifically how to search for the minimum value of a model output or gain, denoted as f, over a desired global input region, and note that only slight modification is needed to search for the maximum value. The search begins with: (1) a single hyperrectangle representing the global input region, and (2) a global upper bound on the minimum value of f, denoted as $\tilde{f}$. The initial value of $\tilde{f}$ may be selected as the minimum of a set of point-evaluations of f. The input region is recursively partitioned by selecting an existing hyperrectangle, repartitioning it along a selected dimension, and replacing it with the two new smaller hyperrectangles. The interval-based computation of [f] is performed for the two new hyperrectangles as described earlier. A hyperrectangle in the working set can be discarded if $\tilde{f} < \underline{f}$ for that hyperrectangle. In addition, the global upper bound $\tilde{f}$ may be reduced if a hyperrectangle is constructed for which $\overline{f} < \tilde{f}$, or if a point-evaluation of f results in $f < \tilde{f}$. Many heuristics have been described in the literature for tuning the performance of this basic branch-and-bound search strategy. See, for example, R. Patil, Efficient Verified Global Optimization Using Interval Arithmetic, Dissertation for the Degree of Doctor of Philosophy, New Mexico State University, 1996.
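The branch-and-bound search of paragraph [0146] can be sketched in one dimension as follows (hypothetical names, not from the patent). The caller supplies a point evaluator f and an interval extension f_int; a box is discarded when its interval lower bound exceeds the incumbent upper bound on the minimum:

```python
def fmin_verified(f, f_int, box, tol=1e-6):
    """Branch-and-bound search for the minimum of f over a 1-D interval,
    using an interval bound f_int and midpoint evaluations; a box is
    discarded when its interval lower bound exceeds the incumbent."""
    # Incumbent upper bound on the minimum, from a few point evaluations
    best = min(f(box[0]), f(box[1]), f(0.5 * (box[0] + box[1])))
    work = [box]
    while work:
        lo, hi = work.pop()
        blo, _ = f_int((lo, hi))
        if blo > best:               # cannot contain the minimum: discard
            continue
        mid = 0.5 * (lo + hi)
        best = min(best, f(mid))     # a point evaluation may lower the bound
        if hi - lo > tol:            # bisect and keep both halves
            work.append((lo, mid))
            work.append((mid, hi))
    return best
```

For example, with f(x) = x² and the natural interval extension of the square, the search over [−1, 2] converges to the verified minimum 0 at x = 0.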
 [0000]
FIG. 8 —Training a Model of a Nonlinear Process  [0147]
FIG. 8 is a high level flowchart of a method for training a model of a nonlinear process, such as the PUNDA model described herein, according to one embodiment. It should be noted, however, that various embodiments of the training method described may be applied to training other types of nonlinear models as well. It should also be noted that in various embodiments, some of the method elements described may be performed concurrently, in a different order than shown, or omitted. Additional method elements may also be performed as desired. The method below is described for an embodiment of the PUNDA model using a neural network and a set of MIMO difference equations, although it should be noted that the method is broadly applicable to other types of PUNDA models, and to other types of nonlinear models in general.  [0148]As
FIG. 8 shows, in 802, process inputs/outputs (I/O), i.e., I/O parameters, to be included in the model may be identified. Examples of I/O parameters may include material inputs and outputs, conditions, such as temperature and pressure, power, costs, and so forth. This identification of process I/O may be accomplished in a variety of different ways. For example, in one embodiment, expert knowledge may be used to determine or otherwise identify the process inputs and outputs. As another example, in one embodiment, the process I/O may be determined or identified programmatically through systematic search algorithms, such as correlation analysis. Other approaches or techniques for identifying the process inputs and outputs are also contemplated.  [0149]In 804, data for the process input(s)/output(s) may be collected. For example, the data may be historical data available from plant normal operation, e.g., from plant operation logs, and/or test data. Alternatively, in some embodiments, all or part of the data may be generated from other models, assembled or averaged from multiple sources, etc. In yet another embodiment, the data may be collected substantially in real time from an operating process, e.g., from an online source.
 [0150]In 806, one or more signal processing operations may optionally be performed on the data. For example, the signal processing operations may include filtering the data to reduce noise contamination in the data, removing outlier data from the data set (i.e., anomalous data points), data compression, variable transformation, and normalization, among others. Thus, the collected data from 804 may be preprocessed or otherwise manipulated to put the data into a form suitable for use in training the model.
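As an illustration of the optional signal processing in 806, the following sketch applies a simple z-score outlier filter followed by zero-mean, unit-variance normalization (hypothetical names and thresholds, not from the patent; this is only one of many reasonable preprocessing choices):

```python
def preprocess(series, z_thresh=3.0):
    """Drop points farther than z_thresh standard deviations from the
    mean (outlier removal), then normalize the survivors to zero mean
    and unit variance. Assumes the series is non-degenerate."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    std = var ** 0.5 or 1.0
    kept = [x for x in series if abs(x - mean) <= z_thresh * std]
    # Renormalize using statistics of the cleaned data
    m2 = sum(kept) / len(kept)
    v2 = sum((x - m2) ** 2 for x in kept) / len(kept)
    s2 = v2 ** 0.5 or 1.0
    return [(x - m2) / s2 for x in kept]
```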
 [0151]In 808, prior knowledge about the process may optionally be assembled or gathered. For example, the prior knowledge may include operator knowledge regarding the sign of a particular gain, or a residence time in the system. As another example, the prior knowledge may include more systematic information, such as, for example, a partial or complete first principles model of the process, e.g., in the form of a set of nonlinear differential or partial differential equations. Well known methodologies exist to determine or extract constraints, such as derivatives of the outputs with respect to inputs (commonly referred to as gains), from first principles models or information.
 [0152]In 810, the prior knowledge of 808 may be processed to determine or create the constraints for the training problem. For example, commercially available software may be used to derive analytical expressions for the first or higher order derivatives of the outputs with respect to the inputs, including these derivatives in the constraints. In other embodiments, the processing may also include sophisticated checks on the consistency of the prior knowledge.
 [0153]In 812, an order for the MIMO difference equations may be determined. In other words, the order of the equations comprised in the parameterized dynamic model 504 may be determined. For example, in one embodiment, the order may be determined by an expert, i.e., one or more human experts, or by an expert system. In another embodiment, the order may be determined as a result of a systematic optimization problem, in which case the determination of the order of the model may be performed simultaneously or concurrently with the training of the model.
 [0154]In 814, an optimization problem may be formulated in which model parameters are or include decision variables. Equation 7 provides an example of a mathematical programming formulation, where an objective function operates to minimize model errors subject to a set of constraints. This mathematical programming formulation may, in one embodiment, be determined by transforming or recasting the prior knowledge into a mathematical description suitable for a NLP problem. The constraint set may include terms computed on a pointbypoint basis over the set of data points. The constraint set may include aggregations of pointbypoint constraints. The constraint set may also include dataindependent bounds on constraint values, which in the preferred embodiment may be evaluated using interval arithmetic methods over either a single global input region or using inputregion partitioning.
 [0155]In one embodiment, formulating an optimization problem may include determining or modifying the objective function. For example, the objective function may be input by a user, or may be programmatically determined by the optimization process, e.g., based on user specifications or stored criteria. In some embodiments, a preexisting objective function may be received or retrieved from memory, and may optionally be modified. The objective function may be modified based on user input, and/or programmatically, i.e., automatically by the optimization process.
 [0156]In 816, optimization algorithms may be executed or performed to determine the parameters (i.e., values of the parameters) of the PUNDA model. Note that in various embodiments, any type of commercially available solver (such as, for example, solvers utilizing sequential quadratic programming or any other techniques) may be used for this purpose. In other embodiments, any of various traditional neural network training algorithms, such as back propagation, may be used as desired and appropriate.
 [0157]Finally, in 818, satisfaction of the constraint set may be verified and the value of the objective function may be computed. If the constraints are not satisfied, or the objective value is not sufficiently small, the method elements 810, 812, 814, and 816 of formulating and solving the model optimization task may be repeated. This verification may be performed in a number of ways, including the use of heuristics or the application of systematic analysis techniques, among others. For example, in a preferred embodiment, the data-independent gains of the model may be verified using interval arithmetic over the global input region and/or interval arithmetic with input-region partitioning.
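The gain-verification step can be illustrated with a small interval-arithmetic sketch. The toy gain function g(x) = 1 + x*x and the helper names are hypothetical, but the example shows why input-region partitioning matters: the naive enclosure over the global input region is too loose to certify the bound, while partitioning into sub-intervals succeeds.

```python
def interval_mul(a, b):
    """Interval product [a]*[b]: the true range is bracketed by the
    four endpoint products."""
    prods = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(prods), max(prods))

def gain_interval(x):
    """Interval enclosure of a toy model gain g(x) = 1 + x*x on the
    interval x = (lo, hi). Naive interval arithmetic treats the two x
    factors as independent, so the enclosure may be loose."""
    sq = interval_mul(x, x)
    return (1.0 + sq[0], 1.0 + sq[1])

def verify_gain(lo, hi, g_min, g_max, parts=1):
    """Verify that the gain enclosure stays inside [g_min, g_max] over
    the input region, optionally with input-region partitioning."""
    step = (hi - lo) / parts
    for i in range(parts):
        b = gain_interval((lo + i * step, lo + (i + 1) * step))
        if b[0] < g_min or b[1] > g_max:
            return False
    return True
```

Over x in [-1, 1] the true gain range is [1, 2]; the single-region enclosure is (0, 2) and fails a [0.5, 2] check, whereas four sub-intervals verify it.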
 [0158]Thus, various embodiments of the method of
FIG. 8 may be used to train a nonlinear model, such as a PUNDA model, where the training process results in the determination of model parameters and their values over the operational regime of the process. In other words, because the nonlinear approximator (e.g., the neural network) 502 and the parameterized dynamic model (e.g., the MIMO difference equations) 504 are trained together, the parameter values provided by the nonlinear approximator 502 to the parameterized dynamic model 504 may vary during operation of the process, e.g., as conditions or other operational aspects of the process change. This integrated training of the nonlinear approximator 502 and the parameterized dynamic model 504 thus treats the combined model in a holistic manner: the combined model behavior is considered a global property arising from the confluence of the entire set of model parameters and their values over the operational regime of the process. The training is therefore not limited to some isolated aspect of the system or process behavior, as is typically the case with prior art systems and methods.
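A minimal sketch of this architecture follows, assuming a second-order single-input/single-output difference equation and a one-hidden-layer network; both are deliberate simplifications of the MIMO case described above, and all names and weight shapes are illustrative assumptions.

```python
import numpy as np

def nn_approximator(u, W1, b1, W2, b2):
    """Hypothetical single-hidden-layer network: maps the current process
    input(s) to the coefficients of the difference equation."""
    h = np.tanh(W1 @ u + b1)
    return W2 @ h + b2          # vector of difference-equation coefficients

def punda_step(y_hist, u_hist, u_now, weights):
    """One prediction step of a sketched PUNDA model: the nonlinear
    approximator supplies the parameters of a second-order difference
    equation y[k] = a1*y[k-1] + a2*y[k-2] + c1*u[k-1] + c2*u[k-2], so the
    dynamic model's parameters can vary with operating conditions."""
    a1, a2, c1, c2 = nn_approximator(u_now, *weights)[:4]
    return (a1 * y_hist[-1] + a2 * y_hist[-2]
            + c1 * u_hist[-1] + c2 * u_hist[-2])
```

Because both components appear in one forward pass, training the network weights against prediction error trains the combined model as a whole, as the paragraph describes.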
FIG. 9 —Operation of the PUNDA Model  [0159]
FIG. 9 is a high-level flowchart of a method of operation of the PUNDA model in a control application for a physical process, e.g., a physical plant, according to one embodiment. Thus, in the embodiment described, the PUNDA model couples to the physical process, and also to a controller which operates to manage or control the process based on outputs from the PUNDA model, as illustrated by FIG. 5B. As mentioned earlier, however, the methods presented herein are also contemplated as being broadly applicable in a wide variety of application domains, including both physical and non-physical (e.g., analytical) processes. As noted above, in various embodiments, some of the method elements described may be performed concurrently, in a different order than shown, or omitted. Additional method elements may also be performed as desired.  [0160]In 902, the model may be initialized to a current status of the physical process to be controlled. This initialization may ensure that the PUNDA model and the physical plant are correctly aligned, and thus that the predictions produced by the PUNDA model are relevant to the physical process. In various embodiments, the initialization may be performed by a human expert, by an expert system, or via a systematic methodology of identifying the initial conditions of the model given available current and past measurements from the physical process. Other approaches to initialization are also contemplated.
 [0161]In 904, various attributes or parameters of the combined model and process may be determined or defined, such as, for example, control variable and manipulated variable (CV and MV) target profiles, CV/MV constraint profiles, disturbance variable (DV) profiles, prediction and control horizons, objective function and constraints, and tuning parameters for the controller, among others. In various embodiments, these determinations or definitions may be performed by an operator, programmatically, or a combination of the two. In an embodiment where the determinations are made programmatically, the controller may be a hierarchical controller, where a higher level controller in the control hierarchy decides or determines the desired set points for a lower level controller.
 [0162]In 906, a profile for the MV moves or changes, i.e., a trajectory of the MV values, over the control horizon may be generated, the model's response over the prediction horizon observed, and the deviation from the desired behavior determined. In one embodiment, the MV profiles may be determined by a human operator, although in a preferred embodiment, the MV profiles may be determined programmatically, e.g., by an optimization algorithm or process. The model response to the presumed MV profile may be calculated over the prediction horizon and compared to the desired behavior and constraints. The appropriateness or suitability of the MV profile may be measured or evaluated via the corresponding value or values of the objective function. In other words, values of the manipulated variables are provided to the process model (i.e., the PUNDA model), e.g., to control the model, and the resulting behavior observed. This response is then compared to the desired response, e.g., as quantified by the value of the objective function, as is well known in the art of optimization.
 [0163]Then, in 908, an optimal MV profile may be determined. For example, in a preferred embodiment, method element 906 may be performed iteratively with different MV profiles until a satisfactory predicted system response is obtained. Although this may be performed via trial and error by a human operator, the preferred mode of operation is to use an optimizer to systematically search for the optimal MV profiles, e.g., by systematically seeking those MV moves or changes for which the objective function is improved (e.g. minimized when the objective function reflects the control cost) while respecting constraints. The determined optimal MV profile may be considered or referred to as a decision, and the corresponding model response may be considered or referred to as the predicted response of the process.
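The iterative search of 906-908 can be sketched with exhaustive enumeration over a small MV candidate set. Enumeration is a stand-in for the systematic optimizer the paragraph prefers, and the first-order `model_response` dynamics and candidate grid are illustrative assumptions.

```python
import itertools

def model_response(y0, mv_seq, gain=0.5, decay=0.8):
    """Toy first-order prediction y[k+1] = decay*y[k] + gain*mv[k].
    A real application would run the PUNDA model here."""
    y, traj = y0, []
    for mv in mv_seq:
        y = decay * y + gain * mv
        traj.append(y)
    return traj

def best_mv_profile(y0, target, horizon=3, candidates=(0.0, 0.5, 1.0)):
    """Search over MV move sequences for the profile minimizing the
    squared deviation of the predicted response from the target, i.e.,
    the profile with the best objective-function value."""
    best, best_cost = None, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        traj = model_response(y0, seq)
        cost = sum((y - target) ** 2 for y in traj)   # objective function
        if cost < best_cost:
            best, best_cost = seq, cost
    return best, best_cost
```

The returned sequence plays the role of the "decision", and the corresponding trajectory is the predicted process response.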
 [0164]In 910, information related to or indicating the MV profiles and corresponding model response (e.g., MV profiles and predicted system response) may optionally be displayed and/or logged, as desired. For example, the MV profiles and system response may be displayed in an appropriate user interface, or logged in a database, e.g., for future diagnosis.
 [0165]In 912, a portion or the entirety of the decision (MV) profiles may be transmitted to a distributed control system (DCS) to be applied to the physical system. In one embodiment, final checks or additional processing may be performed by the DCS. For example, the DCS may check to make sure that a decision (e.g., a value or set of values of the manipulated variables) does not fall outside a range, e.g., for safety. If the value(s) is/are found to be outside a valid or safe range, the value(s) may be reset, and/or an alert or alarm may be triggered to call attention to the violation.
 [0166]In 914, the output of the DCS, e.g., the (possibly modified) decision profiles, may be provided as actual input to the physical process, thereby controlling the process behavior, and the input to the physical process (i.e., the output of the DCS) and the actual process response (i.e., the actual process outputs) may be measured. In a preferred embodiment, the information may be fed back to the PUNDA model, where the actual process input/output measurements may be used to improve the estimate of the current status of the process in the model, and to produce a new deviation from the desired system response. In one embodiment, the optimization problem may be modified based on the input to the model. For example, in various embodiments modifying the optimization problem may include modifying one or more of: constraints, the objective function, model parameters, optimization parameters, and optimization data, or any other aspect of the optimization process. The method may then return to method element 902 above, and continue as described above, dynamically monitoring and controlling the process in an ongoing manner, where the method attempts to satisfy the objective function subject to the determined or specified constraints.
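The receding-horizon loop of 902-914 can be sketched as follows, with a deliberate mismatch between model and plant to show the role of feeding measurements back into the model. All dynamics, the MV grid, and the clamp range are toy assumptions.

```python
def model_predict(y, mv):
    """Toy model y[k+1] = 0.8*y[k] + 0.5*mv (stands in for the PUNDA model)."""
    return 0.8 * y + 0.5 * mv

def plant_step(y, mv):
    """Stand-in physical process, deliberately mismatched with the model."""
    return 0.75 * y + 0.55 * mv

def closed_loop(y0, target, steps=20):
    """Receding-horizon skeleton: at each step pick the MV minimizing the
    one-step predicted deviation, clamp it (a DCS-style range check),
    apply it to the plant, and feed the measurement back as the model's
    new state estimate."""
    y = y0
    for _ in range(steps):
        mv = min((0.0, 0.25, 0.5, 0.75, 1.0),
                 key=lambda m: (model_predict(y, m) - target) ** 2)
        mv = min(max(mv, 0.0), 1.0)   # safety clamp before application
        y = plant_step(y, mv)         # measured process response, fed back
    return y
```

Despite the plant-model mismatch, the feedback of measured outputs keeps the loop near the target, which is the point of re-aligning the model each cycle.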
 [0167]As noted above, in one embodiment, the input/output of the process may be used to continue training the PUNDA model online. Alternatively, in other embodiments, the model may be decoupled intermittently for further training, or, a copy of the model may be created and trained offline while the original model continues to operate, and the newly trained version substituted for the original at a specified time or under specified conditions.
 [0168]Thus, various embodiments of the parametric universal nonlinear dynamics approximator, or PUNDA model, described herein may provide a more powerful and flexible model architecture for prediction, optimization, control, and/or simulation applications. Additionally, the interval analysis approach described herein for determining constraints for this and other types of models provides a reliable and computationally tractable method for training such models. In combination, these concepts and techniques may facilitate substantially real time or online operation of prediction, optimization, and/or control systems in any of a wide variety of application domains. Offline modeling, prediction, and/or simulation of nonlinear processes and systems are also facilitated by embodiments of the systems and methods disclosed herein.
 [0169]Various embodiments further include receiving or storing instructions and/or data implemented in accordance with the foregoing description upon a carrier medium. Suitable carrier media include a memory medium as described above, as well as signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as networks and/or a wireless link.
 [0170]Although the system and method of the present invention has been described in connection with the preferred embodiment, it is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be reasonably included within the spirit and scope of the invention as defined by the appended claims.
Claims (30)
 1. A combined model for predictive optimization or control of a nonlinear process, comprising:
a nonlinear approximator; and
a parameterized dynamic model, coupled to the nonlinear approximator, wherein the parameterized dynamic model is operable to model the nonlinear process;
wherein the nonlinear approximator is operable to:
receive one or more process inputs; and
generate one or more parameters for the parameterized dynamic model;
wherein the parameterized dynamic model is operable to:
receive the one or more parameters;
receive the one or more process inputs; and
generate one or more predicted process outputs based on the received one or more parameters and the received one or more process inputs; and
wherein the one or more predicted process outputs are useable to analyze and/or control the nonlinear process.
 2. The combined model of claim 1, wherein the combined model is operable to be trained to model the nonlinear process in an integrated manner, and wherein the nonlinear approximator and the parameterized dynamic model are trained together substantially concurrently.
 3. The combined model of claim 2,
wherein the combined model is operable to be trained to model the nonlinear process in an integrated manner by an optimization process; and
wherein the optimization process is operable to perform an optimization algorithm to determine model parameters for the parameterized dynamic model.
 6. The combined model of claim 3,
wherein the combined model is operable to be coupled to the nonlinear process or a representation of the nonlinear process;
wherein the nonlinear process is operable to receive the one or more process inputs and produce the one or more process outputs;
wherein the optimization process is operable to determine model errors based on the one or more process outputs and the one or more predicted process outputs; and
wherein the optimization process is operable to train the combined model in an iterative manner using the model errors and an optimizer.
 7. The combined model of claim 6, wherein, in training the combined model in an iterative manner using the model errors and an optimizer, the optimization process is operable to:
identify process inputs and outputs (I/O);
collect data for the process I/O;
determine constraints on model behavior from prior knowledge;
formulate an optimization problem;
execute an optimization algorithm to determine model parameters subject to the determined constraints by solving the optimization problem; and
verify the compliance of the model with the specified constraints.
 8. The combined model of claim 7, wherein, in verifying the compliance of the model with the specified constraints, the optimization process is operable to:
use interval arithmetic over the global input region; and/or
use interval arithmetic with input-region partitioning.
 9. The combined model of claim 7, wherein, in executing an optimization algorithm to determine model parameters, the optimization process is operable to:
execute the optimization algorithm to determine an optimal order of the model.
 10. The combined model of claim 7,
wherein the optimization process is further operable to: determine an order of the model; and
wherein, in executing the optimization algorithm to determine model parameters, the optimization process is operable to: execute the optimization algorithm to determine optimal parameters of the model based on the determined order of the model.
 11. The combined model of claim 7, wherein, in formulating the optimization problem, the optimization process is operable to determine or modify an objective function.
 12. The combined model of claim 7, wherein, in solving the optimization problem, the optimization process is operable to solve an objective function subject to the determined constraints.
 13. The combined model of claim 2, wherein, after being trained, the overall behavior of the combined model is consistent with a priori knowledge of the nonlinear process.
 14. The combined model of claim 1, wherein the nonlinear approximator comprises one or more of:
a neural network;
a support vector machine;
a statistical model;
a parametric description of the nonlinear process;
a Fourier series model; and
an empirical model.
 15. The combined model of claim 1, wherein the nonlinear approximator comprises a universal nonlinear approximator.
 16. The combined model of claim 1, wherein the nonlinear approximator includes a feedback loop, and wherein the feedback loop is operable to provide output of the nonlinear approximator from a previous cycle as input to the nonlinear approximator for a current cycle.
 17. The combined model of claim 1, wherein the parameterized dynamic model comprises a multi-input, multi-output (MIMO) dynamic model implemented with a set of difference equations.
 18. The combined model of claim 17, wherein the set of difference equations comprises a set of discrete-time polynomials.
 19. The combined model of claim 17, wherein the one or more process inputs are received from one or more of:
the nonlinear process; and
a representation of the nonlinear process.
 20. The combined model of claim 19, wherein the representation of the nonlinear process comprises one or more of:
a first principles model;
a statistical model;
a parametric description of the nonlinear process;
a Fourier series model;
an empirical model; and
empirical data.
 21. The combined model of claim 1, wherein the combined model is operable to be coupled to the nonlinear process, wherein the combined model is further operable to be coupled to a control process, wherein the control process is operable to:
a) initialize the model to a current status of the nonlinear process;
b) determine parameters of the model, including manipulated variables;
c) generate a profile of manipulated variables;
d) operate the model in accordance with the generated profile of manipulated variables, thereby generating a model response;
e) determine a deviation of the model response from a desired behavior;
f) repeat c)-e) one or more times to determine an optimal profile of manipulated variables;
g) operate the nonlinear process in accordance with the optimal profile of manipulated variables, thereby generating process output;
h) provide the nonlinear process output as input to the model; and
i) repeat a)-h) one or more times to dynamically control the nonlinear process.
 22. A method for training a model of a nonlinear process, the method comprising:
identifying process inputs and outputs (I/O);
collecting data for the process I/O;
determining constraints on model behavior from prior knowledge;
formulating an optimization problem;
executing an optimization algorithm to determine model parameters subject to the determined constraints by solving the optimization problem; and
verifying the compliance of the model with the specified constraints.
 23. The method of claim 22, wherein said verifying the compliance of the model with the specified constraints comprises one or more of:
using interval arithmetic over the global input region; and
using interval arithmetic with input-region partitioning.
 24. The method of claim 22, wherein said executing an optimization algorithm to determine model parameters comprises:
executing the optimization algorithm to determine an optimal order of the model.
 25. The method of claim 22, further comprising:
determining an order of the model;
wherein said executing an optimization algorithm to determine model parameters comprises:
executing the optimization algorithm to determine optimal parameters of the model based on the determined order of the model.
 26. The method of claim 22,
wherein the model comprises a parametric universal nonlinear dynamics approximator (PUNDA) model, comprising:
a nonlinear approximator; and
a parameterized dynamic model, coupled to the nonlinear approximator, wherein the parameterized dynamic model is operable to model the nonlinear process; and
wherein, after said verifying, the overall behavior of the PUNDA model is consistent with the prior knowledge.
 27. The method of claim 22,
wherein formulating the optimization problem comprises: determining an objective function; and
wherein solving the optimization problem comprises: solving the objective function subject to the determined constraints.
 28. A system for training a model of a nonlinear process, the system comprising:
means for identifying process inputs and outputs (I/O);
means for collecting data for the process I/O;
means for determining constraints on model behavior from prior knowledge;
means for formulating an optimization problem;
means for executing an optimization algorithm to determine model parameters subject to the determined constraints by solving the optimization problem; and
means for verifying the compliance of the model with the specified constraints.
 29. A method for controlling a nonlinear process, the method comprising:
a) initializing the model to a current status of the nonlinear process;
b) determining parameters of the model, including manipulated variables;
c) generating a profile of manipulated variables;
d) operating the model in accordance with the generated profile of manipulated variables, thereby generating a model response;
e) determining a deviation of the model response from a desired behavior;
f) repeating c)-e) one or more times to determine an optimal profile of manipulated variables;
g) operating the nonlinear process in accordance with the optimal profile of manipulated variables, thereby generating process output;
h) providing the nonlinear process output as input to the model; and
repeating a)-h) one or more times to dynamically control the nonlinear process.
 30. The method of claim 29, further comprising:
i) modifying the optimization problem based on the input to the model;
wherein said repeating a)-h) comprises repeating a)-i).
 31. The method of claim 30, wherein said modifying the optimization problem comprises modifying one or more of:
constraints;
an objective function;
model parameters;
optimization parameters; and
optimization data.
 32. A system for controlling a nonlinear process, the system comprising:
means for a) initializing the model to a current status of the nonlinear process;
means for b) determining parameters of the model, including manipulated variables;
means for c) generating a profile of manipulated variables;
means for d) operating the model in accordance with the generated profile of manipulated variables, thereby generating a model response;
means for e) determining a deviation of the model response from a desired behavior;
means for f) repeating c)-e) one or more times to determine an optimal profile of manipulated variables;
means for g) operating the nonlinear process in accordance with the optimal profile of manipulated variables, thereby generating process output;
means for h) providing the nonlinear process output as input to the model; and
means for repeating a)-h) one or more times to dynamically control the nonlinear process.
Priority Applications (2)
Application Number  Priority Date  Filing Date  Title
US54576604  2004-02-19  2004-02-19
US10842157 US20050187643A1 (en)  2004-02-19  2004-05-10  Parametric universal nonlinear dynamics approximator and use
Applications Claiming Priority (4)
Application Number  Priority Date  Filing Date  Title
US10842157 US20050187643A1 (en)  2004-02-19  2004-05-10  Parametric universal nonlinear dynamics approximator and use
US12112847 US20080208778A1 (en)  2002-12-09  2008-04-30  Controlling a nonlinear process
US12112750 US8019701B2 (en)  2002-12-09  2008-04-30  Training a model of a nonlinear process
US14659003 US20150185717A1 (en)  2004-02-19  2015-03-16  Parametric universal nonlinear dynamics approximator and use
Related Child Applications (1)
Application Number  Title  Priority Date  Filing Date
US14659003 Continuation US20150185717A1 (en)  2004-02-19  2015-03-16  Parametric universal nonlinear dynamics approximator and use
Publications (1)
Publication Number  Publication Date
US20050187643A1 (en)  2005-08-25
Family
ID=34864540
Family Applications (4)
Application Number  Title  Priority Date  Filing Date
US10842157 Pending US20050187643A1 (en)  2004-02-19  2004-05-10  Parametric universal nonlinear dynamics approximator and use
US12112750 Active 2024-09-25 US8019701B2 (en)  2002-12-09  2008-04-30  Training a model of a nonlinear process
US12112847 Pending US20080208778A1 (en)  2002-12-09  2008-04-30  Controlling a nonlinear process
US14659003 Pending US20150185717A1 (en)  2004-02-19  2015-03-16  Parametric universal nonlinear dynamics approximator and use
Family Applications After (3)
Application Number  Title  Priority Date  Filing Date
US12112750 Active 2024-09-25 US8019701B2 (en)  2002-12-09  2008-04-30  Training a model of a nonlinear process
US12112847 Pending US20080208778A1 (en)  2002-12-09  2008-04-30  Controlling a nonlinear process
US14659003 Pending US20150185717A1 (en)  2004-02-19  2015-03-16  Parametric universal nonlinear dynamics approximator and use
Country Status (1)
Country  Link
US (4)  US20050187643A1 (en)
Cited By (35)
Publication number  Priority date  Publication date  Assignee  Title 

US20040181498A1 (en) *  20030311  20040916  Kothare Simone L.  Constrained system identification for incorporation of a priori knowledge 
US20070130302A1 (en) *  20051114  20070607  Deere & Company, A Delaware Corporation  Managing heterogeneous data streams for remote access 
US20090037003A1 (en) *  20050720  20090205  Jian Wang  Realtime operating optimized method of multiinput and multioutput continuous manufacturing procedure 
US7502715B1 (en) *  20040921  20090310  Asml Netherlands B.V  Observability in metrology measurements 
WO2009051891A1 (en) *  20070820  20090423  Cleveland State University  Extended active disturbance rejection controller 
WO2009115323A1 (en) *  20080318  20090924  Siemens Aktiengesellschaft  Method for modelbased determination of parameters and/or state variables of a piezodriven setting element 
WO2009155260A2 (en) *  20080620  20091223  Honeywell International Inc.  Apparatus and method for model predictive control (mpc) of a nonlinear process 
US20100082120A1 (en) *  20080930  20100401  Rockwell Automation Technologies, Inc.  System and method for optimizing a paper manufacturing process 
CN101403893B (en)  20081117  20100602  杭州电子科技大学  Automatic generation method for dyeing formula 
US20110106277A1 (en) *  20091030  20110505  Rockwell Automation Technologies, Inc.  Integrated optimization and control for production plants 
US7949417B2 (en)  20060922  20110524  Exxonmobil Research And Engineering Company  Model predictive controller solution analysis process 
US20110218782A1 (en) *  20100302  20110908  FisherRosemount Systems, Inc.  Rapid process model identification and generation 
US20110299050A1 (en) *  20080923  20111208  Asml Netherlands B.V.  Lithographic System, Lithographic Method And Device Manufacturing Method 
US20110301723A1 (en) *  20100602  20111208  Honeywell International Inc.  Using model predictive control to optimize variable trajectories and system control 
US20120098481A1 (en) *  20101022  20120426  Nucleus Scientific, Inc.  Apparatus and Method for Rapidly Charging Batteries 
EP2477117A1 (en) *  20110114  20120718  Honeywell International Inc.  Type and range propagation through dataflow models 
US8265854B2 (en)  20080717  20120911  Honeywell International Inc.  Configurable automotive controller 
US8360040B2 (en)  20050818  20130129  Honeywell International Inc.  Engine controller 
CN103234610A (en) *  20130514  20130807  湖南师范大学  Weighing method applicable to truck scale 
USRE44452E1 (en)  20041229  20130827  Honeywell International Inc.  Pedal position and/or pedal change rate for use in control of an engine 
US20130289945A1 (en) *  20120427  20131031  U.S.A. As Represented By The Administrator Of The National Aeronautics And Space Administration  System and Method for Space Utilization Optimization and Visualization 
WO2013163840A1 (en) *  20120504  20131107  浙江大学  Nonlinear parameter varying (npv) model identification method 
US8620461B2 (en)  20090924  20131231  Honeywell International, Inc.  Method and system for updating tuning parameters of a controller 
EP2693279A1 (en) *  20120801  20140205  Fujitsu Limited  Method and program for generating a simulator 
US8670945B2 (en)  20100930  20140311  Honeywell International Inc.  Apparatus and method for product movement planning to support safety monitoring in inventory management systems 
US8682635B2 (en)  20100528  20140325  Rockwell Automation Technologies, Inc.  Optimal selfmaintained energy management system and use 
US20140095129A1 (en) *  20120928  20140403  Fujitsu Limited  Nonlinear term selection apparatus and method, identification system and compensation system 
EP2728425A1 (en)  20121105  20140507  Rockwell Automation Technologies, Inc.  Online integration of modelbased optimization and modelless control 
EP2728426A2 (en)  20121105  20140507  Rockwell Automation Technologies, Inc.  Secure models for modelbased control and optimization 
US20140129491A1 (en) *  20121106  20140508  Rockwell Automation Technologies, Inc.  Empirical modeling with globally enforced general constraints 
EP2778806A1 (en)  20130315  20140917  Rockwell Automation Technologies, Inc.  Deterministic optimization based control system and method for linear and nonlinear systems 
US8874242B2 (en)  20110318  20141028  Rockwell Automation Technologies, Inc.  Graphical language for optimization and use 
US20150100282A1 (en) *  20121003  20150409  Operation Technology, Inc.  Generator dynamic model parameter estimation and tuning using online data and subspace state space model 
US9650934B2 (en)  20111104  20170516  Honeywell spol.s.r.o.  Engine and aftertreatment optimization system 
US9677493B2 (en)  20110919  20170613  Honeywell Spol, S.R.O.  Coordinated engine and emissions control system 
Families Citing this family (34)
Publication number  Priority date  Publication date  Assignee  Title 

US7890412B2 (en) *  20031104  20110215  New York Mercantile Exchange, Inc.  Distributed trading bus architecture 
US8032235B2 (en)  20070628  20111004  Rockwell Automation Technologies, Inc.  Model predictive control system and method for reduction of steady state error 
US8595119B2 (en) *  20080215  20131126  New York Mercantile Exchange, Inc.  Symbolic language for trade matching 
US8306788B2 (en) *  20080611  20121106  Sas Institute Inc.  Computerimplemented systems and methods for executing stochastic discrete event simulations for design of experiments 
US20100082405A1 (en) *  20080930  20100401  Shan Jerry Z  Multiperiodahead Forecasting 
US8229835B2 (en)  20090108  20120724  New York Mercantile Exchange, Inc.  Determination of implied orders in a trade matching system 
US8560283B2 (en) *  20090710  20131015  Emerson Process Management Power And Water Solutions, Inc.  Methods and apparatus to compensate first principlebased simulation models 
US8417618B2 (en) *  20090903  20130409  Chicago Mercantile Exchange Inc.  Utilizing a trigger order with multiple counterparties in implied market trading 
US20110066537A1 (en) *  20090915  20110317  Andrew Milne  Implied volume analyzer 
US8255305B2 (en) *  20090915  20120828  Chicago Mercantile Exchange Inc.  Ratio spreads for contracts of different sizes in implied market trading 
US8868460B2 (en) *  20090915  20141021  Chicago Mercantile Exchange Inc.  Accelerated trade matching using speculative parallel processing 
US8266030B2 (en) *  20090915  20120911  Chicago Mercantile Exchange Inc.  Transformation of a multileg security definition for calculation of implied orders in an electronic trading system 
US8229838B2 (en)  2009-10-14  2012-07-24  Chicago Mercantile Exchange, Inc.  Leg pricer 
US8260732B2 (en) *  2009-11-24  2012-09-04  King Fahd University Of Petroleum And Minerals  Method for identifying Hammerstein models 
US8346693B2 (en) *  2009-11-24  2013-01-01  King Fahd University Of Petroleum And Minerals  Method for Hammerstein modeling of steam generator plant 
US8346711B2 (en) *  2009-11-24  2013-01-01  King Fahd University Of Petroleum And Minerals  Method for identifying multi-input multi-output Hammerstein models 
FR2958911B1 (en) *  2010-04-19  2012-04-27  Snecma  Method and system for monitoring the level of oil contained in a tank of an aircraft engine 
US8423334B2 (en) *  2010-05-18  2013-04-16  Honeywell International Inc.  Distributed model identification 
DE102010017687A1 (en) *  2010-07-01  2012-01-05  Claas Selbstfahrende Erntemaschinen Gmbh  Method for adjusting at least one working member of a self-propelled harvesting machine 
US9141936B2 (en)  2010-08-04  2015-09-22  Sas Institute Inc.  Systems and methods for simulating a resource constrained process 
US8571722B2 (en)  2010-10-22  2013-10-29  Toyota Motor Engineering & Manufacturing North America, Inc.  Method for safely parking vehicle near obstacles 
US20120239169A1 (en) *  2011-03-18  2012-09-20  Rockwell Automation Technologies, Inc.  Transparent models for large scale optimization and control 
US8918352B2 (en) *  2011-05-23  2014-12-23  Microsoft Corporation  Learning processes for single hidden layer neural networks with linear output units 
US8799201B2 (en)  2011-07-25  2014-08-05  Toyota Motor Engineering & Manufacturing North America, Inc.  Method and system for tracking objects 
ES2427645B1 (en)  2011-11-15  2014-09-02  Telefónica, S.A.  Method for managing the performance of multilayer applications deployed in an information technology infrastructure 
US9207653B2 (en) *  2012-09-14  2015-12-08  Horiba Instruments Incorporated  Control system auto-tuning 
US9436179B1 (en)  2013-03-13  2016-09-06  Johnson Controls Technology Company  Systems and methods for energy cost optimization in a building system 
US9235657B1 (en)  2013-03-13  2016-01-12  Johnson Controls Technology Company  System identification and model development 
US9852481B1 (en)  2013-03-13  2017-12-26  Johnson Controls Technology Company  Systems and methods for cascaded model predictive control 
US9208449B2 (en)  2013-03-15  2015-12-08  International Business Machines Corporation  Process model generated using biased process mining 
US9564757B2 (en) *  2013-07-08  2017-02-07  Eaton Corporation  Method and apparatus for optimizing a hybrid power system with respect to long-term characteristics by online optimization, and real-time forecasts, prediction or processing 
US9837970B2 (en) *  2015-03-04  2017-12-05  King Fahd University Of Petroleum And Minerals  Behavioral model and predistorter for modeling and reducing nonlinear effects in power amplifiers 
CN106161125A (en) *  2015-03-31  2016-11-23  Fujitsu Limited  Device and method for estimating nonlinear characteristics 
US20170205809A1 (en)  2016-01-14  2017-07-20  Rockwell Automation Technologies, Inc.  Optimization based controller tuning systems and methods 
Citations (8)
Publication number  Priority date  Publication date  Assignee  Title 

US5377307A (en) *  1992-10-07  1994-12-27  Schlumberger Technology Corporation  System and method of global optimization using artificial neural networks 
US5479571A (en) *  1991-06-14  1995-12-26  The Texas A&M University System  Neural node network and model, and method of teaching same 
US5847952A (en) *  1996-06-28  1998-12-08  Honeywell Inc.  Nonlinear-approximator-based automatic tuner 
US6047221A (en) *  1997-10-03  2000-04-04  Pavilion Technologies, Inc.  Method for steady-state identification based upon identified dynamics 
US20020072828A1 (en) *  2000-06-29  2002-06-13  Aspen Technology, Inc.  Computer method and apparatus for constraining a nonlinear approximator of an empirical process 
US6453308B1 (en) *  1997-10-01  2002-09-17  Aspen Technology, Inc.  Nonlinear dynamic predictive device 
US20040148144A1 (en) *  2003-01-24  2004-07-29  Martin Gregory D.  Parameterizing a steady-state model using derivative constraints 
US6882992B1 (en) *  1999-09-02  2005-04-19  Paul J. Werbos  Neural networks for intelligent control 
Patent Citations (14)
Publication number  Priority date  Publication date  Assignee  Title 

US5479571A (en) *  1991-06-14  1995-12-26  The Texas A&M University System  Neural node network and model, and method of teaching same 
US5377307A (en) *  1992-10-07  1994-12-27  Schlumberger Technology Corporation  System and method of global optimization using artificial neural networks 
US5847952A (en) *  1996-06-28  1998-12-08  Honeywell Inc.  Nonlinear-approximator-based automatic tuner 
US20020178133A1 (en) *  1997-10-01  2002-11-28  Aspen Technology, Inc.  Nonlinear dynamic predictive device 
US7065511B2 (en) *  1997-10-01  2006-06-20  Aspen Technology, Inc.  Nonlinear dynamic predictive device 
US6453308B1 (en) *  1997-10-01  2002-09-17  Aspen Technology, Inc.  Nonlinear dynamic predictive device 
US6047221A (en) *  1997-10-03  2000-04-04  Pavilion Technologies, Inc.  Method for steady-state identification based upon identified dynamics 
US6882992B1 (en) *  1999-09-02  2005-04-19  Paul J. Werbos  Neural networks for intelligent control 
US20100057222A1 (en) *  2000-06-29  2010-03-04  Aspen Technology, Inc.  Computer method and apparatus for constraining a nonlinear approximator of an empirical process 
US20020072828A1 (en) *  2000-06-29  2002-06-13  Aspen Technology, Inc.  Computer method and apparatus for constraining a nonlinear approximator of an empirical process 
US7330804B2 (en) *  2000-06-29  2008-02-12  Aspen Technology, Inc.  Computer method and apparatus for constraining a nonlinear approximator of an empirical process 
US20080071394A1 (en) *  2000-06-29  2008-03-20  Paul Turner  Computer method and apparatus for constraining a nonlinear approximator of an empirical process 
US7630868B2 (en) *  2000-06-29  2009-12-08  Aspen Technology, Inc.  Computer method and apparatus for constraining a nonlinear approximator of an empirical process 
US20040148144A1 (en) *  2003-01-24  2004-07-29  Martin Gregory D.  Parameterizing a steady-state model using derivative constraints 
Cited By (55)
Publication number  Priority date  Publication date  Assignee  Title 

US20040181498A1 (en) *  2003-03-11  2004-09-16  Kothare Simone L.  Constrained system identification for incorporation of a priori knowledge 
US7502715B1 (en) *  2004-09-21  2009-03-10  Asml Netherlands B.V.  Observability in metrology measurements 
USRE44452E1 (en)  2004-12-29  2013-08-27  Honeywell International Inc.  Pedal position and/or pedal change rate for use in control of an engine 
US20090037003A1 (en) *  2005-07-20  2009-02-05  Jian Wang  Real-time operating optimized method of multi-input and multi-output continuous manufacturing procedure 
US7848831B2 (en) *  2005-07-20  2010-12-07  Jian Wang  Real-time operating optimized method of multi-input and multi-output continuous manufacturing procedure 
US8360040B2 (en)  2005-08-18  2013-01-29  Honeywell International Inc.  Engine controller 
US7562167B2 (en) *  2005-11-14  2009-07-14  Deere & Company  Managing heterogeneous data streams for remote access 
US20070130302A1 (en) *  2005-11-14  2007-06-07  Deere & Company, A Delaware Corporation  Managing heterogeneous data streams for remote access 
US7949417B2 (en)  2006-09-22  2011-05-24  Exxonmobil Research And Engineering Company  Model predictive controller solution analysis process 
WO2009051891A1 (en) *  2007-08-20  2009-04-23  Cleveland State University  Extended active disturbance rejection controller 
WO2009115323A1 (en) *  2008-03-18  2009-09-24  Siemens Aktiengesellschaft  Method for model-based determination of parameters and/or state variables of a piezo-driven setting element 
US8046089B2 (en) *  2008-06-20  2011-10-25  Honeywell International Inc.  Apparatus and method for model predictive control (MPC) of a nonlinear process 
WO2009155260A2 (en) *  2008-06-20  2009-12-23  Honeywell International Inc.  Apparatus and method for model predictive control (MPC) of a nonlinear process 
US20090319059A1 (en) *  2008-06-20  2009-12-24  Honeywell International Inc.  Apparatus and method for model predictive control (MPC) of a nonlinear process 
WO2009155260A3 (en) *  2008-06-20  2010-03-11  Honeywell International Inc.  Apparatus and method for model predictive control (MPC) of a nonlinear process 
US8265854B2 (en)  2008-07-17  2012-09-11  Honeywell International Inc.  Configurable automotive controller 
US20110299050A1 (en) *  2008-09-23  2011-12-08  Asml Netherlands B.V.  Lithographic System, Lithographic Method And Device Manufacturing Method 
US9632430B2 (en) *  2008-09-23  2017-04-25  Asml Netherlands B.V.  Lithographic system, lithographic method and device manufacturing method 
US8594828B2 (en) *  2008-09-30  2013-11-26  Rockwell Automation Technologies, Inc.  System and method for optimizing a paper manufacturing process 
US20100082120A1 (en) *  2008-09-30  2010-04-01  Rockwell Automation Technologies, Inc.  System and method for optimizing a paper manufacturing process 
CN101403893B (en)  2008-11-17  2010-06-02  Hangzhou Dianzi University  Automatic generation method for dyeing formula 
US8620461B2 (en)  2009-09-24  2013-12-31  Honeywell International, Inc.  Method and system for updating tuning parameters of a controller 
US9170573B2 (en)  2009-09-24  2015-10-27  Honeywell International Inc.  Method and system for updating tuning parameters of a controller 
EP2320283A1 (en)  2009-10-30  2011-05-11  Rockwell Automation Technologies, Inc.  Integrated optimization and control for production plants 
US20110106277A1 (en) *  2009-10-30  2011-05-05  Rockwell Automation Technologies, Inc.  Integrated optimization and control for production plants 
US9141098B2 (en)  2009-10-30  2015-09-22  Rockwell Automation Technologies, Inc.  Integrated optimization and control for production plants 
US20110218782A1 (en) *  2010-03-02  2011-09-08  Fisher-Rosemount Systems, Inc.  Rapid process model identification and generation 
US8756039B2 (en)  2010-03-02  2014-06-17  Fisher-Rosemount Systems, Inc.  Rapid process model identification and generation 
US8682635B2 (en)  2010-05-28  2014-03-25  Rockwell Automation Technologies, Inc.  Optimal self-maintained energy management system and use 
US8504175B2 (en) *  2010-06-02  2013-08-06  Honeywell International Inc.  Using model predictive control to optimize variable trajectories and system control 
US20110301723A1 (en) *  2010-06-02  2011-12-08  Honeywell International Inc.  Using model predictive control to optimize variable trajectories and system control 
US8670945B2 (en)  2010-09-30  2014-03-11  Honeywell International Inc.  Apparatus and method for product movement planning to support safety monitoring in inventory management systems 
US20120098481A1 (en) *  2010-10-22  2012-04-26  Nucleus Scientific, Inc.  Apparatus and Method for Rapidly Charging Batteries 
US9397516B2 (en) *  2010-10-22  2016-07-19  Nucleus Scientific, Inc.  Apparatus and method for rapidly charging batteries 
US8984488B2 (en)  2011-01-14  2015-03-17  Honeywell International Inc.  Type and range propagation through dataflow models 
EP2477117A1 (en) *  2011-01-14  2012-07-18  Honeywell International Inc.  Type and range propagation through dataflow models 
US8874242B2 (en)  2011-03-18  2014-10-28  Rockwell Automation Technologies, Inc.  Graphical language for optimization and use 
US9677493B2 (en)  2011-09-19  2017-06-13  Honeywell Spol, S.R.O.  Coordinated engine and emissions control system 
US9650934B2 (en)  2011-11-04  2017-05-16  Honeywell spol. s.r.o.  Engine and aftertreatment optimization system 
US20130289945A1 (en) *  2012-04-27  2013-10-31  U.S.A. As Represented By The Administrator Of The National Aeronautics And Space Administration  System and Method for Space Utilization Optimization and Visualization 
WO2013163840A1 (en) *  2012-05-04  2013-11-07  Zhejiang University  Nonlinear parameter varying (NPV) model identification method 
US9864356B2 (en)  2012-05-04  2018-01-09  Zhejiang University  Nonlinear parameter varying (NPV) model identification method 
EP2693279A1 (en) *  2012-08-01  2014-02-05  Fujitsu Limited  Method and program for generating a simulator 
US20140095129A1 (en) *  2012-09-28  2014-04-03  Fujitsu Limited  Nonlinear term selection apparatus and method, identification system and compensation system 
US9646116B2 (en) *  2012-09-28  2017-05-09  Fujitsu Limited  Nonlinear term selection apparatus and method, identification system and compensation system 
US9864820B2 (en) *  2012-10-03  2018-01-09  Operation Technology, Inc.  Generator dynamic model parameter estimation and tuning using online data and subspace state space model 
US20150100282A1 (en) *  2012-10-03  2015-04-09  Operation Technology, Inc.  Generator dynamic model parameter estimation and tuning using online data and subspace state space model 
EP2728425A1 (en)  2012-11-05  2014-05-07  Rockwell Automation Technologies, Inc.  Online integration of model-based optimization and model-less control 
EP2728426A2 (en)  2012-11-05  2014-05-07  Rockwell Automation Technologies, Inc.  Secure models for model-based control and optimization 
US20140129491A1 (en) *  2012-11-06  2014-05-08  Rockwell Automation Technologies, Inc.  Empirical modeling with globally enforced general constraints 
US9147153B2 (en) *  2012-11-06  2015-09-29  Rockwell Automation Technologies, Inc.  Empirical modeling with globally enforced general constraints 
US9448546B2 (en) *  2013-03-15  2016-09-20  Rockwell Automation Technologies, Inc.  Deterministic optimization based control system and method for linear and nonlinear systems 
CN104049598A (en) *  2013-03-15  2014-09-17  Rockwell Automation Technologies, Inc.  Deterministic optimization based control system and method for linear and nonlinear systems 
EP2778806A1 (en)  2013-03-15  2014-09-17  Rockwell Automation Technologies, Inc.  Deterministic optimization based control system and method for linear and nonlinear systems 
CN103234610A (en) *  2013-05-14  2013-08-07  Hunan Normal University  Weighing method applicable to truck scale 
Also Published As
Publication number  Publication date  Type 

US20080235166A1 (en)  2008-09-25  application 
US8019701B2 (en)  2011-09-13  grant 
US20080208778A1 (en)  2008-08-28  application 
US20150185717A1 (en)  2015-07-02  application 
Similar Documents
Publication  Publication Date  Title 

Barton  Simulation metamodels  
CRAIG et al.  LSOPT user’s manual  
Beck et al.  Updating models and their uncertainties. I: Bayesian statistical framework  
Williams  Prediction with Gaussian processes: From linear regression to linear prediction and beyond  
Kleijnen  Experimental design for sensitivity analysis, optimization, and validation of simulation models  
Hou et al.  From modelbased control to datadriven control: Survey, classification and perspective  
Goodwin et al.  A moving horizon approach to networked control system design  
Wan et al.  An efficient offline formulation of robust model predictive control using linear matrix inequalities  
Kuo et al.  An annotated overview of systemreliability optimization  
Borrelli et al.  Predictive control for linear and hybrid systems  
Mosca  Optimal, predictive, and adaptive control  
Vrabie et al.  Adaptive optimal control for continuoustime linear systems based on policy iteration  
Morari et al.  Model predictive control: past, present and future  
Campos et al.  Generaltospecific modeling: an overview and selected bibliography  
Wu et al.  Output feedback stabilization of linear systems with actuator saturation  
Kerrigan  Robust constraint satisfaction: Invariant sets and predictive control  
Bemporad et al.  Observability and controllability of piecewise affine and hybrid systems  
Allgöwer et al.  Nonlinear predictive control and moving horizon estimation—an introductory overview  
Palit et al.  Computational intelligence in time series forecasting: theory and engineering applications  
Marrel et al.  An efficient methodology for modeling complex computer codes with Gaussian processes  
Kassmann et al.  Robust steady‐state target calculation for model predictive control  
Boyle et al.  Dependent gaussian processes  
Gümüş et al.  Global optimization of nonlinear bilevel programming problems  
US6725208B1 (en)  Bayesian neural networks for optimization and control  
Sala  On the conservativeness of fuzzy and fuzzypolynomial control of nonlinear systems 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: PAVILION TECHNOLOGIES, INC., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAYYARRODSARI, BIJAN;PLUMER, EDWARD;HARTMAN, ERIC;AND OTHERS;REEL/FRAME:015321/0226 Effective date: 2004-04-28 

AS  Assignment 
Owner name: SILICON VALLEY BANK, CALIFORNIA Free format text: SECURITY INTEREST;ASSIGNOR:PAVILION TECHNOLOGIES, INC.;REEL/FRAME:017240/0396 Effective date: 2005-11-02 

AS  Assignment 
Owner name: PAVILION TECHNOLOGIES, INC., TEXAS Free format text: RELEASE;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:020609/0702 Effective date: 2008-02-20 

AS  Assignment 
Owner name: ROCKWELL AUTOMATION PAVILION, INC., TEXAS Free format text: CHANGE OF NAME;ASSIGNOR:PAVILION TECHNOLOGIES, INC.;REEL/FRAME:024741/0984 Effective date: 2007-11-09 

AS  Assignment 
Owner name: ROCKWELL AUTOMATION, INC., WISCONSIN Free format text: MERGER;ASSIGNOR:ROCKWELL AUTOMATION PAVILION, INC.;REEL/FRAME:024755/0492 Effective date: 2008-01-24 

AS  Assignment 
Owner name: ROCKWELL AUTOMATION TECHNOLOGIES, INC., OHIO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROCKWELL AUTOMATION, INC.;REEL/FRAME:024767/0350 Effective date: 2010-07-30 